{"id":302938,"date":"2026-02-26T09:33:08","date_gmt":"2026-02-26T09:33:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/302938\/"},"modified":"2026-02-26T09:33:08","modified_gmt":"2026-02-26T09:33:08","slug":"ai-starting-to-simplify-design-of-programmable-logic","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/302938\/","title":{"rendered":"AI Starting To Simplify Design Of Programmable Logic"},"content":{"rendered":"<p>Key Takeaways<\/p>\n<p>AI\/ML and agentic tools are getting better at helping design and compile FPGAs, but downstream programming is slower to benefit.<br \/>\nFPGAs historically have been designed using Verilog or VHDL, but higher-level languages could push more intelligence into compilers.<br \/>\nML tools can also help with mixed-signal co-design by automatically tuning DSP algorithms based on analog simulation data.<\/p>\n<p>AI is beginning to make inroads into designing and managing programmable logic, where it can be used to simplify and speed up portions of the design process.<\/p>\n<p>FPGAs and DSPs are still not as efficient as hard-wired chips, but they remain extremely useful in such markets as life sciences, AI processing, automotive, and 5G\/6G chips, where change is almost constant. Field programmability provides future-proofing for new protocols and standards, as well as for modifications to architectures, and it serves as a sort of blank canvas into which any workload can be inserted.<\/p>\n<p>\u201cThere is a programmable I\/O ring that sits around the chip, and you can take any type of I\/O that can come in and translate it into something that can be made into post-processing and workload-specific engines within that fabric,\u201d said Venkat Yadavalli, head of the Business Management Group at Altera.<\/p>\n<p>But designing FPGAs, eFGPAs, and DSPs is both complex and time-consuming. 
\u201cThere\u2019s a case for FPGAs to be used more widely than just in prototyping, in a particular function,\u201d said Andy Nightingale, vice president of product management and marketing at <a href=\"https:\/\/semiengineering.com\/entities\/arterisip\/\" rel=\"nofollow noopener\" target=\"_blank\">Arteris<\/a>. \u201cIn reducing memory and I\/O bottlenecks, they\u2019re ideal. But it\u2019s still quite a complex job to program FPGAs. You need RTL skills to program an FPGA versus programming software to run on the GPU to do a similar task.\u201d<\/p>\n<p>While FPGA engineers have optimized the way the bit streams go in and out, it requires a different software stack to manage it. \u201cCompanies such as Xilinx (now part of AMD) and Altera have built their core CPU clusters that bring them some more programmability, along with their FPGA fabric,\u201d observed Nandan Nayampally, chief commercial officer at <a href=\"https:\/\/semiengineering.com\/entities\/baya-systems\/\" rel=\"nofollow noopener\" target=\"_blank\">Baya Systems<\/a>. \u201cThey\u2019re trying to solve some of those programming problems, but it\u2019s very difficult to do a generic thing that goes across GPU, CPU, and FPGA. The more different software stacks you have, the more difficult it is to move faster.\u201d<\/p>\n<p>Today, all of that is managed by a software abstraction. \u201cThe programmability is controlled by the software layer that sits on top of it,\u201d said Yadavalli. \u201cFor FPGAs, we have a state-of-the-art tool that can take a workload, synthesize the workload, place the workload, and pack it in a way that you get the most power, area, and the best FPGA target that you can put in to get there. That tooling becomes the biggest competitive moat, and that\u2019s why there are not many people who can break through that, to really go and implement it. 
Chips can be built by anybody, but having a sophisticated software that gets it in [is hard], and that sophistication depends on how wide you want it to be and the different types of programmability that you want to bring in.\u201d<\/p>\n<p><img data-recalc-dims=\"1\" fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-24273161\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-24-at-9.33.26-AM.png\" alt=\"\" width=\"2274\" height=\"892\"  \/><br \/>Fig. 1: FPGA AI development flow. Source: Altera<\/p>\n<p>Looking ahead, agentic AI is expected to help speed up FPGA design, although it won\u2019t necessarily help users program the FPGAs for their products. \u201cWe are excited about AI opportunities that are ahead of us, which allows us to not be a ninja FPGA designer or a ninja ASIC designer,\u201d said Yadavalli. \u201cThere are agents that can convert my coding into vibe coding, where I can go in and input this information through either voice or with diagrams or schematics, you name it, and it goes through multiple rounds to spit out something that is final code. That\u2019s the Nirvana state. We are not here with agentic AI yet, but I\u2019m seeing that as an opportunity that encourages more people to come in and innovate on these platforms.\u201d<\/p>\n<p>AI adds complications<br \/>At the same time, there are challenges for first-time FPGA users, as well as for users who are familiar with FPGAs and are adding AI into the mix. \u201cProgramming FPGAs has gotten easier over time with things like high-level synthesis,\u201d said Rob Bauer, senior manager of the product marketing, adaptive, and embedded group, at AMD. \u201cThere are certain tools that engineering teams are using for algorithmic or C code down into RTL. From a tool flow perspective, we have capabilities such as Vitis AI that help bridge the gap between, say, a PyTorch model into the AI engine. 
That\u2019s critical so users can quickly deploy AI into the silicon. That has certainly got easier.\u201d<\/p>\n<p>Bauer has not yet seen a lot of AI-based RTL-generation code assistance. \u201cBut in terms of taking an AI workload and putting it in the chip, that\u2019s gotten a lot easier as we figure out what models we need to support and then work on compiler optimizations, quantizer, etc., to get down into the chip,\u201d he said.<\/p>\n<p>Others are seeing agents generating RTL. \u201cFor programmable components like FPGAs, AI-native compilers and agents infer intent from high-level code or natural language, generate RTL or HLS, and automatically optimize mapping, pipelining, and timing closure,\u201d said William Wang, founder and CEO of <a href=\"https:\/\/semiengineering.com\/entities\/alpha-design-ai-chipagents\/\" rel=\"nofollow noopener\" target=\"_blank\">ChipAgents<\/a>. \u201cCompilers are shifting to adaptive pipelines that optimize kernels, memory layout, parallelism, and scheduling in real-time as model architectures and operators change.\u201d<\/p>\n<p>Adding a discrete or embedded FPGA to an SoC is not necessarily difficult, but expertise is needed to make it all work \u2014 and with AI. \u201cYour downstream customer has a challenge of, \u2018What used to be purely a software job now involves designing some hardware that\u2019s going to go in that FPGA,\u2019 and that\u2019s a bit daunting,\u201d said Russell Klein, program director at <a href=\"https:\/\/semiengineering.com\/entities\/mentor-a-siemens-business\/\" rel=\"nofollow noopener\" target=\"_blank\">Siemens EDA<\/a>. \u201cSuddenly we\u2019ve got this interest in, \u2018We\u2019ve got this algorithm, it needs to go into that FPGA, and we might not have seasoned hardware designers to do it. 
Can we start looking at taking these algorithms and using the tooling that can take a C function?\u2019 This is instead of speeding up the design work, which is traditionally where high-level synthesis has played a role, so we\u2019re starting to look at some limited Python and being able to take that and compile it into the FPGA fabric. While traditionally you design an FPGA using Verilog or VHDL, there are higher-level approaches that are going to be a lot closer to what software developers need to be able to do to move things into the FPGA fabric and take advantage of that power\/performance capability.\u201d<\/p>\n<p>Another approach is to make the compiler smarter, putting more intelligence into it. \u201cThat way we can remove as much of that hardware design knowledge that\u2019s necessary to be able to program those FPGA devices,\u201d said Klein. \u201cBut we\u2019re not at that point yet. [AMD] isn\u2019t. Nobody in this space has an offering where a software guy can naively turn the crank on the compiler and get output. It requires some understanding of hardware design and data flow. It\u2019s not that software people can\u2019t learn these things. They absolutely can. It is possible for software people to start to look at this technology, start to understand it, and with some training be able to move these algorithms off the CPUs into the programmable logic. In the long term, this just becomes an extension of programming. \u2018I\u2019m going to write a program. Do I compile this to run on the CPU? Do I compile it to run on the GPU? Or do I compile it to run on the FPGA fabric?\u2019 That\u2019s the very long-term vision. Everybody who\u2019s playing in this space is making progress in that direction.\u201d<\/p>\n<p>One challenge is optimizing the FPGA for a specific workload, with the optimal power versus performance versus latency. 
\u201cThat is still a balance, because in the embedded space you want to optimize for cost as much as possible,\u201d said Bauer. \u201cYou could take these models and run them on your laptop, but it\u2019s not going to deliver the performance that you need in an edge system.\u201d<\/p>\n<p><img loading=\"lazy\" data-recalc-dims=\"1\" decoding=\"async\" class=\"alignnone size-full wp-image-24273160\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-24-at-9.32.21-AM.png\" alt=\"\" width=\"2300\" height=\"930\"  \/><br \/>Fig. 2: Pre-processing time in programmable logic vs. processors, where green is low latency, deterministic, and red is high latency, non-deterministic. Source: AMD<\/p>\n<p>There\u2019s a learning curve to deploy the AI, test it, and make sure everything\u2019s working correctly. \u201cThings are moving so quickly that the model you use today and evaluate today could be obsolete,\u201d said Bauer. \u201cThere could be a better model a year down the road, so you need something you can quickly adapt with. Different people are going to struggle with different things depending on the problem they\u2019re trying to solve.\u201d<\/p>\n<p>Shifting workloads and the role of programmability in AI models<br \/>If a designer knows exactly what model they are running, they can design a very efficient AI accelerator to solve that problem, Baya\u2019s Nayampally said. \u201cModels change, so you need some programmability. Then, depending on the architecture of the accelerator, you have to add the software stack that abstracts it enough that people don\u2019t have to relearn every time.\u201d<\/p>\n<p>Because the future remains unknown, some level of programmability is essential. \u201cIf you look at what Nvidia has done, it\u2019s still a GPU with acceleration,\u201d said Nayampally. \u201cThere is a lot of programmability. CUDA is what made them successful. 
How fast you can make that programmability in an optimization is what\u2019s driving success.\u201d<\/p>\n<p>As the landscape continues to evolve, these considerations highlight the dynamic interplay between programmability, efficiency, and adaptability in FPGA and AI system design. Still, while optimization is a key concern, the speed at which AI models are changing is starting to level off.<\/p>\n<p>\u201cFour or five years ago, when people were developing compilers for machine learning or AI workloads, they were fascinated by the possibility of a good compiler that could take any AI model architecture and convert it to an intermediate representation that is very efficient,\u201d said Kexun Zhang, head of research at ChipAgents. \u201cBut the effort going in that direction for smart compilers for AI models has become much less today, because the most important workload, or the largest amount of work for AI, is no longer people developing different model architectures and trying them out one by one. That was when people needed the compilers, because they needed to speed up all these different, strange, random architectures that people came up with.\u201d<\/p>\n<p>One of the most important workloads today is matrix multiplication performed by transformers, or the architecture underlying language models. \u201cAt least for language models, we don\u2019t really need the hardware to be that programmable, because they only need to deal with one type of workload,\u201d said Zhang.<\/p>\n<p>A designer\u2019s choice of programming language can impact efficiency, as well. \u201cThis is a problem in general, if you write your code with high-level languages like Python, you always will lose power,\u201d said Andy Heinig, head of the Department for Efficient Electronics at Fraunhofer IIS\u2019 Engineering of Adaptive Systems Division. 
\u201cThe power efficiency of these languages isn\u2019t as good as if you write in embedded or in C, C++.\u201d<\/p>\n<p>So while high-level languages can make programming easier, they may mean you lose power efficiency. \u201cFrom that perspective, we are quite sure that hardware-software co-design is the way to save most of the energy, but we are not seeing that happen because we need more abstraction to solve these issues,\u201d Heinig noted.<\/p>\n<p>FPGA design developments<br \/>In FPGA design, one challenge lies in creating tools flexible enough to serve vastly different applications. This has been partly solved by accessible and integrated software flows that enable AI developers, FPGA engineers, and embedded or SoC developers to collaborate within a unified design environment, Altera\u2019s Yadavalli noted.<\/p>\n<p>Analysis is getting easier, too. \u201cNew power and thermal analysis tools have become far more precise, providing intelligent recommendations to help designers better manage energy use and thermal constraints throughout the design and board layout process,\u201d Yadavalli said.<\/p>\n<p>While nominally digital, FPGAs are analyzed at a very analog level, similar to memory, CMOS, and image sensors. \u201cFPGA is only digital, but the analysis of how those fuses work and how the resistance and the components are analyzed, because it\u2019s a repetitive structure, can be done on a very deep level on each unit, and then that gets repeated,\u201d said Marc Swinnen, director of product marketing at <a href=\"https:\/\/semiengineering.com\/entities\/synopsys-inc\/\" rel=\"nofollow noopener\" target=\"_blank\">Synopsys<\/a>. \u201cThat analysis has a lot of analog aspects to it. The power delivery, the signal integrity, all of that has analog components to it, especially at high speed. The problem with all these components that have analog aspects of their analysis is that they are very large. 
But analog designs are traditionally small, and the tools are traditionally designed for small designs.\u201d<\/p>\n<p>New cloud-based tools and better infrastructure have enabled FPGA designers to analyze their full designs in full detail like never before, Swinnen said.<\/p>\n<p>Designing and deploying DSPs<br \/>FPGAs aren\u2019t the only programmable hardware option, or the only option challenged by AI. While AI makes it easier to design DSPs, there are rising complexities due to the increase in analog information from real-world sensors.<\/p>\n<p>\u201cMachine learning can help with mixed-signal co-design by automatically tuning DSP algorithms based on analog simulation data,\u201d said Amol Borkar, senior director of product management and marketing, head of computer vision\/AI products at <a href=\"https:\/\/semiengineering.com\/entities\/cadence-design-systems\/\" rel=\"nofollow noopener\" target=\"_blank\">Cadence<\/a>. \u201cThis reduces design cycles and helps engineers find the right balance between analog precision and DSP complexity.\u201d<\/p>\n<p>This complexity is leading to changes in how design teams approach analog and digital. \u201cIn the past, these worlds were separate, but now they need to work together,\u201d Borkar noted.<\/p>\n<p>Power and area tradeoffs are also front and center. \u201cAnalog blocks are efficient but hard to scale, while DSP-based fixes can improve performance but cost more in power and silicon,\u201d Borkar explained. \u201cDesigners need to strike a balance. Do you go with a high-resolution ADC to simplify DSP work, or a lower-resolution ADC and let the DSP do more heavy lifting?\u201d<\/p>\n<p>In edge AI deployment, developers must know what workloads to run on a traditional DSP versus a vector extension, such as Arm\u2019s Helium, optimized for ML on low-power embedded devices. 
On a fitness watch, for example, a high percentage of the audio processing is done on traditional DSP while a significant portion of pre-processing is done on a DSP Helium extension on an Arm Cortex-M55 MCU, explained Steven Tateosian, senior vice president of the IoT, Compute &amp; Wireless Business Unit at <a href=\"https:\/\/semiengineering.com\/entities\/infineon-technologies\/\" rel=\"nofollow noopener\" target=\"_blank\">Infineon Technologies<\/a>. \u201cThe use case for that DSP is different than the audio processing. It becomes more of a pre- and post-filtering use case.\u201d<\/p>\n<p>The same questions apply to vehicles. \u201cAI does not solve your segmentation problem or your system architecture problem,\u201d said Thomas Rosteck, division president and CEO of connected secure systems at Infineon. \u201cIt provides you with a different way to analyze the data and then provide the feedback.\u201d<\/p>\n<p>Memory compilers<br \/>As AI models become more sophisticated and the industry shifts toward a software-first design methodology, advanced memory compilers are increasingly needed.<\/p>\n<p>\u201cChip architects now prioritize software algorithm requirements, especially those for machine learning and data analytics, before finalizing hardware specifications,\u201d said Daryl Seitzer, principal product manager for embedded memory IP at Synopsys. \u201cThe ability to quickly adapt memory architectures to support unique AI algorithms has become a key differentiator for chip designers. This shift drives the need for memory compilers that deliver flexible and scalable embedded memory solutions. 
As AI applications grow in complexity, there is an increasing reliance on specialized data structures resulting in more frequent and parallel access to large datasets, and memory compilers must now support features to accommodate these new software-driven demands.\u201d<\/p>\n<p>The latest generation of memory compilers offers highly flexible configurations, ultra-low voltage support, and a wide range of multi-port options, which gives chip designers confidence that their memory IP can quickly adapt to changes in algorithm requirements. \u201cAI-targeted memory features include transposed dataflows, power optimized designs for applications with data sparsity, and MAC unit pitch-matching,\u201d Seitzer added.<\/p>\n<p>Conclusion<br \/>FPGAs, DSPs, and other programmable chips play an increasingly important part in the chip landscape, where applications demand a complex mix of processors to achieve specific goals. New tools are making it easier for designers and customers to take advantage of programmability as AI models and applications continue to evolve.<\/p>\n<p style=\"font-weight: 400;\">\u201cFPGAs are driven by the technical architects deciding which portions are something that is applicable to FPGA technology and which portions are applicable to GPUs, an ASIC, or another chip,\u201d said Altera\u2019s Yadavalli. \u201cThat upfront discussion is what we call an architectural phase. People will look into it and partition the design to say, Which part of the data plane needs to be organized through FPGAs? Which part of the control plane needs to be set up the way it needs to be? 
Most importantly, does the total cost of ownership of this implementation make sense while balancing the needs of the market and the market evolution that is about to happen?\u201d<\/p>\n<p style=\"font-weight: 400;\">The main parameters that go in favor of FPGAs are I\/O flexibility, deterministic latency or low latency, security flexibility, and the ability to consolidate different workloads that you don\u2019t necessarily have control over. \u201cYou can architect your risk profile in a way that it makes sense from a platform level on how the workload can be easily orchestrated and arbitrated,\u201d said Yadavalli. \u201cThen, eventually, it has to make sense to the software layer that comes on top of it. It\u2019s a good software-hardware co-design.\u201d<\/p>\n<p>Related Reading<br \/><a href=\"https:\/\/semiengineering.com\/programmable-chips-evolve-for-shifting-needs\/\" rel=\"nofollow noopener\" target=\"_blank\">Programmable Chips Evolve For Shifting Needs<\/a><br \/>Designers are utilizing an array of programmable or configurable ICs to keep pace with rapidly changing technology and AI. 
DSPs remain key.<br \/><a href=\"https:\/\/semiengineering.com\/fpgas-find-new-workloads-in-the-high-speed-ai-era\/\" rel=\"nofollow noopener\" target=\"_blank\">FPGAs Find New Workloads In The High-Speed AI Era<\/a><br \/>Growing use cases include life science AI, reducing memory and I\/O bottlenecks, data prepping, wireless networking, and as insurance for evolving protocols.<\/p>\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"Key Takeaways AI\/ML and agentic tools are getting better at helping design and compile FPGAs, but downstream programming&hellip;\n","protected":false},"author":2,"featured_media":302939,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[166081,7606,150854,166082,93763,166083,166084,166085,166086,163757,166087,166088,166089,51611,111,139,69,166090,93759,93762,145],"class_list":{"0":"post-302938","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology","8":"tag-altera","9":"tag-amd","10":"tag-arteris","11":"tag-baya-systems","12":"tag-cadence","13":"tag-chipagents","14":"tag-compilers","15":"tag-dsps","16":"tag-efpgas","17":"tag-fpga","18":"tag-fpga-compilers","19":"tag-fraunhofer-iis-eas","20":"tag-infineon","21":"tag-mentor","22":"tag-new-zealand","23":"tag-newzealand","24":"tag-nz","25":"tag-programmable-logic","26":"tag-siemens-eda","27":"tag-synopsys","28":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/302938","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=302938"}],"version-history":[{"co
unt":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/302938\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/302939"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=302938"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=302938"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=302938"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}