Artificial Intelligence (AI) is a buzzword, and the field around it can seem bewildering. Medium-sized companies in particular face the challenge of finding the right solutions at the right time, or of building up the resources and expertise themselves.
But the challenge starts early: from product management to research and development, everyone must understand how a benefit-oriented innovation process using AI works and what it means for the product cycle in the short, medium, and long term.
Embedded AI refers to electronic systems in which artificial intelligence operates autonomously and locally. The market potential is huge – in part due to accompanying trends such as (I)IoT with its corresponding connectivity, security, and cloud services. Allied Analytics estimates that the AI semiconductor market will be worth more than $190 billion in 2030. For comparison, the AI-as-a-service (cloud) market is estimated to grow to around $44 billion over the same period.
A market with great growth potential
Practically speaking, embedded AI can be divided into three main application groups: functional innovations, user interaction, and predictive/preventive maintenance. The first enables new functionality that improves or even changes the intended use of a product or process. User interaction forms a second field of application. This ranges from simple voice input (such as KWS, keyword spotting) to gesture recognition to more complex human-machine collaboration such as operator tracking, eye tracking, or workpiece detection. Maintenance topics such as predictive or preventive maintenance – which go beyond simple condition monitoring and provide early, intelligent predictions about specific fault patterns – are currently considered probably the greatest “hidden need” of many product manufacturers.
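To make the predictive-maintenance idea from the third group concrete, here is a minimal sketch: a raw vibration window is condensed into a few features on-device, and a classifier flags windows that match a fault pattern. All names, thresholds, and the simulated signals are illustrative assumptions, not AITAD's actual method; a real system would use a trained model instead of the fixed threshold.

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Condense a raw vibration window into a small feature vector:
    RMS energy, peak amplitude, and dominant frequency bin."""
    rms = np.sqrt(np.mean(window ** 2))
    peak = np.max(np.abs(window))
    dominant_bin = np.argmax(np.abs(np.fft.rfft(window)))
    return np.array([rms, peak, dominant_bin], dtype=np.float32)

def classify(features: np.ndarray, rms_limit: float = 1.5) -> str:
    """Stand-in for a trained model: flag a window whose RMS energy
    exceeds a (hypothetical) learned threshold."""
    return "fault-pattern" if features[0] > rms_limit else "normal"

# Simulated sensor windows: low-energy noise vs. a strong oscillation.
normal = np.random.default_rng(0).normal(0.0, 0.2, 256)
faulty = 3.0 * np.sin(np.linspace(0, 50 * np.pi, 256))

print(classify(extract_features(normal)))  # normal
print(classify(extract_features(faulty)))  # fault-pattern
```

The point of the sketch is the split of work: feature extraction and classification both run locally, so only a compact verdict ever needs to leave the device.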
“Most companies often do not know what capabilities are in their products,” explains Vyacheslav Gromov, general manager of embedded artificial intelligence provider AITAD.
Cloud AI vs. Embedded AI and Edge AI
Cloud AI alone is only a transitional phase, and Gromov is sure the future lies in decentralized processing: “We are working right at the sensor on the circuit board, with huge amounts of data that could not be transferred any further. AI has to process and reduce the data directly at the source in order to uncover the deep correlations we are after.”
Embedded AI makes it possible to process large amounts of data locally, reducing the risk of sensitive data being intercepted or tampered with. This results in higher data and system security. The device does not need access to a high-performance network infrastructure to process its data; it gets by with fewer connectivity components, which reduces production costs. Embedded AI lives on limited resources in terms of power supply (including battery operation), computing power, and storage. These components collect and process data on the spot and can react to it within milliseconds, which is a must in many applications. The device can also analyze data in real time and transmit only what is relevant for further analysis in the cloud (keyword: reducing data volumes).
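The “analyze locally, transmit only what is relevant” pattern can be sketched in a few lines. This is a schematic illustration with made-up names and numbers: the absolute-value “score” stands in for a real on-device model, and the threshold is an assumption.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    timestamp: int
    score: float

def process_locally(samples: List[float], threshold: float = 0.8) -> List[Event]:
    """Run 'inference' on every sample locally and keep only the events
    whose score crosses the threshold -- only these would be uplinked."""
    events = []
    for t, s in enumerate(samples):
        score = abs(s)  # stand-in for a real on-device model
        if score >= threshold:
            events.append(Event(t, score))
    return events

raw = [0.1, 0.05, 0.92, 0.2, 0.85, 0.3]  # six raw sensor readings
uplink = process_locally(raw)
print(f"{len(raw)} samples processed locally, {len(uplink)} sent to cloud")
# → 6 samples processed locally, 2 sent to cloud
```

Raw data never leaves the device; only compact, relevant events do – which is exactly the data-volume and security argument made above.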
Gromov: “Embedded AI – to be distinguished from advanced AI – is a game-changer and a cutting-edge technology that creates industry-wide USPs for first movers. Companies need to rethink product design. Data-driven development requires a long-term perspective, with an organization equipped for improvements, updates, and practical tests. With these AI systems, the system can only be validated efficiently through proofs of concept.”
Companies must rely on individual solutions
The embedded AI market is still largely unoccupied, with more and more isolated solutions and low-threshold offerings being added. “We clearly advise companies against isolated, off-the-shelf solutions. They can only be adapted to actual needs to a limited extent, with smaller or larger compromises. Custom system developments offer far greater scope. This means knowing which AI model fits the product and how it can be implemented efficiently on the hardware, developing the corresponding system components based on the data collected and evaluated, then implementing everything in a prototype and testing it in practice. That seems like a lot of effort at first. But if you consider how long the product will be on the market and the advantages it brings to companies and users, for example in preventive/predictive maintenance, the investment is definitely worth it,” Gromov continues.
A similar picture emerges every day in product development, says Vyacheslav Gromov: “Awareness, resources, and skills are one thing. Beyond that, experience has shown there are some obstacles: most medium-sized companies actually fail at the data stage. Much of the time, data is collected carelessly – with the wrong durations, sampling rates, or accuracies, or with the wrong sensors or in the wrong place. Yet it is precisely the specific requirements outside the standard process that ultimately decide the outcome of a development. For example, if companies attempt to implement a machine learning model using DNNs (deep neural networks) with the usual frameworks, they quickly reach the limits. This only works with experience of one’s own and with semi-automatic, specially adapted tools.”
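Two of the data-collection mistakes named above – wrong sampling rates and wrong capture durations – can be checked mechanically before any data is recorded. The sketch below encodes two textbook rules (the Nyquist sampling limit, and frequency resolution as the reciprocal of window length); the function name and the example figures are illustrative assumptions, not part of any specific toolchain.

```python
def check_acquisition(max_signal_hz: float, sample_rate_hz: float,
                      window_s: float, freq_resolution_hz: float) -> list:
    """Flag two common data-collection mistakes: a sampling rate below
    the Nyquist limit (< 2x the highest frequency of interest), and a
    capture window too short for the required frequency resolution
    (resolution achievable = 1 / window length)."""
    issues = []
    if sample_rate_hz < 2 * max_signal_hz:
        issues.append("sampling rate below Nyquist limit -> aliasing")
    if 1.0 / window_s > freq_resolution_hz:
        issues.append("window too short for required frequency resolution")
    return issues

# Illustrative numbers: a 1.2 kHz fault signature sampled at only 2 kHz,
# with 0.1 s windows, while the analysis needs 5 Hz resolution.
for issue in check_acquisition(1200.0, 2000.0, 0.1, 5.0):
    print(issue)
```

Both checks fail for these numbers, which is exactly the “wrong durations, sampling rates” failure mode described in the quote: the recorded dataset would be useless no matter how good the model is.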