AI processing for IoT: where clouds give way to a smart edge

Mouser Electronics (Hong Kong) Ltd

By Paul Golata, Mouser Electronics
Friday, 01 February, 2019



This article examines how AI processing capabilities are moving from the cloud into the smart edge and are providing game-changing, intelligent ways to enable tomorrow’s IoT.

Technological breakthroughs in semiconductor processors have enabled a host of artificial intelligence (AI) capabilities within the cloud computing domain. AI brings intelligence to devices, allowing them to behave in more reliable and higher-performing ways.

Because it operates in the electronic domain, AI is theoretically not bound by human biological limitations on capacity or response time. The cloud provides the location and the processing power needed to handle large amounts of data and to produce innovative solutions that were previously unobtainable. Processors, accelerators, graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) are the semiconductor products that supply this computational power to the cloud.

For tomorrow’s applications, it is no longer sufficient to keep AI only in the cloud. The Internet of Things (IoT) requires that more and more of the processing work happens closer to the end nodes — the location of data (sensors) and actuation (control).

The ability to collect data, store it and then perform analysis on it is providing a myriad of new ways to do business. The need for low-latency, real-time decision-making and response that is imperceptible to humans (<0.05 s) demands that AI processing move from the cloud to a ‘smart’ edge, transforming our present business processes and products into those that the market will want tomorrow.
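As a rough illustration of why that 0.05 s budget matters, the sketch below compares a hypothetical cloud round trip with local inference at the edge. All of the figures (distance, hop count, per-hop delay, compute times) are assumptions chosen for illustration, not measurements from this article; only the approximate propagation speed of light in optical fibre is a physical constant.

FIBRE_KM_PER_S = 200_000  # approximate speed of light in optical fibre

def cloud_round_trip_s(distance_km=3000, hops=8, per_hop_s=0.002, cloud_compute_s=0.010):
    """Propagation there and back, plus per-hop delays and cloud compute time."""
    propagation_s = 2 * distance_km / FIBRE_KM_PER_S
    return propagation_s + hops * per_hop_s + cloud_compute_s

def edge_inference_s(edge_compute_s=0.005):
    """Local inference: no wide-area network traversal at all."""
    return edge_compute_s

print(f"cloud round trip: {cloud_round_trip_s():.3f} s")  # ~0.056 s, already over budget
print(f"edge inference:   {edge_inference_s():.3f} s")    # ~0.005 s, well under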

The edge exists in several contexts, of which the IoT layer is one application. The edge is intermediate between the end-node devices deployed in the field and the top-level cloud computing system. As an intermediate layer, it is now being provisioned with real-time AI capabilities, enabling applications that either do not require the full computational power of the cloud or cannot tolerate its latency.

The edge is becoming smart because it increasingly incorporates intelligence in the form of edge computing AI. Edge computing AI may be deployed at various locations and stages between the cloud computing layer and the end-node layer, including gateways, access points and metro edges.

Inferencing at the edge

Motion pictures provide the illusion of reality because frames pass before the eye faster than the eye can respond, creating the impression of continuous motion with no discontinuity. IoT applications, particularly those that interact directly with humans, require the same kind of continuity to avoid perceptible wait periods. Humans want the devices they electronically interface with to work with them in real time.

To enable future IoT applications, including autonomous vehicles, real-time decision-making is required. If data at the edge must travel to the cloud and wait for features or responses to come back, the decision of whether a particular object in front of a travelling vehicle is a blowing piece of litter, a ball, another vehicle or a person may arrive back at the vehicle too late. Voice-enabled applications now found in many homes must respond within the time expected of a normal human conversation. Detecting errors and defects in industrial contexts may help prevent accidents and hazards. These applications are only the tip of the iceberg; numerous future IoT applications will call for immediate, real-time decision-making.

One method to make this happen is to enable inferencing at the smart edge layer instead of at the higher-level cloud computing layer. Inferencing means drawing or reaching a conclusion from evidence by way of reasoning. AI at the cloud computing level can process large amounts of data over time and draw conclusions from the patterns it discerns. These categorised conclusions can then be pushed down into the smart edge layer, so that upon initialisation the smart edge starts its computational processes with a wide variety of highly intelligent categories already in place. This effectively provides a shortcut, greatly expanding the smart edge’s ability to decide something faster than a human’s biological response time.

To enable this to happen, the smart edge must be able to process fresh data against these categories and quickly infer a correct conclusion. Smart edge inferencing happens through machine learning (ML), an application of AI. ML is a set of computational methods that extracts structured categories and representations from the input data it receives.
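As a minimal sketch of this idea (not taken from the article), the cloud might distil its bulk analysis into per-category feature centroids and push them down to the edge, where each fresh sensor reading is classified by finding the nearest centroid. The category names, feature values and code below are hypothetical illustrations only.

import numpy as np

# Hypothetical categories pushed down from the cloud layer:
# each row is the learned feature centroid for one class.
CLOUD_TRAINED_CENTROIDS = np.array([
    [0.1, 0.9, 0.2],   # "normal operation"
    [0.8, 0.1, 0.7],   # "fault condition"
])
LABELS = ["normal operation", "fault condition"]

def infer_at_edge(sample: np.ndarray) -> str:
    """Classify one fresh sensor sample against the cloud-trained categories."""
    distances = np.linalg.norm(CLOUD_TRAINED_CENTROIDS - sample, axis=1)
    return LABELS[int(np.argmin(distances))]

# Example: a new reading arriving at the smart edge in real time.
print(infer_at_edge(np.array([0.75, 0.15, 0.65])))  # -> "fault condition"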

Processors, accelerators, GPUs and FPGAs at the smart edge can be programmed to employ algorithms that draw these inferences. The cloud layer supports ML by ensuring that the smart edge always contains the most appropriate algorithms, refined by the computational power the cloud applies to the vast amounts of raw data it studies and processes.

The cloud layer is the best level at which to train ML algorithms. The trained ML algorithm is the informational knowledge that passes from the cloud down to the smart edge for it to employ. Trained ML algorithms at the smart edge enable it to process the real-time data it receives in a structured manner and to compare that data against learned models. A more recent, related development is federated learning, an alternative to fully centralised training in which edge devices train on their own local data and send only model updates back to the cloud for aggregation.
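A minimal sketch of federated averaging, assuming a simple linear model and synthetic data, is shown below; it is illustrative only and not a description of any particular vendor’s implementation. Each edge node performs a local gradient step on its private data, and the cloud simply averages the returned weights.

import numpy as np

def local_update(weights, local_x, local_y, lr=0.1):
    """One gradient step of a linear model y ≈ x @ w on a node's private data."""
    grad = local_x.T @ (local_x @ weights - local_y) / len(local_y)
    return weights - lr * grad

def federated_round(global_weights, edge_datasets):
    """Cloud-side aggregation: average the locally updated weights."""
    updates = [local_update(global_weights, x, y) for x, y in edge_datasets]
    return np.mean(updates, axis=0)

# Two hypothetical edge nodes, each with data that never leaves the node.
rng = np.random.default_rng(0)
edges = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, edges)
print(weights)  # shared model learned without centralising the raw data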

Training ML algorithms at the cloud layer requires the highest-performance processors, such as GPUs. However, such processors are not necessarily required at the smart edge, because the most computationally intensive work occurs at the cloud layer before the results move to the edge. The questions that engineers must answer are what level of computational performance each layer actually needs and how quickly the model must be updated with new data; the answers indicate where processors of a given class should reside.

This deductive methodology allows the necessary performance level to be balanced properly against power efficiency for action at the edge, enabling a greater variety of intelligent applications to run without the most costly, power-hungry processing chips (Figure 1).

Figure 1: AI and ML depend on high-performance semiconductor processors. Source: Mouser.
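One common way to realise this split, sketched below, is to train a full-precision model in the cloud and then export a quantised version for a lower-power edge processor. The sketch assumes a TensorFlow/TensorFlow Lite toolchain and a toy, hypothetical model; other frameworks offer analogous export and quantisation paths.

import tensorflow as tf

# Hypothetical model trained in the cloud (training code omitted for brevity).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Convert and quantise for the edge: a smaller, faster, lower-power artefact.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("edge_model.tflite", "wb") as f:
    f.write(tflite_model)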

One common example of AI and ML performing at the smart edge of the IoT is image and video analysis. In this example, AI by way of ML processes a large amount of raw data and extracts usable content to assist with future decision-making. When data arrives in the form of new observations, an ML model processes it to produce a classification or decision (Figure 2).

Figure 2: AI and ML in the smart edge can analyse videos and images to make decisions. Source: Mouser.
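A hedged sketch of the edge side of this pipeline is shown below, loading the hypothetical quantised model exported earlier and classifying individual frames with the TensorFlow Lite interpreter. File names, input shapes and labels are assumptions; a real deployment would feed preprocessed camera frames.

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify_frame(frame: np.ndarray) -> int:
    """Classify one 32x32 RGB frame (already resized and normalised)."""
    interpreter.set_tensor(input_details[0]["index"],
                           frame.astype(np.float32)[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return int(np.argmax(scores))  # index of the most probable class

# Example with a random stand-in for a camera frame:
print(classify_frame(np.random.rand(32, 32, 3)))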

Processing chip architecture

The smart edge generates, handles and utilises large amounts of data, so high-performance processors with excellent power efficiency are desirable. Arm technologies already enable the world’s most popular AI platform, the smartphone, along with ML features such as predictive text, speech recognition and computational photography.

Within Arm’s line of high-performance, 64-bit (with full 32-bit compatibility) Armv8-A processors are several devices such as the multicore Arm Cortex-A53 and Arm Cortex-A72. World-leading semiconductor firms may couple these Arm cores with vector-processing engines in a system on chip (SoC) to support AI processing at the edge.

An important aspect of SoCs is that they frequently integrate functions commonly found outside of the processor, such as application accelerators, buses and interfaces. Through this extended integration, SoCs provide the flexibility and scalability that allow designers to match their connected end-node devices with the data, security and performance characteristics essential to their IoT systems and applications at the smart edge.

Processing chip architectures in the smart edge need to take advantage of next-generation speeds (≥100 Gbps) with excellent packet-processing abilities. Standard, open programming models that employ software-aware architecture frameworks make it easier for designers to configure the product to match their specific network requirements.

A core-agnostic architecture lets designers select the optimum core for their particular application. This concept takes advantage of the multicore trend and extends it, allowing increased performance through the incorporation of either related cores or diversified cores.

Smart edge computing platforms generally contain provisions that support the cybersecurity requirements inherent in IoT applications throughout their entire lifecycle. They also support virtualisation, the abstraction of computing resources, which permits lower costs and complexity in designs. Virtualisation achieves these benefits by treating computational and storage resources as separate entities, then redirecting and managing them for optimal utilisation (Figure 3).

Figure 3: High-performance processors work with AI, electronics and the IoT to enable the smart edge. Source: Mouser.

Another processing chip architecture is the x86 complex instruction set computer (CISC) microprocessor. The x86 architecture has been around for about four decades. Similar in architecture to the central processing units (CPUs) found in desktop computers, these microprocessors incorporate a more advanced feature set to meet the requirements of workstations and networks, making them suitable for smart edge applications. There they provide computational processing power, enhanced connectivity and storage in a high-speed product that does not compromise on keeping every piece of data secure.

The x86 architecture allows for intelligent workload placement, low latency, scalability and extreme responsiveness. These processors help optimise performance, provide analytics and offer accelerated data compression. They come in several performance levels, with the highest level suitable for demanding smart edge applications including real-time analytics, AI and ML, and they perform well when provisioned to deliver real-time AI inferencing results at the smart edge. They come with up to 28 CPU cores and can support up to 12 TB of address space.

Conclusion

The smart edge is emerging from the clouds. AI processing is enabling the smart edge to perform a variety of tasks that were previously relegated to the cloud computing layer. The demand for flexible, low-latency, real-time IoT solutions is part of the ongoing set of requirements now being addressed at the smart edge. The ability to constantly adapt and to collect and analyse data at the edge will allow AI and ML to better drive business transformation and performance.

By managing information from a series of end-node devices and applying ML algorithms to business data assets, IoT applications will be connected in a real-time, optimised way that decreases costs and increases the value delivered to the customer. The future of the IoT includes soaring into the clouds only when necessary, because much of what we accomplish will be performed at the smart edge.
