
Highly capable, lower-cost camera systems are now being used to detect the presence of a human in a hazardous zone and to disable machinery, thereby preventing harm to workers. The same techniques can also be used to identify a foreign object on a production line. Production control teams will find it easier to achieve higher quality levels when they can be 100% sure that the correct components are where they should be, and when predictive maintenance applications are used to proactively detect equipment defects.

While machine vision systems are not new in the industrial environment, their proliferation, the explosive growth in this sector and the plethora of new applications emerging on a daily basis are due largely to breakthroughs in AI processing, especially low-cost, low-power AI inferencing systems that can be applied at the edge – i.e. right beside, or even integrated within, the sensor on the machine itself.

Machine learning typically requires two types of computing workload: training and inferencing. During training, systems learn a new capability by collecting and analyzing large amounts of existing data. This activity is highly compute-intensive and is therefore typically conducted in the data center using high-performance hardware. However, the second phase of machine learning, termed ‘inferencing’, applies the system’s capabilities to new data by identifying patterns and performing tasks. In some cases, designers cannot afford to perform inferencing in the data center because of latency, privacy and cost barriers; instead, they must perform those computational tasks close to the edge. Often, low-cost FPGAs are highly suitable for this activity.
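To make the training/inferencing split concrete, the sketch below shows what local inferencing on an edge device can look like in Python using the TensorFlow Lite interpreter. It is a minimal illustration, not the article's own design: the model file name, the 224x224 input size and the use of a quantized image-classification model are assumptions, and a design targeting a low-cost FPGA would instead use the FPGA vendor's inferencing toolchain.

```python
# Minimal edge-inferencing sketch (illustrative only).
# Assumes a pre-trained, quantized image-classification model exported as
# "model.tflite" with a 224x224 RGB uint8 input; these are assumptions,
# not details taken from the article.
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

# Load the model once at start-up; every inference after that runs locally,
# with no round trip to a data center.
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify(frame: np.ndarray) -> int:
    """Run one inference on a single camera frame (H x W x 3, uint8)."""
    # Add the batch dimension the model expects, then run the interpreter.
    interpreter.set_tensor(input_details[0]["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return int(np.argmax(scores))  # index of the most likely class

# Example: classify a dummy frame standing in for a real camera capture.
dummy_frame = np.zeros((224, 224, 3), dtype=np.uint8)
print("Predicted class index:", classify(dummy_frame))
```

Because the model runs next to the sensor, the decision (for example, "person present in hazard zone") is available within the camera's frame time and no image data ever leaves the machine, which is how edge inferencing addresses the latency and privacy barriers described above.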
