AI is only as smart as our influence

MRO Electric and Supply
By Joseph Zulick*
Friday, 29 May, 2020



The old adage ‘garbage in, garbage out’ holds true even in the smartest of systems.

If you incorrectly define data points, or describe an outcome as bad when it was actually good, then the decision tree that artificial intelligence relies on starts to fall apart. It is unfair to claim that AI or sensor tracking failed because it didn’t deliver the outcome we wanted, especially when we supplied the data and defined the parameters of success.

In a recent ‘Destiny of Manufacturing’ podcast, Danny Schaeffler, President of Engineering Quality Solutions, argued that many companies are not ready for artificial intelligence because they have not yet mastered the systems and data they already have. He felt that many companies have not reached the full potential of their current systems.

Schaeffler went on to question, “Are we prepared to compete in the global market? Technology is pulling the planet closer together and we all have to examine our competitiveness.” Are you examining the total cost of ownership? We do not always take the full price difference into account when we undertake a new job or task. Have we factored in transport costs and, if international, duties, taxes and sometimes tariffs (yes, other countries charge tariffs)? Do we know the implementation costs? All of this information factors into our AI decisions and costs.

Our current systems dictate the questions we will be asking for solutions; they also supply the data points we will use to reach those conclusions. Many plants have stopped using their data collection systems, such as tonnage monitors and die protection. This means they lack the historical trends that AI draws on to reach a conclusion; data is critical to defining a trend and a solution.
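As a rough illustration of that total-cost-of-ownership question, a sketch like the following shows how freight, duties, tariffs and implementation costs change a quoted price. Every figure here is hypothetical, chosen only to make the arithmetic visible:

```python
# Hypothetical total-cost-of-ownership sketch; all figures are illustrative,
# not drawn from the article or any real quote.
unit_price = 10.00               # quoted part price
transport_per_unit = 0.85        # freight, amortised per unit
duty_rate = 0.05                 # import duty applied to the quoted price
tariff_rate = 0.10               # tariff, where one applies
implementation_per_unit = 0.40   # tooling/training cost, amortised per unit

# Landed cost: price plus duty and tariff, plus freight.
landed = unit_price * (1 + duty_rate + tariff_rate) + transport_per_unit
total_cost_of_ownership = landed + implementation_per_unit
print(round(total_cost_of_ownership, 2))  # 12.75
```

A part quoted at $10.00 really costs $12.75 to own in this toy scenario, a 27.5% gap that never appears if the comparison stops at the quoted price.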

Is your company doing enough to meet the demands in the workplace? Schaeffler made an excellent point that as materials change and new products demand complex materials, it’s not just the manufacturing or production departments that need to be educated. The engineers need to research whether the equipment is adequate for the new materials. Your AI is all but starting over if you can’t directly correlate the old material results with the new material expectations. The purchasing department needs to be looking at new material sources; the supplier of simple mild steel may not be the best source for exotic materials. On the quoting side, there is the risk of pricing too low, and all departments will need to be trained on the new technology and materials. The sensors will need new calibration points as these changes occur. The resulting data will need to be compared with the old data, and is it still useful? Unfortunately, when you take on new materials, simulation data may follow a standard linear curve, but more likely it won’t.

Along with the data that needs to be gathered to produce accurate results, we must also temper our expectations. If we think that a first-run part with new tools, cutters, nozzles, etc, along with new materials and limited data, will produce accurate initial data, we are setting ourselves up for disappointment.

Some of this is due to an extension of the curse of knowledge: once we know how something works, we can’t unknow that information. Consequently, we set expectations too high and create unrealistic timelines. While AI and IoT can make life easier and more accurate, they cannot eliminate launch-phase challenges until we develop baseline data. Our influence over AI can be felt in computer simulations and in closed-loop feedback, where confidence in the data can be misleading because the system has confidence in its own calculation.

Let’s take an off-the-floor example. Suppose we have a programmable thermostat that turns the heat on when the temperature drops 2°. The data we monitor is the front doorbell, which trips when the button is pressed. We may assume the visitor is the cause of the drop in temperature; we have data that correlates, when in reality it’s the door opening and closing, whether there is a visitor or not. You can have a high degree of confidence because you show a correlation, but without adequate analysis of what the data means you may never reach the root cause. You may also have too few sensors, or be sensing the wrong thing, to model the situation properly.
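The thermostat example can be sketched as a small simulation. The numbers and probabilities below are hypothetical; the point is that if the doorbell is the only sensor you log, the correlation with the heater looks perfect, even though the door opening is the real cause:

```python
import random

random.seed(0)

# Hypothetical sketch of the thermostat example: every door opening cools
# the room enough (2 degrees) to trip the heater, but only some openings
# come with a doorbell press.
events = []
for _ in range(1000):
    bell_pressed = random.random() < 0.7   # visitors ring; family members don't
    temp_drop = 2                          # the opening, not the bell, cools the room
    heater_on = temp_drop >= 2
    events.append((bell_pressed, heater_on))

# Logging only doorbell events, the "correlation" with the heater is perfect...
bell_events = [heater for bell, heater in events if bell]
print(sum(bell_events) / len(bell_events))   # 1.0

# ...yet the heater also fires on every opening with no bell at all,
# which a doorbell-only sensor never sees.
silent_openings = [heater for bell, heater in events if not bell]
print(sum(silent_openings))                  # hundreds of "unexplained" heater cycles
```

The doorbell data supports the wrong conclusion with complete statistical confidence; only an additional sensor on the door itself would reveal the root cause.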

The old saying is that everything that is measurable isn’t important and everything that is important isn’t measurable. This is a paraphrase of a line often attributed to Einstein: “Not everything that counts can be counted, and not everything that can be counted counts”. I’m sure this was after a grad student asked about their grades.

We are the greatest influence on the accuracy of our results. I don’t want to venture into intentionally manipulating data for personal gain, but at a very minimum, data collection systems such as Linknet have seen formulas edited to produce OEE (overall equipment effectiveness) scores of around 80% when in reality they are closer to 60%.

If you follow the true rules of OEE, machine availability of 100% starts from a base of 365 days a year, 24/7. If you choose not to run a third shift, many companies will still say they’re 85% efficient running two shifts, when in reality the maximum would be closer to 66%.

This is where data gets tricky, and AI is only as good as the data we choose to provide. Many people will say efficiency is a rating of how well you do your process, so two shifts offer a maximum of 100%. The difficulty is that you may never look to fill the pipeline for a third shift if you believe you’re already maxed out at 90% OEE.
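The two-shift arithmetic above can be checked directly. Using the article’s figures (8-hour shifts, two shifts, running every day of the year for simplicity), the base you measure against determines the score:

```python
# Availability sketch using the article's figures: the chosen base
# determines the availability score.
HOURS_PER_YEAR = 365 * 24                 # true 365/24/7 OEE base: 8760 h

shift_hours = 8
shifts_run = 2                            # no third shift
scheduled_hours = shift_hours * shifts_run * 365   # 5840 h, assuming 7-day running

# Measured against the schedule you chose, two fully utilised shifts
# look like 100% availability...
availability_vs_schedule = scheduled_hours / scheduled_hours

# ...but against the 365/24/7 base, the idle third shift caps availability.
availability_vs_calendar = scheduled_hours / HOURS_PER_YEAR
print(f"{availability_vs_calendar:.1%}")  # 66.7%
```

Two shifts out of three is at most two-thirds of calendar time; a reported 85% on a two-shift base quietly hides the entire unstaffed third shift.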

This is just one example of how data can get muddy if you allow it to become that way.

The other problem we have with AI is that, when it comes to data, too often we start with the conclusion and work towards proving our point. This becomes a big problem when you are trying to sell the concept of big data to workers and operators and you have a history of using the data not to bring about improvement, but to assign blame.

AI can provide predictive analysis based on current data and trend examination. These trends can give us that look over the horizon, clear of the forest and the trees. When two roads diverge in the woods, you can now find the path that takes you to the promised land and avoid the one that leads you off the cliff.
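At its simplest, that trend examination is a line fitted through historical readings and extended forward. A minimal sketch, using hypothetical tonnage-monitor readings of the kind mentioned earlier and a plain least-squares fit:

```python
# Minimal trend-extrapolation sketch: fit a straight line to hypothetical
# tonnage-monitor readings and predict the next one, a toy version of the
# "look over the horizon" that trend examination provides.
readings = [102.0, 103.5, 105.1, 106.4, 108.0]   # illustrative tonnage per run
n = len(readings)
xs = list(range(n))
x_mean = sum(xs) / n
y_mean = sum(readings) / n

# Ordinary least-squares slope and intercept.
numerator = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
denominator = sum((x - x_mean) ** 2 for x in xs)
slope = numerator / denominator
intercept = y_mean - slope * x_mean

forecast = slope * n + intercept          # predicted next reading
print(round(forecast, 2))                 # 109.47
```

A rising tonnage trend like this could flag tool wear before a part is scrapped, but only if the monitors are actually left running and feeding data, which is exactly the historical record many plants have stopped collecting.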

If people are to achieve the ultimate goal of improvement and innovation, you have to provide the right balance and set your goal for AI to deliver those solutions. It’s more difficult than ever to compete in this global economy, and to do so we need the advancement and breakthroughs that will come from looking at our problems in a new way and allowing AI to provide the path.

*Joseph Zulick is a writer and manager at MRO Electric and Supply, offering Siemens and FANUC factory automation parts used by engineers worldwide.

Image credit: ©stock.adobe.com/au/sdecoret




  • All content Copyright © 2020 Westwick-Farrow Pty Ltd