For the public to fully accept driverless vehicles, they will need to be one hundred times safer than ordinary cars. That’s according to a recent report from the UK Government’s Centre for Data Ethics and Innovation (CDEI), which provides expert advice to inform government policy on autonomous driving. Is scepticism toward autonomous solutions unique to cars? Here, Zohar Kantor, VP of sales at self-guided visual inspection software company Lean.AI, explains why quality managers don’t want fully autonomous machine vision.
The policy paper, Responsible Innovation in Self-Driving Vehicles, reveals that we judge driverless cars by much higher standards than we would apply to a human driver. According to the report, “average improvements in road safety, even if they can be clearly demonstrated, will not engender public trust if crashes are seen as the fault of faceless technology companies or lax regulation, rather than fallible human drivers.”
In the field of quality inspection, there is not the same level of fear, but there are some interesting similarities in people’s willingness to trust technologies claiming to be autonomous. Ultimately, most quality managers would prefer to retain a degree of control, rather than fully automate the entire process.
The boundaries of automation
Fully autonomous quality inspection solutions promise the highest level of automation, premised on the idea that the human user is a source of error to be removed. However, surely it is better to make use of the user. After all, they are the expert on the product, so why not have them provide feedback to the model and help it learn?
In my experience of dealing with quality managers and engineers, they often want to know or see how the black box makes its decisions, and they would be frustrated that I could not fully answer the question. When I pointed out that they would never ask a human inspector how they arrived at a decision, even though human inspectors are statistically far more likely to make mistakes, they would protest that I was missing the point. Much like the sceptics in the driving example above, technology that replaces human intuition with artificial intelligence must clear a higher bar of reliability before it is accepted.
However, even if this lack of trust could be gradually overcome, it is questionable whether quality managers really want or need a fully autonomous system. It is more likely that they would prefer we automate approximately 80 per cent of the process, taking the pain and hassle out of building the model by using artificial intelligence, but allowing them to retain control of the remaining 20 per cent through guided learning.
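That 80/20 split can be thought of as a confidence-based triage loop: the model decides the clear-cut cases automatically, and routes uncertain ones to the quality manager, whose corrections feed back into training. The sketch below illustrates the general idea only; the threshold, labels, and scores are hypothetical and not Lean.AI’s actual implementation.

```python
# Hypothetical cut-off for automatic decisions; in practice this would be
# tuned to the use case and the cost of a missed defect.
CONFIDENCE_THRESHOLD = 0.90

def triage(inspections):
    """Split model predictions into auto-accepted results and
    items flagged for human review (the user's 20 per cent)."""
    automated, needs_review = [], []
    for item_id, label, confidence in inspections:
        if confidence >= CONFIDENCE_THRESHOLD:
            automated.append((item_id, label))
        else:
            needs_review.append((item_id, label, confidence))
    return automated, needs_review

# Example: four predictions from a defect-detection model (made-up scores).
predictions = [
    ("part-001", "pass", 0.98),
    ("part-002", "scratch", 0.97),
    ("part-003", "pass", 0.62),  # low confidence -> human decides
    ("part-004", "dent", 0.55),  # low confidence -> human decides
]

automated, needs_review = triage(predictions)
# The user's corrections on the reviewed items would then be added back
# into the training set, so the model improves with each cycle.
```

The design choice here is that the human only sees the cases where their expertise adds value, rather than re-checking every decision the model has already made confidently.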
At Lean.AI, we recognise there is no escaping the need for huge numbers of images, particularly for complex use cases. The question is, how do we make the process of building the model a better experience for the user? Right now, it typically takes two months of creating data sets and tagging the data, just for one camera and one product. It is in this area that AI can make a real difference, by making the journey much quicker and far more bearable for the user.
Quality managers don’t want fully autonomous visual inspection. That may be partly due to a natural distrust of non-human decision-makers, but it is also a reflection of a genuine need to retain a degree of human input in the process. If we can significantly reduce the time it takes to build the model that runs a self-guided machine vision system, investing in AI becomes a no-brainer for quality managers. However, although AI may be in the driving seat, it is the user who steers the vehicle, by providing feedback to the system. By retaining that control, you get the best of both worlds: automating the process, while allowing the user to provide guidance where needed.
Manufacturing & Engineering Magazine | The Home of Manufacturing Industry News