In AI we Trust ... or not?

It is a fact: machines with common sense, machines that actually understand what's going on just as we humans do, are far more likely to be reliable and to produce sensible results than those that rely on statistics alone. But let's be clear: such systems don't exist yet, and there are a few other ingredients we will need to think through together first.

Trustworthy AI has to start with good engineering and business practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions: code that gets a system working immediately, without the critical layer of engineering guarantees that is taken for granted in other fields. Do we currently have design procedures for guaranteeing that a given AI system works within a certain tolerance, the way an auto-part or airplane manufacturer would be required to? No.

The assumption in AI has generally been that if a system works accurately enough to be useful, then that's good enough. That casual attitude is not appropriate when the stakes are high, and that time is over now.

The European Commission has put guidelines in place for trustworthy AI, defining seven key requirements. Do you know them? Have you acted upon them as a business or as an engineer?

This session will bring you up to date on the state of the art regarding the requirements for trustworthy AI. It will highlight the current pain points and deliver insights into potential solutions, so that you can take up your accountability as a business owner or developer, now and in the future.