Whenever someone mentions AI, it immediately conjures mental images of movie robots taking over the world and pursuing a sinister agenda to the detriment of humankind. In the wake of issues like the Facebook data scandal, it is no wonder that a common theme among tech giants in 2018 is the importance of ethics in AI. Tech companies are going above and beyond to present themselves as trustworthy, responsible, and transparent. Some have likened the development and maturation of AI to raising a child, but how does one teach a machine right from wrong?

Artificial intelligence is a machine or program's ability to learn and think for itself, developing skills and knowledge as it gains experience. Its purpose is to enhance everyday human activities. Many of us are already using AI without even knowing it: Apple's Siri, Tesla's smart cars, Gmail's Smart Reply, and Netflix's predictive recommendations are just a few examples of AI already in use. Yet who determines the moral compass of that technology?

Maintaining the privacy of the information harvested by AI programs is essential to consumer trust. Consumers need to believe that an AI program or machine is operating in the user's best interests and making sound moral judgments on the user's behalf; otherwise, the product will fail. As an AI's database expands, so do the risks of hacking and security breaches. AI developers have an obligation to ensure that the AI program or machine protects the privacy not just of the user but also of everyone else in the database. IBM Watson CTO Rob High states that trust "comes down to can we identify what sources of information are being used? Have we established the right properties, the right principles in place when we train these systems to use data that is representative of who we are, and the information that we're using."

Responsibility for AI decisions falls not just in the end user's lap but also in the laps of developers and system manufacturers. A prime example is Audi's smart car: the company has announced that it will assume liability for accidents involving its 2019 A8 model while its "Traffic Jam Pilot" automated system is in use. To be ethically responsible, smart car developers also need to build decision-making logic into their autonomous vehicles that chooses vehicular damage over human harm, along the lines of the sketch below.
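
To make that priority concrete, here is a minimal, hypothetical sketch in Python of such a decision rule. It assumes the vehicle can score each candidate maneuver for predicted human harm and predicted vehicle damage; every name and number here is an illustrative assumption, not any manufacturer's actual system.

    # Hypothetical sketch: rank evasive maneuvers so that any option risking
    # human harm is strictly worse than any option that only damages the car.
    # Names and numbers are illustrative assumptions, not a real implementation.

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        predicted_human_harm: float     # 0.0 = no risk to people, 1.0 = certain harm
        predicted_vehicle_damage: float # estimated repair cost in dollars

    def choose_maneuver(options: list[Maneuver]) -> Maneuver:
        """Lexicographic priority: minimize predicted human harm first,
        and only then minimize vehicle damage."""
        return min(options, key=lambda m: (m.predicted_human_harm,
                                           m.predicted_vehicle_damage))

    if __name__ == "__main__":
        options = [
            Maneuver("brake hard", predicted_human_harm=0.0,
                     predicted_vehicle_damage=8000.0),
            Maneuver("swerve onto sidewalk", predicted_human_harm=0.6,
                     predicted_vehicle_damage=500.0),
        ]
        # Prints "brake hard": costly to the car, but no risk to people.
        print(choose_maneuver(options).name)

The key design choice is the lexicographic ordering: no amount of avoided vehicle damage can outweigh even a small predicted risk to a person.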

“Fundamentally, machines need to be designed to work with humans,” says Michael Biltz, managing director of Accenture Technology Vision. “AI should put people at the center, augmenting the workforce by applying the capabilities of machines so people can focus on higher-value analysis, decision-making, and innovation.”

