AI may still be in its infancy. But there’s no question more people are beginning to feel its influence in their everyday lives.

Tolga Kurtoglu, CEO of PARC, and award-winning tech journalist Kara Swisher sat down to discuss the current state of AI and how ethics can steer its future.

Here are the key takeaways from their discussion:


More people are recognizing the potential of AI

AI algorithms can now screen for breast cancer better than some medical professionals. They're helping fix the industrial base in manufacturing and enabling better mobility in smarter cities.

It’s become clear that AI isn’t just another passing tech fad. And people are starting to notice. Individuals are beginning to wonder what they need to do to stay relevant in a world dominated by AI. And corporations are talking about what their AI and machine learning strategy should be.


Nearly anything can be digitised

While a lot of what’s digitised today is more repetitive in nature and takes place in structured environments, algorithms that can perform more dynamic, creative tasks are in the works. Some AI algorithms have even been developed that can compose music.

Today, programmers don’t even have to prescribe what algorithms should execute. All they need to do is feed an algorithm data, and it will come to conclusions on its own. With enough data and computational power, there may be no limit to what can be digitised.
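The idea of feeding an algorithm data rather than prescribing rules can be sketched in a few lines. Below is a minimal, illustrative 1-nearest-neighbour classifier: nothing in the code states the decision rule explicitly; the behaviour emerges from the labelled examples it's given. The sensor readings and labels are hypothetical, chosen only for illustration.

```python
def nearest_neighbour(examples, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(examples, key=lambda ex: dist(ex[0], point))
    return best[1]

# Hypothetical labelled data: (vibration, temperature) readings from a
# machine, tagged "ok" or "fault". No rule is written down anywhere;
# the examples themselves carry the knowledge.
data = [((0.2, 0.1), "ok"), ((0.3, 0.2), "ok"),
        ((0.9, 0.8), "fault"), ((0.8, 0.9), "fault")]

print(nearest_neighbour(data, (0.25, 0.15)))  # -> ok
print(nearest_neighbour(data, (0.85, 0.85)))  # -> fault
```

Real systems use far richer models and far more data, but the principle is the same: the conclusions come from the data, not from hand-written logic.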


The AI “black box” is a growing problem

The next frontier in AI is finding a way to make it more transparent. Historically, AI was only used for objective problems with a single answer, such as stress analysis on a bridge or calculating the aerodynamics of a plane wing.

But today, AI is being asked to take on messier human problems: evaluating teachers, making medical diagnoses, informing sentencing, parole and probation decisions in criminal justice, and filtering job applications in human resources.

Because these algorithms develop their decision rules on their own, there’s no clear line of sight into how they reach a conclusion. And without a way to audit those decisions, bias can go undetected.
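One common way researchers probe such a black box is to perturb each input slightly and watch how the output moves. The sketch below assumes a stand-in `score` function with made-up weights representing any opaque, trained model; the `sensitivity` helper is a hypothetical, minimal finite-difference probe, not a production explainability tool.

```python
def score(features):
    # Stand-in for a trained model. The weights are hypothetical and,
    # like real learned parameters, carry no human rationale on their own.
    w = [0.8, -0.1, 0.5]
    return sum(wi * xi for wi, xi in zip(w, features))

def sensitivity(model, features, eps=0.01):
    """Estimate each feature's influence on the output by nudging it
    slightly and measuring the change (a finite-difference probe)."""
    base = model(features)
    influences = []
    for i in range(len(features)):
        bumped = list(features)
        bumped[i] += eps
        influences.append((model(bumped) - base) / eps)
    return influences

# For this linear stand-in, the probe recovers the hidden weights,
# revealing which inputs drive the decision.
print(sensitivity(score, [1.0, 2.0, 3.0]))
```

Techniques in this spirit (and far more sophisticated ones) are an active research area, precisely because auditing a model's decisions is the first step toward catching bias.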


The push for change needs to come from outside

The online economy requires a deep understanding of how people think. It’s through analysis of human behavioral patterns that companies like Facebook are able to optimise for attention, engagement, virality and speed.

As long as these business models remain what they are, there’s very little incentive for tech companies to change. The makeup of these companies also tends to be very homogeneous.

And algorithm developers aren’t always inclined to think about some of the social implications of what they’re creating.

That’s why outside intervention is essential. Policy makers and stakeholders from the social sciences and corporate governance need to come to the table and be part of the discussion. And while some jurisdictions, such as California and the EU, are pushing for privacy regulation, there’s still a lot of work to be done.