Why are humans and artificial intelligence increasingly similar?

tenco 2019-01-26

Algorithms tell us how to think, and that's changing us.

As computers learn how to imitate, are we starting to look more and more like them?

Silicon Valley increasingly predicts how people will respond to emails, how they will react to someone's Instagram photo, and, more and more, which government services people are eligible for. Soon, Google Assistant will be able to place real-time phone calls on people's behalf to book hair appointments.

From hospitals to schools to courts, we've taken algorithms almost everywhere.

We are surrounded by automation systems.

A few lines of code tell us what media to watch, whom to date, and even who the justice system should put in jail.

Is it right that we give so much decision-making and control to these programs?

We are fascinated by mathematical programs because they provide quick and accurate answers to a complex set of questions.

Machine learning systems have been applied in almost every field of our modern society.

How do algorithms affect our daily lives?

In a changing world, machines are learning quickly and brilliantly about the way humans behave, what we like and dislike, and what's best for us.

We now live in a space dominated by predictive technology.

By analyzing massive amounts of data and providing us with immediate and relevant results, algorithms have dramatically changed our lives.

Over the years, we have allowed companies to collect vast amounts of data about us, and to use that data to advise us and decide what is best for us.

Companies like Alphabet (Google's parent company) and Amazon feed their algorithms the data they collect from us, and instruct their AI to use that information to adapt to our needs and become more like us.

However, as we become accustomed to these convenient functions, will we speak and act more like a computer?

"The algorithms themselves are not fair because the people who build the models define success."

-- Cathy O'Neil, data scientist

At the current rate of technological development, it is not hard to imagine a near future in which our behavior is guided or dominated by algorithms.

In fact, this is already happening.

In October, Google launched a feature for its Gmail service called Smart Reply, intended to help users write or respond to emails quickly.

Since then, the feature has taken the Internet by storm, with many criticizing it: its tailored suggestions are invasive and make people sound like machines, and some even argue that its canned responses could eventually change how we communicate, or the conventions of email itself.
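To make the idea concrete, here is a deliberately simplified sketch of a suggested-reply feature. This is not Google's actual system, which uses neural sequence models; it is a toy that ranks a fixed set of canned replies by keyword overlap with the incoming message, and every reply and keyword set below is invented for illustration.

```python
# Toy smart-reply sketch (illustrative only, not Google's Smart Reply):
# rank canned replies by how many of their trigger keywords appear in
# the incoming message, and suggest the top k.
CANNED_REPLIES = {
    "Sounds good, thanks!": {"thanks", "great", "good", "sounds"},
    "Yes, that works for me.": {"available", "meet", "schedule", "works", "time"},
    "Sorry, I can't make it.": {"meet", "schedule", "tomorrow", "time"},
}

def suggest_replies(message: str, k: int = 2) -> list[str]:
    words = set(message.lower().split())
    # Sort replies by keyword overlap with the message, highest first.
    scored = sorted(
        CANNED_REPLIES.items(),
        key=lambda item: len(words & item[1]),
        reverse=True,
    )
    return [reply for reply, _ in scored[:k]]

print(suggest_replies("Can we meet tomorrow to schedule the review?"))
```

Even this crude version shows the dynamic the critics describe: everyone who relies on the suggestions ends up drawing from the same small pool of phrasings.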

The main problem with algorithms is that, as they grow large and complex, they start to negatively affect our society and threaten democracy.

As machine learning systems become more common in many areas of society, will algorithms take over the world and our minds?

Now, let's take a look at what Facebook is doing.

Back in 2015, Facebook redesigned its News Feed to sift through users' subscriptions and act as a personalized newspaper, showing each user content related to what they had previously shared, liked, and commented on.

The problem with "personalized" algorithms is that they put users in "filter bubbles" or "echo chambers".

In real life, most people are less likely to accept ideas they find confusing, annoying, incorrect, or hateful.

In the case of Facebook's algorithms, the company gives users what they want to see, so that each user's stream becomes a unique world, a unique reality in itself.
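A minimal sketch can show how this happens mechanically. The following is not Facebook's actual News Feed algorithm; it is an assumed toy model in which posts whose topics the user has already engaged with are ranked higher, so the feed drifts toward what the user already agrees with. The topics and interaction counts are hypothetical.

```python
# Illustrative engagement-based feed ranking (not Facebook's real system):
# posts on topics the user has interacted with before float to the top,
# and unfamiliar or opposing viewpoints sink to the bottom.
def rank_feed(posts, user_history):
    # user_history maps a topic to the user's count of past likes/shares.
    return sorted(posts, key=lambda p: user_history.get(p["topic"], 0), reverse=True)

history = {"politics_a": 12, "sports": 3}  # hypothetical past engagement
posts = [
    {"id": 1, "topic": "politics_b"},  # the viewpoint the user never engages with
    {"id": 2, "topic": "politics_a"},
    {"id": 3, "topic": "sports"},
]
print([p["id"] for p in rank_feed(posts, history)])  # prints [2, 3, 1]
```

Note the feedback loop: whatever ranks highest is what gets engaged with next, which raises its rank further; the bubble tightens on its own.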

Filter bubbles make public debate increasingly difficult, because from the system's perspective, information and disinformation look exactly the same.

As Roger McNamee recently wrote in Time magazine, "On Facebook, facts are not absolute; they are a choice, an option initially left to users and their friends, but then amplified by algorithms to promote spread and user interaction."

Filter bubbles create the illusion that everyone believes what we believe, or shares our habits.

We already know that on Facebook, algorithms worsen this problem by amplifying polarization, ultimately undermining democracy.

There is evidence that algorithms may have influenced the outcome of the UK's Brexit referendum and the 2016 US presidential election.

"Facebook's algorithms promote extreme information over neutral information, allowing disinformation to override real information and conspiracy theories to override facts."

-- Roger McNamee, Silicon Valley investor

In today's flood of information, sifting through it all is a huge challenge for many people.

If used properly, AI could enhance people's online experiences or help them quickly cope with the growing weight of content and information.

To function properly, however, algorithms need accurate data about what is happening in the real world.

Companies and governments need to ensure that the algorithms' data are unbiased and accurate.

Since nothing is perfect, bias is naturally already present in the data behind many algorithms, posing dangers not only to our online world but to the real one as well.

Stronger regulatory frameworks are necessary so that we do not fall into a technological wilderness.

We should also be very careful about what we feed these algorithms.

There is growing concern about the transparency of algorithms, the ethical implications of their decisions and processes, and the social consequences that affect people's working lives.

For example, the use of artificial intelligence in court could increase bias and lead to discrimination against minorities, because it takes into account "risk" factors such as a person's neighborhood and ties to crime.

These algorithms can make catastrophic systemic mistakes and send innocent people to prison in the real world.
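The proxy problem described above can be sketched in a few lines. This is an invented toy, not any real risk tool (such as COMPAS), and every weight and rate below is a made-up assumption: when a score weighs neighborhood arrest rates, where a defendant lives becomes a proxy for who they are, so two people with identical records get different scores on zip code alone.

```python
# Illustrative sketch of proxy bias in a "risk score" (all numbers invented):
# historical arrest rates by neighborhood stand in for the individual,
# importing past policing patterns into every new assessment.
NEIGHBORHOOD_ARREST_RATE = {"district_a": 0.30, "district_b": 0.05}  # hypothetical

def risk_score(prior_convictions: int, neighborhood: str) -> float:
    # Hypothetical weights: the second term depends only on where the
    # person lives, not on anything they did.
    return 0.5 * prior_convictions + 10.0 * NEIGHBORHOOD_ARREST_RATE[neighborhood]

score_a = risk_score(prior_convictions=0, neighborhood="district_a")
score_b = risk_score(prior_convictions=0, neighborhood="district_b")
print(score_a, score_b)  # district_a scores far higher on an identical record
```

The danger is that such a score looks objective while quietly encoding the biases of the data it was trained on.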

Are we in danger of losing our humanity?

"If we make computers think for us and the underlying input is bad, they will think badly and we may never notice," security expert Bruce Schneier wrote in his book Click Here to Kill Everybody.

Hannah Fry, a mathematician at university college London, led us into a world where computers could operate freely.

In her new book, Hello World: Being Human in the Age of Algorithms, she argues that we, as citizens, should pay more attention to the people behind the keyboards, the ones who write the algorithms.

"We don't have to create a world where machines tell us what to do or how to think, even though we're likely to end up in one," she said.

Throughout the book, she repeatedly asks, "are we in danger of losing our humanity?"

Right now, we're not at the point where humans are excluded.

Our role in the world has not been marginalized and will not be for a long time.

Humans and machines can work together by combining their strengths and weaknesses.

Machines are flawed, and they will make mistakes just as we do.

We should be mindful of how much information, and how much power, we hand over. After all, algorithms are now an inherent part of our lives, and they won't disappear anytime soon.