Monday 27 July 2015

ARTificial intelligence --- and Da Vinci can't save us this time. [Part One]

ENIAC, unveiled in 1946 and often called the world's first general-purpose electronic computer, was in itself a work of art. As the decades went by, technology advanced rapidly, and the faster a new technology arrived, the quicker a newer one came along to replace it. This is what futurists like Ray Kurzweil call the "Law of Accelerating Returns."
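To put a toy number on that idea (the doubling period below is my own illustrative assumption, not Kurzweil's actual model), here's a quick Python sketch of what "accelerating returns" does to a century:

```python
# Toy illustration of the Law of Accelerating Returns (my numbers, not
# Kurzweil's exact model): assume the "rate of progress" doubles every
# decade, then compare how much total progress each century delivers.

RATE_DOUBLING_YEARS = 10  # assumed doubling period, purely illustrative

def progress_in(year_start, year_end):
    """Sum yearly progress, where each year's rate is 2^(t / doubling)."""
    return sum(2 ** (t / RATE_DOUBLING_YEARS) for t in range(year_start, year_end))

century_20 = progress_in(0, 100)    # years 0-99 of the model
century_21 = progress_in(100, 200)  # years 100-199 of the model

print(f"Second century / first century progress: {century_21 / century_20:.0f}x")
# -> 1024x: each era packs in vastly more change than the one before it.
```

The point isn't the exact figure; it's that under any compounding rate, each era contains far more change than the last, which is exactly what the analogy below plays with.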

To wrap your head around this idea, consider a brilliant analogy put forth by Tim Urban, the writer behind Wait But Why. It goes something like this:

Imagine taking a time machine back to 1750 - a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with your magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn’t be surprising or shocking or even mind-blowing - those words aren’t big enough. He might actually die.

But here’s the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 and 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750 — transportation, communication, etc.— definitely wouldn’t make him die.
No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.

And then what if, after dying, he got jealous and wanted to do the same thing? If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.



So, advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right? The first ASI (Artificial Super Intelligence) could be lurking right around the corner, ready to pounce on mankind with a baseball bat in hand.

But I think I'm getting ahead of myself. First, let me briefly discuss what an AI is. Frankly, if someone had asked me that two hours ago, I would've been like, "uh, y'know, those cool transformer-like robots we see in movies. Umm, I don't know, man."

So let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
Or take the example of Ava, the female bot in Ex Machina: the "wetware" that doubles as her brain is the AI, while her physical form is its container.

Furthermore, this topic has been studied quite deeply, and AI is commonly categorized into three different chunks:

1). Artificial Narrow Intelligence (ANI): In layman's terms, this is weak AI. It far exceeds what a human can achieve, but only in one particular field. For example, there's an AI that can beat the world champion at chess, but ask it to calculate your tax returns and it will just stare at you blankly. (See the toy sketch after this list.)


2). Artificial General Intelligence (AGI): This is strong AI, or human-level AI: a machine that is as competent as a human at carrying out any intellectual task a human can. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” This covers all the basic functions of the human brain's frontal lobe (something any A-level biology student would know). AGI doesn't exist yet; no crazy scientist has managed to create one.


3). Artificial Super Intelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” ASI is the reason the topic of AI is so controversial, and why AI so often appears in the same sentence as words like "extinction" and "immortality."
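To make "narrow" concrete, here's a minimal illustrative sketch (a toy of my own, not any production system): a tic-tac-toe player that searches the game exhaustively with minimax. It plays its one game perfectly and can do absolutely nothing else, which is ANI in a nutshell:

```python
# A minimal sketch of an ANI: a tic-tac-toe player using minimax search.
# It plays the game perfectly (it can never lose), yet it is useless at
# literally anything else -- narrow intelligence in a nutshell.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustive search; returns (score for X, best move)."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None
        results.append((score, m))
    # X maximizes the score, O minimizes it
    return (max if player == "X" else min)(results)

board = [None] * 9
score, move = minimax(board, "X")
print(f"Best opening move for X: square {move} (game value {score})")
```

Run it and it tells you tic-tac-toe is a draw under perfect play. Ask it about chess, tax returns, or lunch, and it has nothing to say.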


We use AI all the time in our daily lives, but we often don’t realize it’s AI.  John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”

Planet Earth functions around ANI. Your phone is a little ANI factory. When you navigate using your map app, receive tailored music recommendations from YouTube, check tomorrow's weather, set an alarm, or do dozens of other everyday activities, you're using ANI. Google Translate is another classic ANI system, impressively good at one narrow task. Voice recognition is another, and there are a bunch of apps that use those two ANIs as a tag team, allowing you to speak a sentence in one language and have the phone spit out the same sentence in another.
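That tag team is really just a pipeline: the output of one narrow system feeds the input of the next. Here's a hedged sketch of the idea; the helper functions are hypothetical stand-ins, not any real app's API:

```python
# Sketch of the "tag team" idea: chain two narrow AIs -- speech
# recognition and machine translation. Both helpers below are
# hypothetical stand-ins, not real library calls.

def recognize_speech(audio_bytes: bytes) -> str:
    """Hypothetical ANI #1: audio in, source-language text out."""
    raise NotImplementedError("stand-in for a real speech-to-text model")

def translate(text: str, src: str, dst: str) -> str:
    """Hypothetical ANI #2: text in one language, text in another."""
    raise NotImplementedError("stand-in for a real translation model")

def speak_across_languages(audio_bytes: bytes, src="en", dst="fr") -> str:
    # Neither component "understands" the conversation; each does its
    # one narrow job, and the pipeline glues them together.
    return translate(recognize_speech(audio_bytes), src, dst)
```

Chain enough of these narrow pieces together and the result feels smart, even though no single component is.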



[Untimely end of Part One]





This post is nothing more than a long-winded textbook definition of AI. It wasn't supposed to turn out this way; my intention was to churn out my thoughts and opinions on AI, which, mind you, I am very opinionated about. But I needed to get some jargon cleared up first before I could go on a high-speed rant. So yes, I just Peter-Jacksoned my post, though I won't be making more money, much less making any money at all.



