Let’s think intelligently about AI.

Posted on: 17-01-2017 | By: richard.gale | In: Uncategorized



There is currently a daily avalanche of artificial intelligence (AI) related news clogging the internet. Almost every new product, service or feature has an AI, ‘machine learning’ or ‘robo-something’ angle to it. So what is so great about AI? What is different about it, and how can it improve the way we live and work? We think there has been an over-emphasis on ‘machine learning’ that relies on crunching huge amounts of information through a set of algorithms. The actual ‘intelligence’ part has been overlooked: the unsupervised way humans learn, observing the world and modifying our behaviour based on the results of our actions, is missing. Most ‘AI’ tools today work well but have a very narrow range of abilities, with no real capacity to think as creatively or as widely as a human (or animal) brain.

Origins

Artificial Intelligence as a concept, the idea that human thought, learning, reasoning and creativity could be replicated in some form of machine, has been around for hundreds of years. AI as an academic practice really grew out of the early computing concepts of Alan Turing, and the first AI research lab was created at Dartmouth College in 1956. The objective seemed simple: create a machine as intelligent as a human being. The original team quickly found they had grossly underestimated the complexity of the task, and progress in AI moved gradually forward over the next 50 years.

Although there are a number of approaches to AI, all generally rely on learning: processing information about the environment, how it changes, and the frequency and type of inputs experienced. This can result in a huge amount of data to be absorbed. The combination of vast amounts of computing power and storage with massive amounts of information (from computer searches and interaction) has enabled AI, sometimes known as machine learning, to gather pace. There are three main types of learning in AI:

  • Reinforcement learning — This focuses on how an AI tool ought to act in order to maximise its chance of solving a problem. In a given situation, the machine picks an action, or a sequence of actions, and progresses. This is frequently used when teaching machines to play and win games such as chess. One issue is that, in its purest form, reinforcement learning requires an extremely large number of repetitions to achieve a level of success (see the first sketch after this list).
  • Supervised learning — The programme is told the correct answer for a particular input: here is an image of a kettle; the correct answer is “kettle”. It is called supervised learning because the process of an algorithm learning from a labelled training data-set is similar to showing a picture book to a young child: the adult knows the correct answer, and the child makes predictions based on previous examples. This is the most common technique for training neural networks and other machine learning architectures. An example might be: given the descriptions of a large number of houses in your town together with their prices, try to predict the selling price of your own home (see the second sketch after this list).
  • Unsupervised learning / predictive learning — Much of what humans and animals learn, they learn in the first hours, days, months and years of their lives in an unsupervised manner: we learn how the world works by observing it and seeing the results of our actions. No one is there to tell us the name and function of every object we perceive. We learn very basic concepts, such as the fact that the world is three-dimensional, that objects don’t disappear spontaneously, and that unsupported objects fall. We do not yet know how to do this with machines, at least not at the level that humans and animals can, and this lack of techniques for unsupervised or predictive learning is one of the factors limiting the progress of AI (the third sketch after this list shows a much narrower, classic form of learning without labels).
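To make the reinforcement idea concrete, here is a minimal Q-learning sketch on an invented toy problem (a five-cell corridor, not anything from the article): the agent is rewarded only for reaching the right-hand end, and needs many repeated episodes to learn even this trivial policy, which illustrates the “large number of repetitions” issue.

```python
import random

# Minimal Q-learning on a hypothetical 5-cell corridor: start at cell 0,
# reward only for reaching cell 4. Illustrative toy, not a real application.
N_STATES, ACTIONS = 5, (-1, +1)            # cells 0..4; step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

def greedy(s):
    """Best-known action in state s, breaking ties at random."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(500):                       # many repetitions, as noted above
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # nudge the value estimate towards reward + discounted future value
        q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

print({s: greedy(s) for s in range(N_STATES - 1)})  # learned policy: +1 everywhere
```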
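The supervised house-price example above can be made just as concrete. A tiny least-squares fit on labelled (size, price) pairs, with numbers invented purely for illustration:

```python
# Minimal supervised learning sketch: fit price = w * size + b from a few
# labelled examples, then predict an unseen house. Invented numbers.
data = [(50, 150_000), (80, 230_000), (100, 290_000), (120, 350_000)]  # (m2, price)

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Closed-form one-variable least squares: slope w and intercept b.
w = sum((x - mean_x) * (y - mean_y) for x, y in data) \
    / sum((x - mean_x) ** 2 for x, _ in data)
b = mean_y - w * mean_x

print(f"predicted price for a 90 m2 home: {w * 90 + b:,.0f}")
```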
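Human-level unsupervised learning remains unsolved, as the last bullet says, but classic unsupervised techniques do exist in a much narrower form. This k-means clustering sketch (toy points invented here) groups unlabelled data without ever being told a “correct answer”:

```python
import random

# Minimal unsupervised learning sketch: k-means clustering on unlabelled
# toy points. No labels are given; structure emerges from the data alone.
points = [(1.0, 1.0), (1.5, 2.0), (1.0, 1.5), (8.0, 8.0), (8.5, 9.0), (9.0, 8.0)]
k = 2
centres = random.sample(points, k)

for _ in range(10):                      # alternate assignment and update steps
    clusters = [[] for _ in range(k)]
    for p in points:
        # assign each point to its nearest centre (squared Euclidean distance)
        nearest = min(range(k),
                      key=lambda i: (p[0] - centres[i][0]) ** 2
                                    + (p[1] - centres[i][1]) ** 2)
        clusters[nearest].append(p)
    # move each centre to the mean of the points assigned to it
    centres = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
               if c else centres[i]
               for i, c in enumerate(clusters)]

# Typically one centre lands near the (1, 1) group and one near the (8, 8)
# group, though k-means can depend on the random start.
print(centres)
```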

How useful is AI?

We are constantly interacting with AI. A multitude of programmes are working, helping and predicting your next move (or at least trying to). Working out the best route is an obvious example: Google uses feedback from thousands of other live and historic journeys to route you the most efficient way to work, then updates its algorithms based on the results from yours. Ad choices and ‘people also liked/went on to buy’ suggestions all help, in some ways, to make our lives ‘easier’. The way you spend money is predictable, so any unusual behaviour can result in a call from your bank to check a transaction (a simple sketch of this idea follows below). Weather forecasting uses machine learning (and an enormous amount of processing power combined with historic data) to provide ever-improving short and medium term forecasts.
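The bank example hints at simple anomaly detection. Real systems are far more sophisticated, but a hypothetical sketch of the underlying idea is to flag any transaction that sits too many standard deviations from your usual spending (the history, amounts and threshold here are all invented):

```python
# Toy anomaly detection: flag transactions far from the usual spending pattern.
history = [12.50, 9.99, 30.00, 14.20, 22.75, 18.00, 11.40, 25.60]  # past amounts

mean = sum(history) / len(history)
std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5

def looks_unusual(amount, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    return abs(amount - mean) / std > threshold

print(looks_unusual(21.00))   # False: an ordinary purchase
print(looks_unusual(950.00))  # True: worth a call from the bank
```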

One of the limitations of the current reinforcement and supervised models of learning is that, although we can build a highly intelligent device, it has a very narrow focus. The chess computer ‘Deep Blue’ could beat human grandmasters but, unlike them, it cannot drive a car, open a window or describe the beauty of a painting.

What’s next?

So could a machine ever duplicate or move beyond the capabilities of a human brain? The short answer is ‘of course’. Another short answer is ‘never’… Computers and programmes are getting more powerful, sophisticated and consistent each year. The amount of digital data is doubling on a yearly basis, and the reach of devices is expanding at extreme pace. What does that mean for us? Who knows, is the honest answer. AI and intelligent machines will become part of all our daily lives, but the creativity of humans should ensure we partner with them and use them to enrich and improve our lives and environment.

‘Deep Learning’ is the latest buzz term in AI. Some researchers explain this as ‘working just like the brain’; a better explanation, from Yann LeCun (Head of AI Research at Facebook), is ‘machines that learn to represent the world’. In other words, more general-purpose machine learning tools rather than highly specialised, single-purpose ones. We see this as the next likely direction for AI, in much the same way, perhaps, that the general-purpose Personal Computer (PC) transformed computing from dedicated single-purpose machines into multi-purpose business tools.

 

Is a robot also in line for your next interview?

Posted on: 26-02-2016 | By: Maria Motyka | In: Innovation, Uncategorized



The consignment to history of what were once key jobs is, of course, a natural consequence of technological advancement (see our previous article on the future resource market). Replaced by the ‘new’ tech of their time, everything from switchboard and elevator operators to “ice cutters” has its place in the list of professions that have long since left our daily job boards.

Nevertheless, over the past few years there has been increasing coverage of the consequences of new tech and the 4th Industrial Revolution (including by leaders at last month’s World Economic Forum), which is said to be leading to jobs currently held by men and women being filled by machines in pretty much every sector and industry in the global economy.

Thomas Frey, Senior Futurist at the DaVinci Institute and Google’s top-rated futurist speaker, predicts that by 2030 a whopping 2 billion jobs will no longer exist (to put that in context… around half of all the jobs on the planet). Does this mean that we have a 50 per cent chance of becoming jobless within the next few decades because of automation and other new technologies, such as robots?


Worry not! Apparently, the answer is no.

According to Frey, what it means is that our jobs are transitioning, and this is happening “at a higher pace than ever before in history”. The futurist stresses that, due to their catalytic nature, several innovations, including driver-less cars, teacher-less education and 3D-printable houses, are actually going to create completely new industries. This view is supported by a recent report, Fast Forward 2030: The Future of Work and the Workplace, which states that:

“Losing occupations does not necessarily mean losing jobs – just changing what people do”, and by Jonathan Grudin, Principal Researcher at Microsoft Research, who said that “technology will continue to disrupt jobs, but more jobs seem likely to be created”.

As an example, let’s take 3D printing, which Chris Anderson, Managing Editor of Wired magazine, believes will be even bigger than the internet. Frey predicts that, as 3D printing matures, professions such as clothing manufacturing and retailing, as well as the lumber, rock, drywall, shingle and concrete industries, are going to disappear. However, new jobs will become available in the areas of 3D printer design, engineering and manufacturing (although, in one scenario, a 3D printer can print a baby 3D printer); there will be a demand for 3D printer repairmen, product designers, stylists, engineers and ‘ink’ sellers.

Even though robots will fill some jobs, other workers will benefit from the resulting productivity growth and will subsequently have more disposable income, which in turn will increase the need for other jobs. Heidi Shierholz, Chief Economist at the U.S. Labor Department, suggests that the pace of change might at times be exaggerated. During the Will Your Job Disappear by 2024? Bloomberg Benchmark podcast, she stated that we are not actually seeing a massive acceleration in productivity, which would signal that robots and automation have some way to go before removing the levels of workforce that some are predicting. Indeed, while historically productivity has grown at around 2 per cent a year, over the last 10 years it has actually been a little slower.

Are we being overdramatic about the speed of the changes leading to an increased man vs machine conflict in the workplace? All we can say for certain is that, whilst the more extreme scenarios are the ones most likely to make headlines and reach your feeds, sooner or later technology will change your job and those of the next generation.