
IBM Watson Makes its Move into Health

IBM is one of the biggest players in AI and cognitive computing research (e.g., natural language processing).

You might recall how IBM’s Watson supercomputer completely wrecked its competition in Jeopardy.

Here is an IBM-sponsored video on Watson and its future applications.

As you can imagine, IBM isn’t limiting itself to Jeopardy (which was probably more of a marketing ploy).

In the news recently, IBM reportedly expanded its US federal healthcare practice to include big data analytics to improve clinical care.

Additionally, according to The Economic Times, given Watson’s data analysis capabilities, it might someday replace human doctors.

But I don’t think that argument holds much sway; I share Chomsky’s skepticism about the limits of cognitive research as a simulated mimicry of human ingenuity.




Updates on AI Business Trends & Developments

First things first: Inbenta, a Spanish AI company focused on semantic search, received $2 million in Series A VC funding. One of Inbenta’s taglines on its website is “interactive customer support with AI”; so, according to the funding article, Inbenta operates at the intersection of AI and customer support. The problem with current search engines is that while they return a lot of information, much of it is irrelevant. By using AI, Inbenta helps its clients predict what customers actually want; hence “semantic search”, or relevant search. Semantic search is presumably a subset of the larger deep learning trend reflected in Facebook’s, Google’s, and IBM’s recent acquisitions, also noted earlier in this blog.


This research helps remove one of the obstacles to the development of AI: natural language processing (and speech recognition). As Noam Chomsky notes, and as jotted down in a previous blog post, it is difficult for computers to capture the nuances of language, because human creativity is rooted in, or at least closely connected to, our physical state. So Inbenta, with its core capability of enterprise semantic search, is trying to bridge this gap, at least in terms of natural language processing (using contextual clues to predict relevant searches).
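To make the idea concrete, here is a toy sketch in Python (my own illustration, in no way Inbenta’s actual technology) of ranking documents by similarity to a query instead of by exact keyword match:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, documents: list[str]) -> list[tuple[float, str]]:
    """Rank documents by similarity to the query, best match first."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in documents]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

docs = [
    "reset your account password from the settings page",
    "shipping times vary by region",
    "contact support to reset a forgotten password",
]
results = search("how do i reset my password", docs)
```

A real semantic search engine would layer on synonym handling, contextual disambiguation, and learned representations of meaning; cosine similarity over raw word counts is just the simplest way to show “relevance as similarity” rather than literal matching.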

According to a recent NYT article, researchers at Stanford are developing a biologically-rooted computer chip to further push the bounds of current AI technology.

This discussion of computers as basically a mimicry of human activity and knowledge reminds me of the renowned French sociologist Jean Baudrillard and his discussion of Simulacra and Simulation. Is AI aiming to be a simulation of humanity, or will AI replace reality with its own sense of hyperreality? Not to get too deep into a futurist discussion, but it seems worth considering in terms of future applications of the technology, especially for business decision making, neural nets, and whatnot.

Another interesting development in AI is the intelligent, elastic cloud, which would let firms dedicate server capacity on demand, thereby presumably increasing the speed of information access, gathering, and retrieval.

The Definition of Privacy for Millennials and Beyond

So Facebook is in the news again. Guess what it’s about.

Facebook changes its privacy settings again! VentureBeat:

“To have a good public content systems means having people who want to share content publicly,” said [Michael] Nowak [Facebook product manager]. 

We’re not trying to characterize Facebook as an evil, data-hungry behemoth. Rather, we are trying to remind folks that Facebook earns its revenue from your data. It’s not actively encouraging you to keep your data private; it wants to ensure you’re comfortable so you share more, ideally with the whole wide world.

Facebook’s privacy page. For giggles, here’s Google’s privacy policy.

Also, here are Facebook’s and Google’s research publication pages. There is some fascinating stuff going on, and it is important to note that AI may be more of a catch-all term, because so much of this work exceeds the bounds of AI proper. For example, AI has applications in, and overlaps with, computer vision, data management, data mining, information retrieval, machine learning, security, etc.

It’s pretty awesome (not sarcastically) that they’re so open about their research. It could easily be proprietary, confidential, private, for internal use only.

In other words, they’re really open about how they’re planning on using your private information!

Nonetheless, according to, privacy is:


Oh nice, it looks like I’ve been looking at a lot of car websites as of late. I’m actually a fan of the Cadillac ATS 3.6L V6 as America’s answer to the higher-profit-margin premium German big boys.

The new definition of privacy: privacy settings.

The ultimate privacy setting being, of course, going to Barnes & Noble and reading magazines for free while sipping in-store Starbucks with the rest of the moochers; visiting dealerships; dealing with the salesperson’s (usually male) really bad and recycled jokes; not finding the best price/incentives or consumer satisfaction ratings; etc.


Highlights in Smart Artificial Intelligence: Investing; Automated Warehousing and its Discontents; MIT & Noam Chomsky

On March 24, 2014, WSJ reported that Elon Musk (Tesla), Mark Zuckerberg (Facebook), and Ashton Kutcher (That ’70s Show / Dude, Where’s My Car?), along with a bevy of other investors, made a joint $40 million investment in Vicarious FPC, an artificial intelligence company. WSJ states:

The funding round, the second major infusion of capital for the company in two years, is the latest sign of life in artificial intelligence. Last month, Google acquired another AI company called Deep Mind for $400 million. Vicarious has an ambitious goal: Replicating the neocortex, the part of the brain that sees, controls the body, understands language and does math. Translate the neocortex into computer code and “you have a computer that thinks like a person,” says Vicarious co-founder Scott Phoenix. “Except it doesn’t have to eat or sleep.”

It may be decades before companies like Vicarious can create computers with human-like intelligence. But web outfits like Google, Yahoo, Facebook and others have more immediate uses for artificial intelligence.

You can read the rest of the article for yourself, but it is somewhat speculative, as these things are really hush-hush (corporate trade secrets and all; the minute you patent or copyright something is the minute people start stealing it, and IP becomes an international paper tiger).

However, Facebook recently released a research paper on facial recognition, presumably produced under DeepFace, Facebook’s software research project.

In any case, as the article alludes, it will be decades before there are viable commercial options for this technology, which may also be referred to as deep learning or neurobusiness (refer to the Inaugural Post; it is still in the innovation trigger phase on the Gartner Hype Cycle). So, speculating here, but this funding is probably mostly for R&D, with possible commercial products down the line.

Gartner’s definition of neurobusiness: “Neurobusiness is the capability of applying neuroscience insights to improve outcomes in customer and other business decision situations.”

Source: MIT Technology Review

A couple of years ago, Amazon made some very large investments in IT and smart infrastructure in order to reduce operations and logistics expenses.

First, in 2009 Amazon acquired Zappos, an online retailer. But details on the deal are kind of fuzzy.  NYT reports that it went down for $847 million, while Supply Chain Digital reports that it grossed $1.2 billion.  Either way, Zappos was Amazon’s largest acquisition ever.

Also, Zappos utilized Kiva’s then widely used robots to automate much of its warehouse operations.

Then, in March 2012, Amazon acquired Kiva for $775 million, Amazon’s second largest acquisition to date.

On March 31, 2014, Supply Chain Digest reported that Amazon would keep Kiva unavailable to its competitors for at least two years. In other words, Kiva remains for internal use only.


The downside of warehouse automation, and of automating other labor-intensive products and services, is a drop in demand for human labor.

Gartner paints a dystopic future for 2020:

By 2020, the labor reduction effect of digitization will cause social unrest and a quest for new economic models in several mature economies. Near Term Flag: A larger scale version of an “Occupy Wall Street”-type movement will begin by the end of 2014, indicating that social unrest will start to foster political debate.

Digitization is reducing labor content of services and products in an unprecedented way, thus fundamentally changing the way remuneration is allocated across labor and capital. Long term, this makes it impossible for increasingly large groups to participate in the traditional economic system — even at lower prices — leading them to look for alternatives such as a bartering-based (sub)society, urging a return to protectionism or resurrecting initiatives like Occupy Wall Street, but on a much larger scale. Mature economies will suffer most as they don’t have the population growth to increase autonomous demand nor powerful enough labor unions or political parties to (re-)allocate gains in what continues to be a global economy.

Noam Chomsky

The Atlantic was brave enough to publish a conversation with the legendary or notorious Noam Chomsky, titled “Noam Chomsky on Where Artificial Intelligence Went Wrong.”

The following is a brief excerpt from the introduction, which basically says that while AI innovates by sifting through mountains of data, it cannot capture the biologically rooted creativity (e.g., language) of the human brain.

Skinner’s approach stressed the historical associations between a stimulus and the animal’s response — an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past. Chomsky’s conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations. Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment. The “language faculty,” as Chomsky referred to it, was part of the organism’s genetic endowment, much like the visual system, the immune system and the circulatory system, and we ought to approach it just as we approach these other more down-to-earth biological systems.


In May of last year, during the 150th anniversary of the Massachusetts Institute of Technology, a symposium on “Brains, Minds and Machines” took place, where leading computer scientists, psychologists and neuroscientists gathered to discuss the past and future of artificial intelligence and its connection to the neurosciences.

The gathering was meant to inspire multidisciplinary enthusiasm for the revival of the scientific question from which the field of artificial intelligence originated: how does intelligence work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?

Noam Chomsky, speaking in the symposium, wasn’t so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.
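To see what Chomsky is objecting to, consider the simplest possible “statistical regularities” model: a bigram predictor that forecasts the next word purely as a function of the past, with no internal representation of language at all. (This toy example is mine, not anything presented at the symposium.)

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-pair frequencies: pure association between a word and its successor."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    """Most frequent follower of `word` in the training data, or None if unseen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the dog chased the cat",
    "the dog barked",
    "the cat slept",
]
model = train_bigrams(corpus)
```

Such a model can be surprisingly good at prediction while explaining nothing: it has no notion of nouns, verbs, or grammar, only historical associations, which is exactly the Skinnerian flavor Chomsky finds scientifically hollow.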

Inaugural Post

Welcome to Artificial Intelligence Watch, a blog dedicated to spotting major trends and opportunities at the intersection of business and artificial intelligence, robotics, & intelligent automation. For convenience, this blog will hereafter refer to “artificial intelligence, robotics, & intelligent automation” as AI.

Here is a short video of a real-time conversation between two AI machines that’s sure to simultaneously make you laugh and give you the chills. If you listen to their conversation carefully, you can almost see them working out the problem through emotional intelligence (EQ) algorithms:

Is AI a lucrative industry sector?


Some analysts projected the global market for AI to come in just shy of $1 billion at the end of 2013 and to grow exponentially to over $37 billion by 2015.

In January 2014, Google bought DeepMind, a London-based AI firm, for $400 million. For more information on possible AI applications for Google, please click here.

So what is AI?

According to Gartner (one of the most influential voices in Information Technology), AI is defined as:

A wide-ranging discipline of computer science that at its core seeks to make computers behave more like humans. The term was coined by John McCarthy of the Massachusetts Institute of Technology in 1956. Artificial intelligence (AI) attempts to resolve problems by “reasoning,” similar to the process used by the human mind. AI involves the capability of a machine to learn (to remember results produced on a previous trial and to modify the operation accordingly in subsequent trials) or to reason (to analyze the results produced in similar operations and select the most favorable outcome). Today, applications of artificial intelligence include voice recognition, robotics, neural networks and expert systems (i.e., systems that can make decisions an expert would otherwise have to make to, for example, forecast financial performance, or diagnose illnesses).
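Gartner’s clause about learning (“remember results produced on a previous trial and … modify the operation accordingly in subsequent trials”) is essentially trial-and-error learning. Here is a toy Python sketch of that loop (my own illustration, not Gartner’s):

```python
import random

class TrialLearner:
    """Remembers the result of every trial and favors the option that has worked best."""

    def __init__(self, options, explore=0.1, seed=0):
        self.rng = random.Random(seed)
        self.explore = explore
        self.totals = {o: 0.0 for o in options}  # cumulative reward per option
        self.counts = {o: 0 for o in options}    # number of trials per option

    def average(self, option):
        return self.totals[option] / self.counts[option] if self.counts[option] else 0.0

    def choose(self):
        # Occasionally try something new; otherwise repeat what has worked best so far.
        if self.rng.random() < self.explore:
            return self.rng.choice(list(self.totals))
        return max(self.totals, key=self.average)

    def record(self, option, reward):
        self.totals[option] += reward
        self.counts[option] += 1

# Hypothetical environment in which option "b" always pays off and "a" never does.
def payoff(option):
    return 1.0 if option == "b" else 0.0

learner = TrialLearner(["a", "b"])
for option in ("a", "b"):        # one warm-up trial of each option
    learner.record(option, payoff(option))
for _ in range(100):             # then learn by trial and error
    pick = learner.choose()
    learner.record(pick, payoff(pick))
```

After a few trials the learner settles on the better option, which is the “modify the operation accordingly” half of Gartner’s definition in its most stripped-down form.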

Deloitte has some YouTube videos up about Tech Trends in 2014 — here is their video on Cognitive Analytics:

But why AI?

AI is a dynamic, disruptive technology that is quickly maturing across myriad applications and becoming commercially viable, with high potential adoption rates. AI is coming of age. In other words, for companies, investors, and lead users waiting for the right timing, the time was yesterday!

Gartner argues that current applications of AI are not limited to machines replacing humans; they also include technology augmenting humans, and humans and machines working alongside each other:

The first thing is to acknowledge that artificial intelligence and smart machines – including robots – are going to represent a juggernaut trend for the next decade. Re-evaluate tasks that you thought only humans could do – can you redesign how processes are performed and decisions are made within your enterprise based on new smart technologies? You’ll need to reassess this every year or two as the capabilities improve.

Look in particular at how to balance tasks between humans, software and robots to take best advantage of the abilities of each. There are still many challenging endeavors – including chess – where the best solution is a human working together with a computer.

Hire an ethicist or two, as ethical tradeoffs are going to be one of the few areas that remain firmly in the domain of humans. Computers may be able to answer a question faster and more accurately than any person, but it’s going to be the humans who decide what is the right question to ask.

Source: Gartner, August 2013

For more information on Gartner’s 2013 Hype Cycle, please click here.

Can you talk more about current business trends for AI?

The Washington Post reported last week that VC deals are happening: Google acquired the AI company DeepMind, and Facebook opened up a new AI lab (and also recruited one of the top minds in AI, Yann LeCun of NYU).

Here is an interesting article on how AI is poised to transform e-commerce.

Additionally, according to McKinsey, AI’s role in automating knowledge work is one of ten IT-enabled business trends that will alter the business ecosystem for the next decade. Unfortunately, this high rate of innovation and adoption will inevitably lead to changes in human capital structure, though it will also spur innovations in the management of human capital:

Physical labor and transactional tasks have been widely automated over the last three decades. Now advances in data analytics, low-cost computer power, machine learning, and interfaces that “understand” humans are moving the automation frontier rapidly toward the world’s more than 200 million knowledge workers.

Powerful productivity-enhancing technologies already are taking root. Developments in how machines process language and understand context are allowing computers to search for information and find patterns of meaning at superhuman speed. At Clearwell Systems, a Silicon Valley company that analyzes legal documents for pretrial discovery, machines recently scanned more than a half million documents and pinpointed the 0.5 percent of them that were relevant for an upcoming trial. What would have taken a large team of lawyers several weeks took only three days. Machines also are becoming adept at structuring basic content for reports, automatically generating marketing and financial materials by scanning documents and data.


At information-intensive companies, the culture and structure of the organization could change if machines start occupying positions along the knowledge-work value chain. Now is the time to begin planning for an era when the employee base might consist both of low-cost Watsons and of higher-priced workers with the judgment and technical skills to manage the new knowledge “workforce.” At the same time, business and government leaders will be jointly responsible for mitigating the destabilization caused by the displacement of knowledge workers and their reallocation to new roles. Retraining workers, redesigning education, and redefining the nature of work will all be important elements of this effort.
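McKinsey’s Clearwell example (machines pinpointing the 0.5 percent of documents relevant to a trial) boils down, at its crudest, to scoring every document against a set of search terms. A deliberately simple Python sketch, not Clearwell’s actual method:

```python
def relevant_documents(documents, terms, threshold=2):
    """Flag documents containing at least `threshold` distinct search terms,
    a crude stand-in for pretrial-discovery relevance scoring."""
    terms = {t.lower() for t in terms}
    hits = [doc for doc in documents if len(set(doc.lower().split()) & terms) >= threshold]
    return hits, len(hits) / len(documents)

docs = [
    "memo regarding the merger agreement and escrow terms",
    "lunch menu for the cafeteria",
    "draft escrow agreement for the proposed merger",
    "holiday party planning notes",
]
hits, fraction = relevant_documents(docs, {"merger", "escrow", "agreement"})
```

Production e-discovery systems use machine learning over lawyer-labeled examples rather than bare keyword counts, but the economics are the same: a machine reads everything so that humans only read the fraction that matters.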