In shows like HBO’s "Westworld" and AMC’s "Humans," Hollywood pits artificially intelligent robots against humans.

Half a century ago, a science fiction film about a space mission planted the first seeds of doubt about whether the human race could coexist with man-made sentient beings.

“Consider the fictional robot HAL in '2001: A Space Odyssey,' ” said Ken Ford, a computer scientist and founder and CEO of the Florida Institute for Human and Machine Cognition (IHMC) in Pensacola, which has won awards for its robotics innovations.

HAL eventually turned on its human crew in that classic film, sending shivers down the spines of moviegoers everywhere. Some of that wariness about artificial intelligence still exists, but Ford said the fear is unwarranted, and in the case of fictional robots, misplaced.

“Ironically, in fictional accounts of superhuman AI run amok, the problem is often not that the machine was too intelligent, but that it was too human … too emotional,” he said.

The scientist focuses on the advantages of teaming man and machine.

"We are not in competition with our inventions," he said. "Rather than intelligent computers becoming our rivals or doing our thinking for us, they will (and have already) become our amplifiers and teammates. Perhaps AI should stand for Amplified Intelligence.”

As the head of IHMC, Ford knows a thing or two about robots. At the 2015 DARPA Robotics Challenge, the largest international robotics competition of its kind, his team placed first among all American teams, beating the likes of MIT and Carnegie Mellon.

Internationally, IHMC took second place, with robots capable of driving cars, blasting through plasterboard and climbing staircases.

“These robots learn by example and improve through tasks, not unlike humans,” said Brent Venable, associate professor of computer science at Tulane University, who has a joint appointment at IHMC.

Venable, who formerly worked on NASA’s Mars rovers, now has a grant from the Future of Life Institute, which Elon Musk helps fund, to investigate safety in self-driving cars such as Musk’s Teslas.

“These vehicles have far more information than a human driver. With 360-degree sensors, they see things from all perspectives simultaneously, know how fast the cars around them are going and can take evasive action in the event of an impending accident,” she said.

But when artificial intelligence is the driver, ethics programming becomes critical.

Imagine a self-driving vehicle cruising down the road with a passenger in the front seat when a child suddenly darts out in front of it, Venable said. "Its only choices are to hit the child or swerve into a tree, which could injure its passenger. How do you program a computer to deal with these sorts of moral dilemmas?"

"It’s our job as ethicists to figure it out, but it’s not easy,” she said.

Nor is the job without controversy.

When Musk recently announced that he was venturing into the medical arena, building implantable brain electrodes that would help “ordinary” people keep up intellectually with their robotic partners, eyebrows were raised.

Whatever the future holds, there are already stunning advances, some of them from research at IHMC.

For patients with Alzheimer’s disease who might be fearful of authority figures, for example, researchers have created a friendly, chatty computer avatar that looks like a dog, Ford said.

“He interacts with the patient in a two-way chat. The patient tells the friendly dog that the family went on holiday in Morocco. The 'dog' has been programmed to know what a family and a holiday are, then at lightning speed hooks up to Wikipedia to learn about Morocco, then connects instantly to TripAdvisor to find interesting things to do there."

Without missing a beat, the avatar might inquire about whether the patient's family attended a famous festival in Morocco or tried the local cuisine.

“It’s only 600 milliseconds between utterances in a dialogue,” said Venable, “so these avatars are computing on the fly, enabling them to engage in normal conversation.”
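As a rough sketch of what one such conversational turn might look like in Python, assume a tiny hand-made entity list and invented phrasing; only Wikipedia's public summary endpoint below is a real API, and a production avatar would be far more sophisticated.

```python
# Hedged sketch of one avatar turn: spot a place name, look it up,
# and ask a follow-up question, all within a ~600 ms budget.
# KNOWN_PLACES and the reply phrasing are invented for illustration.
import requests

KNOWN_PLACES = {"Morocco", "Spain", "Japan"}  # hypothetical entity list
TURN_BUDGET_SECONDS = 0.6  # the 600 ms between utterances Venable cites

def lookup_summary(place):
    """Fetch a short summary from Wikipedia, or None if too slow."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{place}"
    try:
        resp = requests.get(url, timeout=TURN_BUDGET_SECONDS)
        resp.raise_for_status()
        return resp.json().get("extract")
    except requests.RequestException:
        return None  # stay conversational even if the lookup fails

def reply(utterance):
    """One dialogue turn: find a known place and ask about it."""
    words = (w.strip(".,!?") for w in utterance.split())
    place = next((w for w in words if w in KNOWN_PLACES), None)
    if place and lookup_summary(place):
        return f"How wonderful! What did your family enjoy most in {place}?"
    return "That sounds lovely. Tell me more!"

print(reply("My family went on holiday in Morocco."))
```

The timeout is the key design point: if the lookup cannot return within the conversational budget, the avatar falls back to a generic but natural reply rather than going silent.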

However, with technology moving at warp speed, concerns have arisen about privacy and bias.

“Amazon was recently criticized for using artificial intelligence to determine where it would and would not deliver packages,” said Shawn Rickenbacker, a New York architect and artificial intelligence research fellow at Tulane University who teaches the AI and social innovation course, Humans + Machines.

“Many refer to this as digital redlining, when computers determine whether a package will be delivered to one ZIP code but not another,” said Rickenbacker. “These are loss-prevention models, predicting whether it’s feasible to provide urban services to certain geographical areas … and these are judgment calls. Can we take out racial, economic and other biases? AI is being billed as a tool that can eliminate bias, but humans train these systems, and programmers are not neutral.”
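Rickenbacker's point, that such models inherit the judgments baked into their training data, can be seen in a toy Python sketch. Nothing here resembles Amazon's actual system; the ZIP codes and outcomes are fabricated.

```python
# Toy illustration of "digital redlining": a model trained on past
# delivery decisions simply replays the pattern those decisions contain.
# All ZIP codes and outcomes below are fabricated.
from collections import defaultdict

# Historical decisions: (ZIP code, was delivery offered?)
history = [
    ("10001", True), ("10001", True), ("10001", True),
    ("10451", False), ("10451", False), ("10451", True),
]

past = defaultdict(list)
for zip_code, served in history:
    past[zip_code].append(served)

def predict(zip_code):
    """Majority vote of past decisions for this ZIP; default to serving."""
    votes = past.get(zip_code, [True])
    return sum(votes) / len(votes) >= 0.5

print(predict("10001"))  # True
print(predict("10451"))  # False: the model replays the historical bias
```

No racial or economic field appears anywhere in the data, yet the model still discriminates by neighborhood, because the historical decisions it was trained on did.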

If it seems that the future has quickly descended upon us, consider that we’re unaware of much of it. In a recent online course at Georgia Tech, a "teaching assistant" called Jill Watson turned out to be an AI program. Students who thought they were corresponding with a human were interacting with a machine.

It may sound like a brave new world, but Ford feels the future is bright.

“Calculators didn’t replace mathematicians, nor were authors replaced by word processors," he said. "This is not to say that many jobs will not go away, but new ones will be invented.”