Forward View
With quarantine lasting longer than anticipated, I've been making good progress on a book project -- Foresight Investing. Just finished writing up a chapter last weekend on the disruptive trend of Automation. I'll share with you now... fun stuff!
The Future of AI and Robotics
 
Early neural networks were developed by Warren McCulloch and Walter Pitts in Chicago in 1943. Research bloomed in the 1960’s, faded, and then saw a resurgence in the 1980’s.

It wasn’t until the development of powerful graphics processing units (GPUs) in the early 2000’s that AI began to make real sense of the world.

Just ten years ago, autonomous vehicles were hitting curbs and had problems yielding to pedestrians. Five years ago, Google’s AI couldn’t tell the difference between a chihuahua and a blueberry muffin. 

Today, Google can make fluid translations between dozens of different languages. The logo for my investment advisory firm was crowd-sourced to a designer on the other side of the planet who didn’t speak a word of English. We communicated by email using Google translate. 

Artificial intelligence has moved past its infancy and is now a curious toddler. 

Which leads me to wonder if artificial intelligence can have a childhood -- of sorts. Maybe AI should have a safe place where it can grow and experiment and finger paint. AI should learn fairness, how to play nice, and share cookies. We had kindergarten; AI needs it, too. 

A few years ago, OpenAI created a universe of game worlds where AI-powered avatars can learn how to navigate the world, explore their environment, and figure out how things work. AI can learn more quickly if it plays in a safe space.

AI has gotten extremely good at games, starting with Pong, Breakout, Chess, and finally, Go – the deepest game in human history. 

At some point, AIs may “learn how to learn” and write their own code. The singularity is the point in time when artificial intelligence surpasses human intelligence. If AI reaches even 50% of adult human intelligence but has the capacity to improve itself, the gap between humanity and AI could close surprisingly fast. Just as significantly, any AI reaching some level of intelligence could easily transfer those capabilities to other AIs. In this way, machine intelligence could “go viral.”
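
To see why the gap could close quickly, here is a toy compounding calculation in Python. The 50% starting point comes from the paragraph above; the 10%-per-cycle improvement rate is an arbitrary assumption for illustration, not a forecast.

```python
# Toy compounding illustration: an AI starting at 50% of adult human
# intelligence that improves itself by 10% per self-improvement cycle.
# The rate and the notion of a "cycle" are assumptions, not forecasts.
level = 0.5      # starting fraction of adult human intelligence
rate = 0.10      # assumed improvement per cycle
cycles = 0
while level < 1.0:
    level *= 1 + rate
    cycles += 1
print(f"Parity reached after {cycles} cycles (level = {level:.2f})")
# With these assumptions, parity arrives after just 8 cycles. If a cycle is
# measured in months rather than years, the gap closes surprisingly fast.
```
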
Mathematician Vernor Vinge predicted that the singularity would occur by 2023 (coming soon), while futurist Ray Kurzweil says that 2045 is more likely. If it happens, it could be within our lifetime.

We still need a few more breakthroughs before we truly have thinking machines. For starters, we don’t completely understand how our own minds work, much less how to program new ones. 

There is considerable debate within AI circles on the best routes for machine learning. Pedro Domingos, a computer science professor at the University of Washington, explains that there are five different “tribes” within the machine learning community, each with its own theory of mind.

One new thing to come out of deep learning is generative design, which involves giving AI a set of constraints and then letting it “build” things. Autodesk’s DreamCatcher software can do this, and the results are often very organic-looking and in many cases use considerably less material than designs made by people. Some of these designs are terrific applications for additive manufacturing and 3D printing, because many of these objects cannot be produced using traditional methods.
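
To make the idea concrete, here is a minimal sketch of a generative-design loop, written in Python with invented dimensions and a toy stress formula (real tools like DreamCatcher use far more sophisticated physics and optimization): propose many random designs, reject any that violate the constraints, and keep the one that uses the least material.

```python
import random

# A toy generative-design loop for a bracket: propose random dimensions,
# reject candidates that violate the strength constraint, keep the lightest.
# The "stress" formula below is a stand-in for a real physics simulation.

LOAD_N = 500.0       # load the bracket must carry (newtons, assumed)
MAX_STRESS = 250.0   # allowable stress for the material (MPa, assumed)

def propose_design():
    """Randomly sample a candidate design within broad bounds (mm)."""
    return {
        "width": random.uniform(5, 50),
        "height": random.uniform(5, 50),
        "length": random.uniform(50, 200),
    }

def stress(design):
    """Toy stand-in for a finite-element analysis."""
    area = design["width"] * design["height"]        # mm^2
    return LOAD_N / area * 100                       # fake MPa

def material_volume(design):
    return design["width"] * design["height"] * design["length"]  # mm^3

best = None
for _ in range(10_000):
    candidate = propose_design()
    if stress(candidate) > MAX_STRESS:
        continue                     # fails the strength constraint
    if best is None or material_volume(candidate) < material_volume(best):
        best = candidate             # lighter design that still holds the load

print("Best design found:", best)
```

Commercial systems replace the random search with topology optimization and real simulation, but the workflow is the same: constraints in, unexpected shapes out.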

Artificial intelligence may appear somewhat alien by human standards. In all likelihood, it could appear geeky, obsessive, and nerdy. This is perfectly fine if you are using AI to solve an engineering problem, but less desirable in a personal digital assistant.

There is a big difference between comprehending language, and truly understanding what we are trying to express.

It is estimated that 7% of communication is based on words, 38% on tone of voice, and 55% on nonverbal behavior (facial expressions and body language). Over time, personal digital assistants may be tuned in to our emotional state through readings of our heartbeat, temperature, skin conductivity, and physical expression. They might even become more attuned to our emotional state than our partners are.

Affective computing is advancing rapidly and already has many real-world applications. Nemesysco specializes in emotion analytics for stress analysis (useful in customer service) and fraud detection (in transactions of any kind). Beyond Verbal is another emotion-analytics company that identifies and categorizes feelings based on the human voice.

We’ve come a long way over the past few decades.  When I was a teenager in the 1980’s, it was fun going to the arcade to frantically mash buttons on a machine. What happens when machines get good at pushing our buttons?

AI and Mental Health

AI and voice assistants have the potential to make mental health counseling relatively universal. We may find ourselves spending more time talking to our smartphones than we are talking to people on our smartphones. Think about that.

Whether it’s through meditation apps or always-on digital therapists monitoring our emotional state, we are going to have lots of help. We’ll be needing it.

AI and Decision-Making

Investing can be stressful, which is why the finance industry was among the first to embrace algorithmic decision-making. Old-school financial analysts have mostly been replaced by the new kids on the block, the “quants.” Many crowded trading floors have since gone silent, and the winners at gathering client assets over the past two decades have been robo-advisors and passively managed index funds.

Bridgewater, the largest hedge fund in the world, wants AI to make 75% of all investment decisions by 2022. More than half of all exchange-traded funds (ETFs) trade the market with a defined set of rules – few human managers needed.

There are still opportunities for active managers. I’ve noticed several instances over the past few years when AI-driven strategies struggled as market conditions deteriorated rapidly. In these times, AI-based strategies become perfectly optimized for a world that no longer exists.

Knowing what you don’t know can be an essential tool. We are years away from that level of self-awareness in AI.

AIs as Customers and Gatekeepers

In the past, we’ve thought about AI primarily on the production side, but we should start thinking about AI as a gatekeeper to the consumer.

As Kim Bates at Brain Reserve says, “The customer journey of the future will begin to emerge as Business to Robot to Consumer (B2R2C).” Virtual assistants will own the consumer relationships of the future.

Google, Amazon, and Facebook have done an amazing job of mining our data to find out what we really want. Increasingly, we may find ourselves marketing to algorithms.

You’ve already done it if you have ever tweaked a website or blog post for search engine optimization (SEO).

As AIs become a bigger part of our lives, they will know more and more about what we like, want, and need. But they’ll get even better with two upcoming boosts in hardware technology…

Quantum Computing

The world around us is an uncertain place. Subatomic particles are both particles and waves, yet we seldom notice when they blink in and out of existence. Similarly, we have difficulty predicting the exact location of an electron and can only probabilistically describe its orbit around a nucleus.

Back in 1981, Nobel prize-winning physicist Richard Feynman suggested that a quantum computer could solve difficult problems by better simulating the uncertainty of nature. For certain classes of problems, quantum computers could do this at warp speed.

Eventually, we’ll go from a science based on slow “trial and error” in the physical world to one in which experiments are fantastically accelerated by being performed in digital simulation, bringing a “golden age of discovery in new materials, new chemicals, and new drugs.”

Neuromorphic Chips

In an effort to simulate the brain, Intel has built networks of artificial neurons into experimental chipsets. The company claims that this breakthrough will enable computers to “think” at the level of a small animal. Gartner predicts that by 2025, these chips could replace the GPUs currently used as the main hardware for AI.

Computer processing today excels at fast and precise application of rules and mathematics, while the brain performs better at tasks involving ambiguity, complexity, and creativity.

Perhaps by “re-wiring” the brains of our technology, we can make it more human.

Robotics

Moving past the brains of AI, let’s take a look at their bodies.

A robot is a machine, usually one programmed by computer, that is capable of carrying out complex actions. The word “robot” comes from the Slavic root for “work” and first appeared in the 1920 play R.U.R. (Rossum’s Universal Robots) by Karel Capek.

In many regards, robots are the second wave of the Industrial Revolution that began in the late 1700’s. The differences between robotics and traditional machinery include flexibility, programmability, decision-making, and mobility.

While Capek originally envisioned robots as being humanoid in form, robots come in a broad range of specifications. This enables them to excel at many tasks.

In this next section, we’ll look at the many industries in which robots may be deployed:

Agriculture: At the turn of the 20th century, half of all Americans considered themselves “farmers.” That number has since fallen to less than two percent, and may decrease even more.

Weeding, harvesting, and planting are labor-intensive activities moving from immigrant labor to robotics. Walmart (WMT) recently filed a patent for robotic bee pollinators to increase production of apple, peach, blueberry, and almond crops. Precision agriculture will be a big trend for the next decade.

Meanwhile, the protein business could shift with the rising popularity of lab-grown meats and plant-based alternatives. Vertical farms will also greatly reduce the distance from farm to table.

Autonomous Vehicles:  Expect self-driving vehicles to become relatively commonplace by 2025.  Every automotive maker has plans for autonomous vehicles – except for just two holdouts. Luxury cars made by Lamborghini and Ferrari will remain human-controlled for the foreseeable future.

This revolutionary technology will have many implications for society:

· Car ownership may become less attractive. If “transportation as a service” picks up, we may need fewer cars and parking spaces.

· Without needing to buy large-capacity SUVs that do everything, we may have MUCH more variety in vehicle design. We can pick and choose the right vehicle for every purpose. Take a two-seat commuter pod to work, or a car with a built-in desk. For date night, you may request a convertible or a sports car. Over the weekend, go to the hardware store in a pick-up truck.

· Cities will need to find new revenue streams without traffic and parking violations. However, they may benefit from a building boom as parking lots and garages are repurposed.

· There may be more cars on the roads – including empty vehicles making trips for self-fueling and picking up new passengers.

· “Sleeper cars” will become a viable alternative to air travel for many destinations. Hotels may feel some competition. Fitness clubs may see a resurgence as a place to freshen up before meetings.

· Highways may become more orderly over the next two decades, and more heavily used. Some urban planners are designing “smart cities” with narrower, pedestrian-friendly streets, as they believe that autonomous vehicles can navigate with greater accuracy.

The biggest implication of autonomous vehicles may be the decline in demand for drivers for Ubers, taxis, trucking, and delivery.

Cybernetics:  Wearable exoskeletons may one day augment the physical capabilities of future workers.  These come in a variety of configurations, including upper-body, lower-body and full-body models. Most of today’s models range in price from about $4,000 to $6,000. Ford, Boeing, Toyota, and others have all augmented a portion of their workforce with this technology.

In the military, exoskeletons are being explored as a way for troops to carry heavy equipment for long periods of time.

“It’s not designed to give you superhuman strength; it’s designed to give you superhuman endurance,” says Zach Haldas of Ekso Bionics.

In the next step, active (powered) exoskeletons may allow for greater strength and mobility.

Drones: One of the first robot stocks that I ever invested in was a company spun out of M.I.T. called iRobot (IRBT). They specialized in building robots deployed for jobs that were “dirty, dull, and dangerous.”

Similar to General Electric, iRobot was an odd blend of consumer appliance company and military defense contractor. Some of their bots would scrub floors… others would remove explosives on the battlefield. Some would simply fascinate pets.

These days, the hottest drone applications are for food delivery and security.
Starship is a private company out of San Francisco that makes delivery bots that have been tested in over 100 cities. Their bots look like oversized beer coolers on wheels. They use machine learning to detect objects and share the sidewalk with pedestrians.

Security cameras are free to roam and report. Knightscope designs and builds security robots for monitoring activity in malls, parking lots, neighborhoods, and offices.

Manufacturing: The big four of factory automation are ABB (ABB), Fanuc (FANUY), Kuka (KUKAY), and Yaskawa (YASKY).

Some industrial robots now cost about $23,000 – roughly the annual wage of an overseas factory worker. That makes for a rapid investment payoff. There are other advantages, too: robots don’t go on sick leave, take vacation, or file complaints with HR.

The number of industrial robots is doubling every 7 years or so, and could well replace most industrial workers by 2040.

Because of high labor costs and large domestic markets, the biggest beneficiaries of automation will be South Korea, Japan, Germany, and the U.S.

The outcome of industrial robotics will likely be higher productivity and wages for those who keep their jobs, and lower costs for consumers. Shareholders may be well-rewarded.

Medicine: The first breakthrough robots in medicine came from Intuitive Surgical (ISRG) a decade ago. These “co-bots” (collaborative robots) assist in precision surgery. As a result, physicians are able to make smaller, more precise incisions that heal more quickly and are less likely to become infected.

At Akara Robotics, “Violet” is an autonomous cleaning robot that rolls through rooms and hallways, sanitizing surfaces with ultraviolet light. It’s a high-powered light bulb on wheels. “Stevie” is a social robot spending time in nursing homes, checking in on residents and making sure that they take their prescriptions.

Telepresence robots may bring family members and physicians to bedsides via videoconferencing. But they could also enable patients to “visit” the world remotely.

In a few years, there will be enormous demand for personal health aides to assist the elderly with activities of daily living – eating, bathing, moving, etc.

Military:  The military has been an early-stage investor in robotics through companies such as iRobot, Lockheed Martin (LMT), and Raytheon (RTN).   In the future we’ll see bots on the ground… and in the skies.

Bots can fly faster and handle higher levels of G-force than human pilots. Within the next three years, the U.S. Air Force’s Skyborg program aims to field autonomous combat drones that can fly alongside (and perhaps eventually substitute for) piloted fighters like the F-16.

Restaurants: Fast-food kitchens now have the Burger Bot (which can serve a fresh-made hamburger every 10 seconds), a burrito bot, and a pizza bot. The University of Texas employs a robotic barista from Briggo that serves coffee to 10,000 students a day.

We are starting to see completely automated restaurants, where food orders are taken via touchscreen and table service is performed by drone. No tipping required…

Retail and Distribution: When Amazon (AMZN) first started as an online bookseller, many of their orders were filled by guys skating around warehouses on rollerblades. Workers are still on wheels, but things have changed.

In 2012, Amazon paid $775 million for an automation company called Kiva Systems. Kiva makes robots for inventory management; these bots roam the floor and move warehouse shelves without human intervention. During one Christmas season, forty-five thousand Kiva robots delivered three hundred items per second.

Amazon also made a $15 million early-stage investment in Rethink Robotics. This company makes a robot that costs about $30,000, has visual recognition, and can pick and pack objects from the shelves delivered by Kiva robots.

Amazon is also pushing towards drone-based delivery, and now has developed more than 20 generations of flying drones.

You can easily see the pattern here… Amazon is on its way to becoming a pioneer in completely automated order fulfillment, from shopping online to delivery at your door.

At warehouses, Amazon now uses 3,000 robots per every 10,000 employees – roughly twice the automation density of the U.S. auto industry.

The company is continuing to pioneer the in-person shopping experience, too. In a prototype Amazon Go store in Seattle, customers scan a code from the Amazon app at the entrance. A combination of cameras and weight sensors tracks purchases. Check-out simply involves walking out the front door with a basket full of stuff, which is charged automatically to the customer’s account. At this point, the future of shopping starts to resemble shoplifting.

Robotic Process Automation: This isn’t so much about physical robots as it is about using artificial intelligence to do standard office work.

RPA is used by human resources departments, and by banks to manage credit limits. Insurance companies are using RPA to manage claims, and airlines can automate refund requests. Bill paying can also be easily automated.

So far, RPA technology is still in the early stages. It can handle tedious tasks to make existing employees more productive. Things won’t get serious until RPA systems start making their own decisions.
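
As a toy illustration of what this looks like in practice, here is a minimal rule-based sketch of one of the tasks mentioned above (triaging airline refund requests), written in Python with invented fields, thresholds, and rules rather than any particular RPA vendor’s API.

```python
from dataclasses import dataclass

# A toy, rule-based refund-triage "bot" -- a stand-in for the kind of tedious,
# repetitive decision that RPA tools automate. Fields, thresholds, and rules
# are invented for illustration.

@dataclass
class RefundRequest:
    ticket_price: float      # amount paid, in dollars
    days_before_flight: int  # how far in advance the cancellation came
    refundable_fare: bool    # was a refundable fare class purchased?

def triage(request: RefundRequest) -> str:
    """Apply simple business rules; escalate anything the rules don't cover."""
    if request.refundable_fare:
        return "approve: full refund"
    if request.days_before_flight >= 14 and request.ticket_price < 300:
        return "approve: travel credit"
    return "escalate to human agent"

# Example usage
requests = [
    RefundRequest(ticket_price=250, days_before_flight=30, refundable_fare=False),
    RefundRequest(ticket_price=900, days_before_flight=2, refundable_fare=False),
]
for r in requests:
    print(triage(r))
```

Today’s RPA mostly encodes rules that people already follow; the more consequential step comes when the software starts writing its own rules.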

The Internet of Things (IoT)

While the internet connected computers and people, the Internet of Things will connect stuff. Cars, televisions, refrigerators, traffic lights, trash cans, and even some frying pans now have a connection to the internet. A few implications:

· Everything is trackable. Few objects get lost or stolen once they are on the internet.

· Everything is chatty, too. Your car, for example, can remind you that it needs its filters changed and oil replaced.

· Everything gets a lot smarter. The frying pan knows how long it takes to poach an egg, even if you don’t.

· Everything wants to buy stuff for you. Your empty refrigerator might remember to order milk and eggs, even if you forget.

By 2030, there will be five times as many objects connected to the internet as people. And these objects will all need sources of power, sensors, and microchips.
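
For a flavor of how simple the device side can be, here is a toy sketch of the “refrigerator that reorders milk” idea from the list above, written in Python. The item names, thresholds, and reorder call are invented placeholders rather than any real appliance or shopping API.

```python
import json

# A toy "smart refrigerator" check: read inventory, report status, and reorder
# staples that run low. Everything here is an invented placeholder.

REORDER_THRESHOLDS = {"milk_liters": 0.5, "eggs": 2}

def read_inventory():
    """Stand-in for real sensor readings (weight pads, door cameras, etc.)."""
    return {"milk_liters": 0.3, "eggs": 6}

def publish_status(inventory):
    """Stand-in for publishing telemetry (e.g., over MQTT) to a home hub."""
    print("status:", json.dumps(inventory))

def reorder(item):
    """Stand-in for calling a grocery-delivery service on the owner's behalf."""
    print(f"placing an order for more {item}...")

def hourly_check():
    inventory = read_inventory()
    publish_status(inventory)
    for item, threshold in REORDER_THRESHOLDS.items():
        if inventory.get(item, 0) < threshold:
            reorder(item)

hourly_check()  # a real device would run this on a timer
```
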

3D Printing

Most people are familiar with the basic technology… think about “printing” entire objects one drop at a time.

While 3D printers once produced objects only in plastic, they can now create things from an amazing range of materials, including metal, glass, concrete, chocolate, and even organic materials.

It opens up all sorts of possibilities for customization and efficiency. In additive manufacturing, you only print what you need. Not a drop is wasted. Design systems will let you “stress-test” and edit objects digitally before you print them into existence.

While current 3D printers are slow, they have the potential to disrupt multiple industries.

Consider housing. Mighty Buildings has been able to satisfy U.S. construction codes while printing single-family homes at one-tenth the labor cost and one-third the total cost.

Using a 3D printer that extrudes fast-drying concrete, the Chinese company WinSun printed ten single-family homes in less than 24 hours, at a cost of less than $5,000 each. They have since moved on to printing a five-story apartment building over the course of a weekend.

With AI and robotics, we get cheaper housing, fresher food, less paperwork, easy access to transportation, and more affordable health care. This is all sounding pretty good, right? We need to talk about just one more thing…

Job Displacement

Kai-Fu Lee, a former top executive for Google in China, worries that AI advances may be more disruptive to workers than the internet was. He estimates that 40 percent of the world’s jobs will be lost to automation in the next 15 years. Lee also says that “AI will make phenomenal companies and tycoons faster, and it will also displace jobs faster, than computers and the internet…. It’s going to be a serious matter for social stability.”

Research consistently finds that the effects of automation on the labor market are most highly concentrated on lower-paid, lower-skilled, and less-educated workers. This, in turn, has the potential to magnify economic inequality. If not handled correctly, automation could be the cause of significant social unrest.

The first wave of industrialization hit assembly-line work and farming. More recently, the post-COVID world has substantially accelerated the replacement of human labor with “clean” automation, particularly in the retail and fast-food industries.

McDonald’s and Taco Bell have made the shift towards touchscreen kiosks for ordering. Checkout lines are automated, and now even the kitchen is being staffed by robots.

In the next wave, we will see more displacement in white-collar jobs. This started with travel agents, but may extend to telemarketing, accounting, retail sales, real estate agents, investment management, and technical writing.

The point is that artificial intelligence and automation are beginning to take over traditionally cognitive work.  We all need to get really comfortable with being a little uncomfortable at all times.

People who keep their jobs are going to feel more stressed, because the routine work has been automated. What remains is all “non-routine” stuff: we’re either “putting out fires” or planning for the future.

We may go through an awkward period over the next two or three decades. Estimates of the share of U.S. jobs threatened by automation over that time range from 9 to 47 percent.

Workers who are low-skilled or who have not re-skilled may find themselves out of work. This could stir up all sorts of social disruption. Be ready for this.

On the positive side, technology has a way of making things cheaper. We might not have jobs, but then again, we might not need as much money to maintain a good quality of life, either.

Technology optimists point to a future where we might have a Universal Basic Income (UBI). Essentially, UBI is a monthly stipend provided by the government to ensure a baseline standard of living, regardless of income.

Erik Brynjolfsson, who researches the digital economy at MIT’s Sloan School of Management, says that “The idea of a basic income is a good one in a world where robots do most of the work, but we probably won’t be there for 30 to 50 years.”

In the meanwhile, there are two ways to adapt:

The first approach is to get more comfortable with technology and learn how to use it to your advantage.

The second approach is to hone your skills at simply being human. For the time being, technology is a lousy replacement for the people in your life.

In order to be successful, you need to be smart, empathetic, or good with your hands, preferably two out of three.

· If you are smart and empathetic, you will be tremendously successful in the professional world.

· If you are empathetic and good with your hands, you will have plenty of opportunities, too.

· If you lack social skills but are good with your hands and reasonably smart, people will still put up with you.

So, there is hope for most of us!

Going Further Out...

Soft Robotics and Flexible Electronics

At MIT, researchers are experimenting with Belousov-Zhabotinsky gel, a possible material for artificial skin. It can feel pressure, receive chemical signals and even self-heal.

Hydrogels often have a water content similar to the human body, about 70%. But they can do some unusual things, too, such as memorize shapes, change color, and walk underwater.

At Shinshu University, a team led by Minoru Hashimoto has developed a wearable PVC gel that contracts with an electrical pulse – much like human muscle. The idea is that flexible “second skin” prosthetics could serve as augmentation for the elderly.

Stretchable digital bandages have been prototyped to track user health and stimulate healing.

But this is all just scratching the surface. A blending of synthetic and biological materials could create entirely new artificial hybrid life forms.

Carmel Majidi at the Carnegie Mellon Soft Machines Lab observes that, once again, the cutting edge of science has already been explored by nature. According to Majidi:

"The energy density of batteries is 10–100 times less than that of the sugars and fats used to power natural muscle. Therefore, replacing battery powered actuators and electronics with biohybrid materials that run on chemical fuel could lead to dramatically lighter and more autonomous soft robots."

In the future, we could have robots that think, understand emotion, feel pain, hunger, and even self-heal. Almost human? 

Decentralized Autonomous Organizations

Futurists are excited about Decentralized Autonomous Organizations (DAOs), which serve many of the functions of traditional corporations but are run and managed either algorithmically or by artificial intelligences using blockchain technologies. DAOs could manage businesses in cyberspace, or in the real world (via the Internet of Things).

This all starts to get fascinating. While DAOs could eventually have corporate “personhood” status, they also have the potential to accumulate significant resources from financial transactions. The result could be a rapid accumulation of wealth controlled by… nobody in particular.

Technology and Transhumanism

Transhumanism is the idea that we can transcend our biological limitations. The last frontier of digitization is… us.
 
Our technology preferences may significantly influence the direction of human evolution over the coming decades. There are at least three major pathways for this:

The first would be biological, mostly by way of gene editing. Over the next 50 years, I would expect this technology to become inexpensive and broadly accessible. By that time, we will have the genome reasonably well-mapped, with a solid understanding of the genetic factors for disease (and perhaps enhancement).

The second would be a more cybernetic approach using embedded or wearable devices: a cooperative merger between man and machine (or AI). We seem to have a cultural preference for this path right now.

In the third route, digital emulators could upload our memories into the cloud, where versions of ourselves could live forever in a virtual reality simulation. Copies of our past experiences could develop and experience a multitude of different lives – even simultaneously. Learning, aging, and personal experience could be greatly accelerated or decelerated, depending on the clock speed of the world.

Eventually our minds could be substrate independent – free to experience lives that are biological, mechanical, or digital. At that point, our bodies become readily interchangeable and gender is completely temporary or irrelevant.

Jim Lee, CFA, CMT, CFP
Founder, StratFI
Upcoming Free Webinar:
Aftershocks and Opportunities - The Future of Financial Services, Banking, and Insurance

Jul 30th, 2020 16.00 – 17.00 UK BST (11.00-12.00 EST)


In this cutting-edge insights session, a panel of industry experts will go beyond the excitement and noise around fintech to explore the future of financial services, banking, and insurance over the next five years. They will explore the emerging societal and customer shifts and needs, potential evolution paths, technology developments, the possible impacts of process automation and digital transformation, new market opportunities, the impacts of digital currencies, regulatory considerations, ethical factors, business models, and the challenges that could shape the scenarios that emerge.

They will discuss how the pandemic is impacting current and future activity and R&D plans in the sector, and how we can lay the foundations for the adoption of exponentially advancing technologies such as artificial intelligence. The panel will also explore what our world could look like when customers can design and customize their own products, and financial services becomes an invisible layer in every activity we undertake.

Rohit Talwar – CEO, Fast Future, UK (Moderator)
James H. Lee – Founder, StratFI, USA
Brett King – Industry Futurist and Entrepreneur, USA
Cliff Moyce – Industry Expert on the Future of Banking, Insurance, and Private Equity, UK
Elaine Mannix – Global Insurance Sector Leader, UiPath, Ireland
Miranda Mantey – Strategy and Foresight Analyst, ATB Financial Innovation Lab, Canada
Stefano Tresca – General Partner, Purplehat Capital Investing in Fintech and Financial Inclusion, UK  

This webinar is sponsored by UiPath - the robotic process automation specialists focused on using the transformative power of automation to liberate the boundless potential of people. www.uipath.com
Disclosure: Information contained herein is for educational purposes only and is not to be considered a recommendation to buy or sell any security or investment advice. Securities listed herein are for illustrative purposes only and are not to be considered a recommendation. The author may personally hold positions in securities mentioned.

Copyright © 2020. All Rights Reserved. Visit us at  www.stratfi.com