
Cobots

Collaborative robotics: Building robots that can safely work with humans

Have you ever seen a manufacturing robot in action? Watching these machines whisk around something like a two-ton car makes it clear, at a visceral level, why there are fences around them. You definitely don't want to get in the way.

By contrast, collaborative robotics is an emerging technology that enables robots to work in close proximity with humans. This type of robotics has the potential to transform manufacturing, logistics, agriculture, medicine, and other fields.

A quick note on terminology: collaborative robots are also called cobots.

In order to explore some recent developments in cobots, I used the following Mergeflow data overview snapshot (click on the image to see the snapshot):

A data snapshot on collaborative robotics, or cobots, featuring venture capital investments, market estimates, patents, R&D, and other information.

I started by looking at some companies from this snapshot. (Actually, I also went a bit further than the snapshot and looked at companies and R&D in more detail. You can do this too, if you like, by signing up for a 14-day free trial of Mergeflow.)

Rapid Robotics: Machine operating cobots for rent

Rapid Robotics makes cobots that perform machine operator tasks. And they are called "Rapid Robotics" because their cobots can be deployed very quickly, within hours. This is orders of magnitude faster than having to program a more traditional robot.

Rapid Robotics doesn't sell its robots. You rent them, for $2,100 per month. They build cobots to "help manufacturers solve labor shortages", according to their website. And they have an interactive cost calculator that directly compares the costs of their cobots to the costs of a human machine operator:

Interactive cost calculator from Rapid Robotics' website.
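To make the comparison concrete, here is a minimal Python sketch of the kind of calculation such a calculator performs. The $2,100 per month rental fee is from Rapid Robotics' website; the operator wage, hours, and overhead factor below are illustrative assumptions of mine, not their numbers:

```python
# Minimal sketch of a cobot-vs-operator cost comparison.
# The rental fee is from Rapid Robotics' website; wage, hours,
# and overhead factor are illustrative assumptions.

COBOT_RENT_PER_MONTH = 2_100  # USD, from Rapid Robotics' website

def operator_cost_per_month(hourly_wage: float, hours_per_month: float = 160,
                            overhead_factor: float = 1.3) -> float:
    """Fully loaded monthly cost of one human machine operator."""
    return hourly_wage * hours_per_month * overhead_factor

def monthly_savings(hourly_wage: float, shifts: int = 1) -> float:
    """Savings from covering `shifts` operator shifts with one rented cobot."""
    return shifts * operator_cost_per_month(hourly_wage) - COBOT_RENT_PER_MONTH

if __name__ == "__main__":
    # Example: an $18/hour operator, two shifts covered by one cobot
    print(f"Savings: ${monthly_savings(18.0, shifts=2):,.0f} per month")
```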

In 2021, Rapid Robotics raised a $36.7M Series B round from Kleiner Perkins, Tiger Global, NEA, Greycroft, Bee Partners, and 468 Capital.

MOV.AI: The ‘word processor of robotics’?

(I'll explain below why I think MOV.AI might be the 'word processor of robotics'.)

Lisbon-based MOV.AI makes software that makes it easier to build and operate robotic solutions (not just cobots but other kinds of robots as well). Here's the problem they solve: A robot is a system that requires computing units, sensors, and actuators to work together to perform some task. There is an open-source framework called ROS (Robot Operating System), which is widely used across industries and applications. But when you want to actually build, deploy, and operate robots, ROS only gets you so far. For example, if you exchange one sensor for another, you'll probably have to rewrite a lot of ROS code for the new sensor. With MOV.AI's Robotics Engine Platform, you don't have to do this anymore, because MOV.AI adds levels of abstraction on top of ROS, so you have to worry less about lower-level implementation details.

Schematic of MOV.AI's Robotics Engine Platform. Image from MOV.AI's website.
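To illustrate the general idea behind such abstraction layers, here is a small Python sketch. This is not MOV.AI's actual API; all class and function names are hypothetical. The point is that the task logic programs against an abstract sensor interface, so exchanging one sensor for another only means writing a small adapter:

```python
# Sketch of the general idea of abstracting over ROS-style sensor drivers:
# task logic depends on an abstract interface, so swapping one depth camera
# for another only means registering a new adapter.
# (Not MOV.AI's actual API; all names here are hypothetical.)

from abc import ABC, abstractmethod

class DepthSensor(ABC):
    """Abstract interface the task logic programs against."""
    @abstractmethod
    def read_depth_m(self) -> float: ...

class VendorACamera(DepthSensor):
    def read_depth_m(self) -> float:
        # Real code would read from this vendor's ROS topic/driver.
        return 0.42

class VendorBCamera(DepthSensor):
    def read_depth_m(self) -> float:
        # Different driver, different topic -- same abstract interface.
        return 0.40

def stop_if_obstacle(sensor: DepthSensor, threshold_m: float = 0.5) -> bool:
    """Task logic: unchanged no matter which camera is plugged in."""
    return sensor.read_depth_m() < threshold_m

print(stop_if_obstacle(VendorACamera()))  # True
print(stop_if_obstacle(VendorBCamera()))  # True
```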

Here is why I think it might be called the 'word processor of robotics': If you are old enough to remember writing papers with LaTeX, you know how much work it could be just to make a paragraph of text appear in a certain font, size, color, width, and so on. With word processors like Google Docs or Word, you simply mark the text, set a few parameters in a GUI, and you're done. I think MOV.AI is a bit like this, for robotics.

In 2020, MOV.AI raised $4M from SOMV, NFX, and Viola Ventures.

Micropsi Industries: Hand-eye coordination for cobots

Training a cobot without writing code? Just by showing it what it’s supposed to do, and giving it a bit of guidance? Berlin-based Micropsi Industries helps you do this. They make an AI-powered robot controller called MIRAI (which I assume stands for Micropsi Industries Robot AI). They have a video on their website that shows you what MIRAI can do:

Micropsi Industries’ MIRAI system in action.

According to their website, MIRAI currently works with robots from ABB and Universal Robots. And if you'd like to dig deeper, Micropsi's website has a description of what's included with MIRAI and how to set it up.
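Micropsi doesn't publish MIRAI's internals, but the general technique behind "training by showing" can be sketched as behavioral cloning: record (observation, action) pairs while a human guides the robot, then fit a policy that maps observations to actions. Here is a deliberately minimal Python sketch of that idea; a real system would use richer observations and a neural network policy:

```python
# Minimal behavioral-cloning sketch of "training by showing": record
# (observation, action) pairs from human demonstrations, then fit a policy.
# This illustrates the general technique, not how MIRAI works internally.

import numpy as np

rng = np.random.default_rng(0)

# Pretend demonstrations: 200 observations (e.g. image features) and the
# end-effector velocity commands the human guidance induced.
observations = rng.normal(size=(200, 16))   # 16-dim feature vectors
true_mapping = rng.normal(size=(16, 3))
actions = observations @ true_mapping       # 3-dim velocity commands

# Fit a linear policy by least squares (a real system would train a network).
policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

def act(obs: np.ndarray) -> np.ndarray:
    """Predict a velocity command for a new observation."""
    return obs @ policy

new_obs = rng.normal(size=16)
print(act(new_obs))  # predicted 3-dim command
```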

In February 2022, Micropsi Industries raised $30M Series B from Metaplanet, VSquared, Ahren Innovation Capital, Project A Ventures, and Merck’s M Ventures.

Multiply Labs: Cobots for making individualized drugs

Basically by definition, individualized drugs are something you don't make in high volumes. It's not "many instances of one type" but "few instances of many types". This sounds like a typical "cobot problem": Rather than building and deploying a robotics solution once and then letting it run, you need robots that can be retrained all the time.

Multiply Labs makes such robotic manufacturing systems for individualized drugs, and these systems include cobots. According to their website, co-founder Alice Melocchi had the idea for Multiply Labs when she toured the lab of her friend and later co-founder Fred Parietti: Alice was making drugs by hand, and Fred was working on robotic limbs.

In 2021, Multiply Labs raised $20M Series A from Casdin Capital, Lux Capital, Founders Fund, Fifty Years, and Garage Capital.

How to teach human interaction to a cobot

So, how can you teach a robot to interact with a human? How do human movement, motor control, and planning work? How can a robot infer what its human collaborator might be up to next? What's the R&D behind this?

Of course, I can't provide a full summary of this in just a few paragraphs. But I can give you some starting points for further exploration.

For example, in 2016, a group of researchers from MIT investigated what they call "predictive vision": you see something and, based on what you see, you estimate what is likely to happen next. They trained their system with TV shows:

Teaching machines to predict the future
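To make the problem setup concrete, here is a toy Python sketch: a model takes features of the current video frame and outputs a distribution over what happens next. The MIT models are far more sophisticated, and the action classes below are only illustrative examples:

```python
# Toy sketch of the "predictive vision" setup: from features of the current
# frame, predict a distribution over what happens next. Purely illustrative;
# the action classes and feature sizes are assumptions, not the MIT setup.

import torch
import torch.nn as nn

ACTIONS = ["hug", "handshake", "kiss", "high-five"]

model = nn.Sequential(          # frame features -> next-action logits
    nn.Linear(512, 128),
    nn.ReLU(),
    nn.Linear(128, len(ACTIONS)),
)

frame_features = torch.randn(1, 512)        # stand-in for a CNN embedding
probs = model(frame_features).softmax(dim=-1)
print(dict(zip(ACTIONS, probs.squeeze().tolist())))
```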

And here’s a survey paper from 2017:

A survey of robot learning from demonstrations for Human-Robot Collaboration

More recently, from 2022, is this paper that also investigates mechanisms that might help robots better anticipate their human co-workers’ actions:

Hierarchical Intention Tracking for Robust Human-Robot Collaboration in Industrial Assembly Tasks

'Hierarchical intention tracking' refers to the fact that actions are planned at multiple levels. For example, there might be an overall goal ("assemble this gearbox") that involves progressively lower-level task intentions ("tighten this screw in the gearbox so that it holds").
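The paper tracks these intention levels jointly and probabilistically. As a purely illustrative Python sketch of the hierarchy itself (not the paper's algorithm):

```python
# Illustrative sketch of the hierarchy the paper's title refers to: a
# high-level task intention decomposed into lower-level sub-intentions.
# The belief values and structure here are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class Intention:
    name: str
    belief: float                      # robot's current probability estimate
    children: list["Intention"] = field(default_factory=list)

assembly = Intention("assemble gearbox", belief=0.9, children=[
    Intention("insert shaft", belief=0.2),
    Intention("tighten screw", belief=0.7),   # most likely current sub-task
    Intention("attach cover", belief=0.1),
])

# The robot acts on the most probable low-level intention, while the
# high-level belief keeps the estimates consistent across levels.
current = max(assembly.children, key=lambda i: i.belief)
print(f"Likely next human action: {current.name}")
```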

And here is another recent paper, from April 2022, that investigates how robots that explain their future actions are perceived differently from robots that simply announce what they'll do next:

Explain yourself! Effects of Explanations in Human-Robot Interaction

This article was written by:

Florian Wolf

Florian is founder and CEO at Mergeflow, where he is responsible for company strategy and analytics development. Previously, Florian developed analytics software for risk management at institutional investors. He also worked as a Research Associate in Computer Science and Genetics at the University of Cambridge. Florian has a PhD in Cognitive Sciences from MIT.
