9. Digitization of Touch and Meta AI Partnership, with Youssef Benmokhtar
In this episode, Audrow Nash speaks to Youssef Benmokhtar, CEO of GelSight, a Boston-based company that makes high resolution tactile sensors for several industries. They talk about how GelSight's tactile sensors work, GelSight's new collaboration with Meta AI (formerly Facebook AI) to manufacture a low cost touch sensor called DIGIT, the digitization of touch, touch sensing in robotics, how GelSight is investing in community and open source software, and Youssef's professional path in several industries.
- Download the episode
- Youssef Benmokhtar’s LinkedIn
- GelSight’s Website
- GelSight’s LinkedIn
- DIGIT’s open source page
- PyTouch library
- 0:00:00 - Start
- 0:00:51 - Introducing Youssef and GelSight
- 0:07:18 - Background of GelSight technology
- 0:08:12 - What’s hard about it
- 0:11:56 - Miniaturization
- 0:14:40 - DIGIT sensor + Collaboration with Meta AI (formerly, Facebook AI)
- 0:21:57 - Bringing DIGIT from research to a product
- 0:28:16 - Having a camera in your finger
- 0:29:58 - DIGIT in robotics applications
- 0:43:04 - Meta’s plans with tactile sensing
- 0:49:12 - On the digitization of touch
- 0:55:07 - On the current state of haptics
- 0:58:06 - Haptics to verify products and parts
- 1:01:06 - Growing GelSight
- 1:06:03 - Open source software and datasets
- 1:14:09 - Slipping objects and heavy loads
- 1:22:44 - Youssef’s professional path
- 1:32:21 - Advice
- 1:34:44 - Useful habits
- 1:36:17 - Links and getting in touch
The transcript is for informational purposes and is not guaranteed to be correct.
(0:00:02) Audrow Nash
This is a conversation with Youssef Benmokhtar. Youssef is the CEO of GelSight, a Boston-based company that makes high resolution touch sensors. In this conversation, we talk about how GelSight's touch sensors work, their new collaboration with Meta AI, formerly Facebook AI, to make a new low cost sensor called DIGIT, the digitization of touch, touch sensing in robotics, and how GelSight is investing in community and open source software. And we talk about Youssef's professional path in several industries. This is the Sense Think Act Podcast. Thank you to our founding sponsor, Open Robotics. And now here's my conversation with Youssef Benmokhtar. Youssef, would you introduce yourself?
(0:00:55) Youssef Benmokhtar
Absolutely. Youssef Benmokhtar, CEO of GelSight and a high tech executive for the last 25 years, and excited about this interview and sharing my thoughts with the audience.
(0:01:10) Audrow Nash
Tell me about GelSight.
(0:01:12) Youssef Benmokhtar
So GelSight is an MIT spin-off. We have a very unique technology, an imaging-based digital tactile sensing technology, which basically allows us and our customers to get 3D information about anything the sensor is in contact with, regardless of the material properties of what it's touching. That's the essence and the core of our technology.
(0:01:41) Audrow Nash
Yeah, and so you have several products that all kind of work this way. Can you tell me at a high level how they work? Like, what are you looking at? How are you understanding the deformation of the material and inferring touch information? Absolutely. So we have a unique, proprietary elastomeric material. That basically means it's a material that has kind of elastic properties, where it can conform easily to surfaces it is in contact with, but still keeps some rigidity to it, and has optical properties that allow it to be integrated into an imaging system.
(0:02:24) Audrow Nash
I'm picturing something like a transparent silicone. Is it something very similar to that?
(0:02:29) Youssef Benmokhtar
Absolutely, you're right, it's something very similar. It's basically a polymer-like material that lets light go through it. So the essence of our technology is we have this material, we coat it with a reflective surface, we basically illuminate it with RGB light, and place a camera behind it. And now we have the ability to take scans with these cameras, using a technique called photometric 3D reconstruction. And that's basically the essence of GelSight technology.
(0:03:08) Audrow Nash
Okay, so let's see. I've looked at some of the images from this technology, and they're quite cool. So it looks like you have the gel, and it looks like you have light shining from different sides of the gel, so that it gives you different shadows when the gel is pushed into by an object. Is that correct? Or how does that work? How do you set up the lights, basically?
(0:03:29) Youssef Benmokhtar
Yes, I think the lighting is really more about getting the most uniform light possible at the different wavelengths we use; we use red, green, and blue. And making sure that the surface is uniformly illuminated with each one of these different LEDs. And then we have a proprietary algorithm that uses the images captured at each one of these wavelengths to reconstruct 3D from that.
(0:04:02) Audrow Nash
What's the significance of the different wavelengths? Why are they necessary? Well, that's the principle of photometric 3D.
(0:04:13) Audrow Nash
I guess, I don't know very much.
(0:04:15) Youssef Benmokhtar
So that's something that I'm not an expert on. I think that's something my co-founder could better explain, since he's kind of the inventor and the one who developed the algorithms. But basically, you know, having three images taken at different wavelengths, and knowing what those wavelengths are, being able to get those three different images, he's able to build an algorithm that builds 3D information from that.
(0:04:45) Audrow Nash
So I'm wondering, like, maybe this is wrong, but this is how I'm thinking of it. Perhaps having different colors lets you separate them in the image space, so you can get more dimensions, like you get different views. So you get different views from each of the lights, because each of them casts a shadow. And so if you have, say, red, green, and blue lights, now you can separate the image into red, green, and blue channels, and you can kind of tease those apart. And that might be more useful for analysis and inferring 3D structure. Is it kind of like that?
(0:05:22) Youssef Benmokhtar
Yes, it's kind of like that. Yeah, absolutely.
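The intuition above corresponds to classic photometric stereo: with three differently colored lights at known directions, each color channel of a single image gives one shading measurement per pixel, and the three measurements together pin down the surface normal. Here is a minimal synthetic sketch of the idea (the light directions are illustrative, and this is the textbook Lambertian method, not GelSight's proprietary algorithm):

```python
import numpy as np

# Known unit light directions, one per LED color (illustrative values,
# not GelSight's actual geometry): three lights tilted toward the surface.
L = np.array([
    [ 0.50,  0.000, 0.866],   # "red" LED
    [-0.25,  0.433, 0.866],   # "green" LED
    [-0.25, -0.433, 0.866],   # "blue" LED
])

def normals_from_channels(i_r, i_g, i_b):
    """Textbook Lambertian photometric stereo.

    Per pixel, the intensity under light k is I_k = L[k] . n, so stacking
    the three channels gives I = L @ n, and the normal is n = inv(L) @ I.
    """
    I = np.stack([i_r, i_g, i_b], axis=-1)        # (H, W, 3)
    n = I @ np.linalg.inv(L).T                    # solve L @ n = I per pixel
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-9, None)          # unit surface normals

# Synthetic check: a flat, untouched gel faces the camera (normal = +z),
# so each channel just sees that light's z-component as its shading.
flat = np.ones((4, 4))
i_r, i_g, i_b = (flat * L[k, 2] for k in range(3))
normals = normals_from_channels(i_r, i_g, i_b)    # every pixel ~ (0, 0, 1)
```

From the recovered normal field, the height map (the 3D scan) is then obtained by integrating the normals across the surface.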
(0:05:25) Audrow Nash
Cool. And you mentioned that the sensor is reflective. Is that just on the side that's touching the object?
(0:05:36) Youssef Benmokhtar
Yes, that's correct. Because basically, you need that; that reflection is what allows you to build that 3D scan, basically.
(0:05:46) Audrow Nash
Is it reflective, or just opaque? Oh, it actually is reflective. Why do you want it reflective?
(0:05:54) Youssef Benmokhtar
Because you want to basically, you know, keep the light within the system, again, in a very uniform way. Any kind of light dissipation outside will make the computation of the 3D very difficult. You know, I think opaque is actually another way to look at it; you need to think about it as very opaque, like light is really not going through.
(0:06:20) Audrow Nash
Oh, so you mean you're keeping all of the light within the system. It probably helps that you don't have to normalize it or something, because you know how much light is in it, and thus you can have very accurate calibration. Is that correct?
(0:06:33) Youssef Benmokhtar
Correct, and you avoid having to denoise and other things like that.
(0:06:39) Audrow Nash
Okay. So, what I'm imagining is that we have the lights behind a piece of gel, and the front of the gel, the part that will touch things, has some sort of reflective or opaque coating that doesn't let light through. And you push into the top of the gel, the part with the reflective coating, and it deforms. And then you see kind of like shadows or something within the gel that you use to infer how the gel is being deformed in 3D. Correct?
(0:07:16) Youssef Benmokhtar
Yep, you got it.
(0:07:19) Audrow Nash
Now, let's see. So how long has this method been around? So the company was founded in 2011. I believe the first papers describing the concept were a few years before that. Yeah, so I want to say maybe 2009 is the first time that this kind of idea was described. And then our co-founders, Ted Adelson, who is a professor at MIT, and his student, Kimo Johnson, wrote those papers, and got actually a lot of interest, not only from academia, but also from industry. And that gave them the idea to say, well, why don't we just found this company and try to do something with that, just respond to what they felt was a strong demand signal.
(0:08:12) Audrow Nash
Hmm. I'm just wondering, so I have a little bit of experience in computer vision. And thinking about, okay, we want robots to touch things, using some sort of gel and looking at how it deforms seems to me like kind of a solution a lot of people would come to, in terms of how to understand touch. And I think that a lot of companies have tried this kind of approach. And I'm sure there's something hard about it that has stopped a lot of them. Given that this technology has taken so long, or is just coming about now, what's been so hard about it?
(0:09:05) Youssef Benmokhtar
I think what's been hard, you know, especially in the robotics field, is that sensors were bulky, expensive, hard to calibrate, so it was hard to get repeatable performance over time. And, you know, really just low resolution. And when I say low resolution, I mean it's really low resolution. So what we're hoping at GelSight is that we are solving all of those issues, whether it's the DIGIT, which is now something that is completely mountable on a robotic hand, right? That's what it was designed for, with a price point of $300. That's what we announced with Facebook. It's available for $300
(0:10:01) Audrow Nash
Or Meta, now?
(0:10:04) Youssef Benmokhtar
Yes, absolutely, absolutely. Right. And also, because we're imaging-based, the resolution of our sensor is actually directly linked to the resolution of the camera, the image sensor we're using. So, you know, in the case of the DIGIT, I believe it's a VGA resolution sensor. So you're getting, you know,
(0:10:27) Audrow Nash
So VGA, that's like one of the display standards. And so is that 640 by 480, or 720 by 480? Now, why would I know that? But
(0:10:36) Youssef Benmokhtar
okay, yeah, yeah. So, you know, suddenly you have a million-plus pixels to play with. All the other technologies out there will give you resolutions in the tens or hundreds of pixels for the same surface. And now, suddenly, you have this amazingly high resolution available to you at $300, in something that you can mount. So, you know, now, when you start to think about applications, I think you have something that is actually even better than the resolution of a human finger. So it puts things in perspective. I think AI technology helps to bridge that gap that existed before. That's why we believe it's exciting. And GelSight, you know, we also had this reputation of being extremely expensive and big; our standard products run $20,000 or $30,000 apiece. But hey, we are a company that's able to do things for different applications, and this $300 sensor shows that we can do things at low cost.
(0:11:48) Audrow Nash
Yes, and I want to talk about DIGIT quite a bit. But I want to make sure I understand everything pretty well before going into that. So one of the big things that was really hard with getting this into robotics previously was probably that the sensors were extremely bulky, as you were saying. So how have you made it smaller? How have you reduced the bulk?
(0:12:17) Youssef Benmokhtar
It's a great question. You know, what really attracted me to GelSight is that at the core of the technology, which I described, and I think you have a good understanding of now, the three pieces, you should think about them as a modular approach, a design approach. So we can really be
(0:12:38) Audrow Nash
Modular, in that they can each be improved independently?
(0:12:42) Youssef Benmokhtar
Correct, and then put back together as a system. So for example, to your question about miniaturization, we can design a gel material, a polymer material, that is specific to an application like robotics, right? And actually make them in any x, y, z dimension that you'd like. You know, it's almost an infinite end-to-end design space. And onto that we can put LED illumination in a fairly easy way; these are not big types of electronics. And then the challenge becomes more about the image sensor that you use, and most importantly, the lens and the field of view of the lens, and how close you can get it to this elastomeric material that we have. But you can play with that, depending on what the application is.
(0:13:38) Audrow Nash
Say that word one more time, really slowly. Elasto-what is it? Elastomeric, sorry. Elastomeric. Yeah, okay.
(0:13:46) Youssef Benmokhtar
Yeah, I'll try to avoid using it from now on. But yeah, I'm just thinking, it's a
(0:13:51) Audrow Nash
A clear, deformable polymer, yeah,
(0:13:54) Youssef Benmokhtar
It's a gel. You know, internally, honestly, we call it gel. That's the easy way that we think about it. But then, to your question: what we did is that we basically found what is the right contact surface that would be attractive to robotics applications, and then how small, how thin, can you make that sensor by playing around with the camera module, basically, and the field of view of that lens. And that's how you get something like DIGIT in that kind of size. There are ways, actually, that we have developed at GelSight to make it even thinner. So DIGIT is not the limit of how small we can make things.
(0:14:40) Audrow Nash
Yeah. Okay, so we keep mentioning DIGIT. What is DIGIT? Could you tell me a bit about it?
(0:14:47) Youssef Benmokhtar
So DIGIT is basically a tactile sensor, a digital tactile sensor, imaging-based. It's basically GelSight technology in a Meta design. Facebook AI, I should say, sorry, had identified the need to have this small sensor, easy to integrate into robotic hands and fingers, to be able to develop AI algorithms for a bunch of different applications: object manipulation, pose estimation, and things like that. And basically, we are their commercial partner to bring that sensor to the world. We share the same vision; we believe that touch and feel is the next sense to be digitized. And to do that, you actually just need to make it easy to use and affordable and small. So that's what we're doing together.
(0:15:53) Audrow Nash
Yeah, so DIGIT is something that Meta AI, or formerly Facebook AI Research, the organization or part of Facebook that does research and development, created and open sourced. Correct? Correct. So it's freely available online. And then you guys now are taking this and building it at scale, so that you can buy it, and it's out of the research lab and into the world. So it was a research paper with maybe like a demo and this kind of thing. How long ago did Facebook or Meta release the DIGIT design?
(0:16:50) Youssef Benmokhtar
I believe it's over a year ago? I don't know the exact date. Wow. But over a year ago,
(0:16:58) Audrow Nash
Yeah. And then how did you begin a collaboration to work on this and produce it at scale? Like, how did it occur that they produced a paper and now you guys are actually manufacturing lots of these? Well, I think, you know, that's also a great question to ask Meta, but my understanding is that some of the researchers at Meta had experience with GelSight during their academic research. This is where they learned about tactile sensing, they learned about the GelSight technology, and, you know, kind of used that knowledge and gained expertise in academia, and wanted to actually focus on AI, I think, more than building hardware. But such hardware did not exist, and hence the need to design it.
(0:17:58) Audrow Nash
And you mean it didn't exist because you guys have your product line, but it didn't exist in a way that was affordable to, like, put ten of them onto a robot's hands, one at each finger's end? Correct.
(0:18:12) Youssef Benmokhtar
And in the right form factor as well. You know, what we had was way too big, correct?
(0:18:19) Audrow Nash
You know, so did they come to you, and they were like, hey, we think it'd be really cool if you guys could work on this? Or, I mean, can I know these details? Can you disclose them? I'm curious.
(0:18:31) Youssef Benmokhtar
You know, the way I would answer this question is, I would say we have a shared vision. And it was very easy to align on this partnership because of that shared vision. Yeah, and I think, you know, when two parties basically have the same ultimate goal, which is, let's bring tactile sensing to the masses and make it attractive for people to want to research with it, then the rest actually becomes really easy to do in a straightforward way. And I believe, and I hope, that this partnership is just at its beginning, and more will come from it.
(0:19:16) Audrow Nash
Yeah. So what is the nature of the partnership, exactly?
(0:19:20) Youssef Benmokhtar
So at the moment, it is really about GelSight commercializing the DIGIT design, and that's already available for people to purchase through our online portal. And really, that's the extent of what we have agreed to do at this point. You know, as I said, more is being talked about, I think, now that we've reached this one.
(0:19:47) Audrow Nash
Nice. Can you hint at what additional steps will occur, or what they might look like?
(0:19:54) Youssef Benmokhtar
Well, I think it's going to touch upon a few different things. One of them is, we believe at GelSight that we can have a complementary offering to the DIGIT sensor that will be attractive to the robotics crowd, but also to the hobbyist market. And this is basically aligned with our vision, again, that we want tactile sensing to be available for any engineer, any technician, any hobbyist to use in their creative moments, for anything. And I think, to do that, you need to actually think beyond DIGIT and just offer a more universal sensor. So, you know, something that could be mounted to a robotic hand if you want to, but you could also use it as a digital microscope in a high school class, something like that. Right? So
(0:20:57) Audrow Nash
Is that the kind of resolution you're getting with this approach? Or, yeah,
(0:21:02) Youssef Benmokhtar
So what we're planning to do is actually have a much higher resolution sensor than DIGIT, but a similar or even smaller form factor. And our partnership with Meta would be that we will probably want to leverage their open source community, and make our sensor compatible with their Python community libraries and everything else like that, so that we have a kickstart: when people want to use our sensor, they already have libraries accessible to them right away. So this is where I see the next step being. This is not something that is done today; maybe I'm going to be scolded by my Meta friends that I'm already talking about something like this. But regardless of that, this is the direction we're taking. And this will happen, because it's something that we at GelSight believe in.
(0:21:58) Audrow Nash
And I want to talk a lot more about the different libraries or ways of standardizing this. And it's quite cool if you make a sort of universal sensor, or maybe like a universal way of visualizing touch data and relating it. But back to actually creating it: so DIGIT was released as a paper, and you guys agreed to start manufacturing them. I assume that since it was coming directly from the research community, it wasn't built in a way that's exactly the easiest or best for manufacturing. Can you tell me a bit about those challenges? Because I assume you have to take basically what exists and then make it robust and make it reliable and, whatever, grow it up for production.
(0:22:49) Youssef Benmokhtar
Correct, yeah, absolutely correct. You know, it's always oversimplified how difficult it is to take something that looks like you can make ten of them in a lab, and now suddenly you need to be ready to make hundreds and thousands. That's exactly why I think GelSight was the right partner for Meta, because we already know how to make products like these. We have an existing business; we know how difficult it is to do in volume and in a repeatable way. So yes, we did have to learn on the spot, really quickly, about the Meta design, and then how to improve some of the manufacturability issues that we encountered as we were building them. But that was something that was expected. This is also where GelSight's expertise is: we had some quick cycles of learning in the early phases, when you build the first ones, to make sure that we have a pretty high yield in building this gel material for DIGIT.
(0:23:59) Audrow Nash
A pretty high yield in building this gel? Well, like, you have to manufacture gel, and then you can cut it up, and you want to, I don't know, get a lot of it at once or what? And some of it has quality issues, or...
(0:24:11) Youssef Benmokhtar
Well, you know, like any part of a product that you make, it needs to pass some specifications. And this gel material, which is kind of the secret sauce that we have, and we build it for different applications, you have to make sure that it has the right optical density that you want, that it's cut in the right shape, that the coatings are opaque, like we talked about before, to make sure that the light uniformity is going to be good, that you don't have an unacceptable level of cosmetic defects that will affect image quality, those kinds of things. So this is what we're experts in, and it took us a few cycles of learning to get right, so that we can start making hundreds of these or thousands of these.
(0:25:07) Audrow Nash
Okay, so that's interesting. I would expect that that's the case with the challenges of making a lot of them. Were there any particular things that came up that you had to change from the DIGIT design?
(0:25:23) Youssef Benmokhtar
You know, candidly, the biggest problems we had were component supply. In these days, I'm sure you've been hearing about shortages. And the challenge, I think, for both parties actually was to find replacements for the DIGIT's original component choices, just because of supply issues. So I would say the biggest challenge we had was honestly around that: finding alternative chips and things like that, qualifying them, and getting our hands on them, honestly, has been the biggest challenge.
(0:26:03) Audrow Nash
What an interesting thing. So, like, it called for a certain part, and you had to swap in something else and validate that it did the exact same thing?
(0:26:15) Youssef Benmokhtar
Correct, correct. And then, you know, develop the firmware, because the original firmware was meant for another chip, so you have to make some modifications and that kind of stuff. That honestly was the biggest challenge. The rest was okay.
(0:26:28) Audrow Nash
Okay, so, and I don't think we've said how large DIGIT is. What's its case like, basically?
(0:26:40) Youssef Benmokhtar
Oh, I think it's a little over an inch deep. I think it's 28 millimeters, if I remember correctly. Just over an inch. And I would need to check the specs on that, but I want to say it's maybe half an inch, or a little bit over, maybe three quarters of an inch, in width. And then it's probably another inch in height, or slightly below that, probably. So it's
(0:27:13) Audrow Nash
kind of like an extra-thick human finger. Correct. In terms of size. And it has a little pad on it that has the gel.
(0:27:24) Youssef Benmokhtar
The gel, yeah, the
(0:27:26) Audrow Nash
gel material. And the gel material, it's kind of, it's like framed by the case material. Correct. So you put something on the gel material, and it will measure the deformation there. The edges are on the same side as the gel?
(0:27:44) Youssef Benmokhtar
Correct, yeah, there is a bezel. I think the bezel is like one and a half or two millimeters, you know, fairly thin. Yeah. And the gel itself is, you know, I forget what it is, like three to four millimeters high, probably. It has kind of a slightly domed shape, just like your finger would, basically. Yeah.
(0:28:10) Audrow Nash
Yeah, like the tip, like the last digit of your finger, kind of thing. Correct.
(0:28:16) Youssef Benmokhtar
Yeah. You know, here's another thing that maybe would be interesting to you. The way I look at these tactile sensors like DIGIT is almost thinking about having a camera inside your finger. So imagine if you had a camera inside your finger, and it would actually be able to see anything that deforms your skin as you're touching things. That's pretty much what DIGIT is, you know?
(0:28:42) Audrow Nash
Uh huh. So, just so I can picture the internals of it a little bit: you have the gel surface? Yep. And then we have the camera. So if the sensor is facing down and the gel is at the bottom, the camera is above, looking at the gel, correct? Does the camera touch the gel? Or is it just like a sensor pad that touches the gel, or how does it work?
(0:29:08) Youssef Benmokhtar
No, the camera is offset from the gel. The gel is mounted, you know, think about it as mounted on, like, a plastic window, if you'd like. Okay. And the camera is behind that at a small distance. You need to have a minimal distance between the camera and the gel.
(0:29:25) Audrow Nash
Yeah. I was wondering if you could, like, use the gel as a lens or something like this, but I imagine, because it's not like a pinhole camera, you would get a lot of light going and bouncing all over. So it'd be like a very blurry image, as opposed to a crisp image, kind of thing. So you need that separation, as opposed to not having a lens or having a sensor pad directly on the gel.
(0:29:50) Youssef Benmokhtar
You need to be able to image that area right so you need a little bit of distance to be able to see that surface.
(0:29:58) Audrow Nash
Now, for robotics, I'm just wondering: how do you connect to it? How does it work as a sensor? What exists for libraries? All these things.
(0:30:09) Youssef Benmokhtar
Yeah, so, you know, connectivity is done through a USB-C type of connection. And then, you know, you basically have
(0:30:19) Audrow Nash
Is there high data flow or something through USB-C, or was there any reason to choose it other than it's a nice port?
(0:30:26) Youssef Benmokhtar
It's the most common port. And also, you know, robotics actually does care about real-time information, and so you can run it easily at 30 frames a second. It even has the capability, I think, to run a little bit faster than that. But you want to have real time, and USB is basically a good way to do that. And then you basically program the sensor using all the firmware that Meta has made available with PyTouch. So that's how you connect it to your computer, have access to all the Python libraries, and you're ready to go.
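As a rough sketch of what that looks like in code: since DIGIT streams over USB-C much like a webcam, reading it reduces to polling a frame source at the desired rate. The `Digit`/`get_frame` names in the comment are assumptions about Meta's digit-interface package (check its documentation); the loop itself is written against a generic callable so it runs here with a stand-in source:

```python
import time
import numpy as np

def capture(get_frame, seconds=0.2, fps=30):
    """Poll a frame source at roughly `fps` for `seconds`.

    `get_frame` is any zero-argument callable returning an image array.
    With real hardware it might wrap Meta's digit-interface package,
    roughly (names assumed, verify against the package docs):
        d = Digit("D12345"); d.connect(); frame = d.get_frame()
    """
    frames = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        frames.append(get_frame())
        time.sleep(1.0 / fps)   # crude pacing; a real loop would correct for drift
    return frames

# Stand-in source so the sketch runs with no sensor attached:
# a static VGA-sized RGB frame.
fake_digit = lambda: np.zeros((480, 640, 3), dtype=np.uint8)
frames = capture(fake_digit)
```

In practice the 30 fps figure Youssef mentions is the budget this loop has to stay under: grab, process, and act within about 33 ms per frame.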
(0:31:11) Audrow Nash
So I don't know too much about the available libraries. What do they let you do? I assume you can get 3D deformation, but, I mean, for computer vision there's OpenCV, the Open Computer Vision library, I believe started by Intel. It's a big open source library that's been around for a long time, and it lets you do a lot of basic computer vision things. So if you're working with a camera, almost right out of the box, after you've figured out the install, you can detect faces or do object tracking or something like this. Are there any such things that let you identify what you're touching, or identify properties of what you're touching, yet, in these libraries?
(0:32:01) Youssef Benmokhtar
You know, I think the Meta team is kind of adding things on a regular basis, but the basic things you get are, for example, the ability to just see an image, so a 2D interpretation of the image or a 3D representation of the image, and to see where the sensor has been touched. So if you look at the surface area of the sensor, which area has been touched, which is interesting information; in robotic applications that's typically called pose estimation. That's the kind of very basic information they give. The purpose for Meta, as far as I understand it, is to then turn this into creating datasets of deformations of that gel based on the type of object that the robotic hand or gripper is manipulating. So, you know, you kind of train models to say, let me grab this, for example. Say this is a cable, right? And you basically have your robot holding it in all kinds of different orientations, with different forces and things like that, and all that is captured in their datasets. And it's used to basically create AI to say, well, this is how I recognize that this is a cable touching the sensor. And then can you do things like, for example, reorienting that cable in space, because you want to move it from A to B, those kinds of things. So, yeah, you know,
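The "which area has been touched" primitive can be approximated very simply by differencing a live frame against a reference frame captured with nothing touching the gel. This is a hand-rolled illustration on synthetic frames, not PyTouch's actual implementation:

```python
import numpy as np

def contact_mask(frame, reference, threshold=10):
    """Flag pixels where the live image departs from the untouched
    reference: a crude stand-in for a learned touch detector."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff.max(axis=-1) > threshold     # any channel changed enough

def contact_centroid(mask):
    """Mean (row, col) of contact pixels, or None if nothing touches."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic demo: an untouched gray frame, then a bright press near (20, 30).
reference = np.full((64, 64, 3), 128, dtype=np.uint8)
frame = reference.copy()
frame[16:25, 26:35] = 200                    # simulated indentation highlight
mask = contact_mask(frame, reference)
centroid = contact_centroid(mask)            # (20.0, 30.0)
```

The learned pipelines Youssef describes replace this thresholding with models trained on datasets of gel deformations, but the output shape is similar: where contact is, plus what is being touched.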
(0:33:46) Audrow Nash
To me, it sounds similar to image segmentation or something like this. So with ImageNet, or some sort of thing that has a bunch of labeled images, we have cameras, the cameras take pictures, and then we say, okay, that's a cat, that's a tree, that's whatever. And you do that a lot of times. And then maybe one approach is, you can have a more complex image, and you can identify the different parts that are in it. And I think now they do that typically with, like, bounding boxes that they slide over the image or something, to try to figure out where in the image might be a cat, but they're getting more and more sophisticated. And I'm definitely not aware of the cutting edge of this research. But it seems kind of like that, where the intention is to start with very simple identifications of things, and then perhaps eventually a robot can grab a chunk of stuff from a messy desk and identify that it might have a few cables and whatever, so it could look for things, or...
(0:34:48) Youssef Benmokhtar
I think you're right. I think the holy grail is actually to be able to provide that kind of intelligence to robots, the same way you mentioned with image recognition, for cats and dogs and other things, right? It's kind of the same idea: if I want to be able to do complex object manipulation in a robotic application, or to do kitting, for example. To do kitting, you have a box with a whole bunch of different objects; you don't really know what those objects are, but you want to take a certain class of objects and put them in another box, this time classified by type of object. This is something that's actually very difficult to do. The word is "kitting." Kitting, yes, like a kit.
(0:35:40) Audrow Nash
No kidding. So, like, you're assembling kits? Correct? Okay, I'd not heard that term before. So if you're grouping objects by type or something, for example... yes, go ahead.
(0:35:53) Youssef Benmokhtar
You know, the other thing could be assembling things that today are really only done by humans, because humans have the kind of dexterity you need for those complex assemblies. Handling cables, or handling types of fibers and things like that, is just really hard for a robot to do; you need that kind of sensitivity, that feel, that humans have. So providing this kind of perception capability to robots may allow them to start doing some of these more complicated tasks.
(0:36:32) Audrow Nash
Gotcha. And I imagine that it's not terribly interesting just to say, okay, now we can grab this and identify it, as much as it is that you can use it in a sensor-fusion kind of context, where you say: I think that I'm grabbing this, and I grabbed something; does it agree with what I expect to feel, given this? I think that would lead to more interesting applications of this kind of thing.
(0:36:58) Youssef Benmokhtar
You're absolutely right. You know, one thing that's really interesting is that machine vision and computer vision are extensively used in robotics today. But one thing that happens, especially with a robotic hand, is that once the hand grabs something, it can't see anything; it's occluded. It has no idea what it is, right? So I think complementing vision with tactile information is something that's going to be extremely useful.
(0:37:27) Audrow Nash
Yeah. Now, one thing that I'm wondering as a roboticist: say I want to screw in a screw with this. So I want to pick it up with my fingers, and I want to screw it in. Maybe I can pick it up and identify it, but I'm imagining the gel is quite soft for this, and that deformation might make it hard to control whatever you're picking up. Would you say that's correct? And are there strategies for using these sensors for tasks: do you, like, identify the object and then regrasp it a little lower or higher, so that you can identify what it is and then pick it up with rigid grippers, where you kind of know what you have? Or is it more complex in how you grip it?
(0:38:20) Youssef Benmokhtar
So, first of all, the gel that we use in robotics applications today is actually not that soft, at least not to the human touch. It has this kind of elastic feel to it, but it's still pretty firm. So if you're grabbing a screw, for example, as you mentioned, you would actually not really need to pass it on to another robotic arm to do something else; you'll actually have a very good understanding of what the screw is, and how hard you grasp it will actually give you more information about its shape. We've actually done some experiments that you might find interesting, using our tactile sensor for a relative measurement of hardness. It's similar to human skin, right, a human finger: if I'm pressing on something that is really soft, I don't get a lot of feedback on my skin; it's actually very, very light. But if it's a really hard material, my skin gets indented really fast. Well, the same thing happens with our sensor. So when you're holding an object and actually pressing it against another, it also gives you that really interesting information about the potential hardness or softness of the object it's touching. So you really look...
(0:39:59) Audrow Nash
...at the deformation of both objects, the gel and the object, and kind of infer the hardness of the object?
(0:40:08) Youssef Benmokhtar
The gel by itself would tell you how hard the object is that whatever you're holding is in contact with. Just imagine: we have a video, I think, in our libraries, of a gripper holding a fork, and then we have hardness samples. As you push the fork into the hardness sample, you see how the tactile sensor deforms depending on the hardness of that sample, and the harder it is, the more it deforms, right? So this is actually how you get a relative hardness measurement.
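The comparison Youssef describes (same probe, same force, see how deeply the gel indents) could be sketched like this. It is a toy illustration with made-up names and depth maps, not GelSight's actual method:

```python
import numpy as np

def relative_hardness(depth_a, depth_b):
    """Compare two samples probed with the same held object and force.

    depth_a, depth_b: gel indentation depth maps (e.g. in mm) captured
    while the held object is pressed against sample A and sample B.
    With the applied force held constant, a harder sample pushes the
    object deeper into the gel, so a larger peak indentation implies a
    harder sample. Returns a positive number when A reads harder than B.
    """
    return float(np.max(depth_a) - np.max(depth_b))

# Toy depth maps: one deep poke versus one shallow poke.
press_on_hard = np.zeros((32, 32)); press_on_hard[16, 16] = 1.1
press_on_soft = np.zeros((32, 32)); press_on_soft[16, 16] = 0.2
delta = relative_hardness(press_on_hard, press_on_soft)  # positive
```

Only the relative ordering is meaningful here; mapping this onto an absolute scale (Shore hardness, say) would require calibrating against reference samples.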
(0:40:45) Audrow Nash
Does this require that you have 3D knowledge of whatever object you're using? Or at least that you know, say, the width of it, or how far apart the two sensors are, or how much of it should be pushed into the sensor?
(0:41:00) Youssef Benmokhtar
Um, I think 3d helps, but I don't think it's even necessary.
(0:41:04) Audrow Nash
Oh, I think I intuitively see how it works.
(0:41:08) Youssef Benmokhtar
Well, because all I really care about is, for example, the depth of the indentation, how deep you get into the gel deformation, but not necessarily the exact shape of the object. But if you have the shape, then you can also say, oh, now I'm starting to recognize that that's the tip of a fork, for example, because you can add that to object recognition.
(0:41:33) Audrow Nash
So you'd think, if we're picking up a screw or something, and say you have two DIGITs, or whatever comes next after DIGIT with higher resolution, and you have it between the two sensor pads, between the gel on both of the sensors: now you can kind of see how it's pressed into both of the sensors, and you can infer in 3D how the screw is oriented with respect to the sensor. And from there you can figure out how it's oriented with respect to the rest of the robot, because you know how the fingers are positioned with respect to the rest of the robot, this kind of thing?
(0:42:10) Youssef Benmokhtar
Absolutely, that is possible today. We're actually doing that at GelSight: we build, basically, a 3D point cloud model of an object that we want our grippers to manipulate, and based on just the contact area, we're able to infer the orientation in 3D of that 3D point cloud, which is a really cool thing to do. Yeah.
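Inferring an object's 3D orientation from a contact patch against a known point cloud model is essentially a point-set registration problem. A minimal sketch using the classic Kabsch algorithm, under the simplifying (and unrealistic) assumption that point correspondences are already known; this is my own illustration, not GelSight's implementation:

```python
import numpy as np

def estimate_pose(model_pts, contact_pts):
    """Kabsch: the rotation and translation aligning model points to
    observed contact points, assuming point i of the model corresponds
    to point i of the contact patch. Both inputs are (N, 3) arrays.
    """
    mc = model_pts.mean(axis=0)
    cc = contact_pts.mean(axis=0)
    H = (model_pts - mc).T @ (contact_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])      # guard against reflections
    R = Vt.T @ D @ U.T              # rotation: model frame -> contact frame
    t = cc - R @ mc
    return R, t

# Toy check: rotate a small point set 90 degrees about z, then recover it.
rng = np.random.default_rng(0)
model = rng.normal(size=(10, 3))
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
observed = model @ Rz.T + np.array([0.5, -0.2, 0.1])
R_est, t_est = estimate_pose(model, observed)
```

With unknown correspondences you would wrap this in ICP or a learned matcher, but the core "find the rotation that best aligns model to contact" step looks like this.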
(0:42:43) Audrow Nash
So when you say point cloud, you mean a mesh kind of thing? So you're using your sensor to get the 3D information, and it's creating, like, a mesh representation of the object that you see, like in 3D graphics? Or not; I was thinking LiDAR initially, where you get all the points, and then you can...
(0:43:01) Youssef Benmokhtar
Yeah, I think a mesh is a good explanation for it.
(0:43:06) Audrow Nash
Gotcha. Just making sure I understand.
(0:43:08) Youssef Benmokhtar
No problem. Okay.
(0:43:11) Audrow Nash
Then, so, you know, Meta, or formerly Facebook: I know they had the one robot that would turn and look at you, and it was used for video calls and things like this, like an Echo with a neck, an Amazon Echo with a neck. Do you know what their interest is in tactile sensing and fine manipulation? Are they going to get more into robotics, or any ideas here?
(0:43:41) Youssef Benmokhtar
That would be a bit of speculation on my part. Well, it definitely started as research. But I think the idea of providing this intelligent perception tool to robots is what is compelling to Meta. As you imagine robots being able to do more and more tasks, it became obvious to them that they needed additional sensing capability, and that vision and sound were just not enough for robots to be able to do the kind of complex tasks that a human does. And in their very grand vision, I believe that robotics has a place in their metaverse story, in the sense that you have to imagine a...
(0:44:38) Audrow Nash
Metaverse story... you mean the kind of 3D world they're creating?
(0:44:41) Youssef Benmokhtar
Correct. Look, I have my own simple definition of the metaverse: it's basically the integration of the digital and the physical space into one. And to do that, you want the ability to replicate or duplicate any task that happens in the physical world in another world. And robotics is part of that other world; although it is a physical object, or equipment, you want it to be able to replicate what humans do, to provide services just like a human would. So if you imagine, for example, doing remote collaboration through a robot, or a robot coming to your house to do house cleaning, or to be the one that actually brings you water if you're incapacitated in bed because you're sick or something: well, if you don't have those capabilities, I don't think you're going to be able to perform those kinds of tasks like a human would. So in that perspective, I think this kind of research is probably very attractive to them in the long run.
(0:45:58) Audrow Nash
Is there work on kind of the other side of this? So right now your sensor allows you to get really accurate 3D deformation information: you can push an object into it, you can see the surface. I like the videos; the pictures are amazing, like where you see a fingerprint, or the dollar bill. I think I saw that somewhere, where you put it on a $100 bill and see the print throughout; that's crazy. But is there work on the other side of this, where, okay, we have this detailed 3D information, can we display it elsewhere? This kind of thing. So if I push into it, say my finger drags across a larger one of these sensors, can that be replayed in a 3D space some other way?
(0:46:52) Youssef Benmokhtar
So I think you're...
(0:46:55) Audrow Nash
Is it outside of what your company does with the gel?
(0:46:57) Youssef Benmokhtar
Not completely. Because I think what you just described, and tell me if I understood your question correctly: I look at tactile sensing as being the input to another output, which is haptics, right? So what you just described to me is, okay, if I know how it feels to touch something with my sensor, I can give you a digital signature of that texture, of that feeling. So what do I do with that? One of the things you can do with that is it becomes the input you need for true-to-life haptics experiences, right? Because eventually, one day, haptic technologies will be just as high resolution as our sensor technology, and they're going to need to know: how does woodgrain feel? How does cotton feel? How does silk feel? How does glass feel? And I don't see how you get that information without a sensor like ours, which gives you millions of pixels of information to derive a digital signature of any texture out there. So when you think about the need, one day, to have a digital twin of all the surfaces in the world, all the feelings, the textures, of the materials in the world, you're going to need to characterize them through a digital tactile sensor. And that's going to be very useful to people working in haptics, in my opinion.
(0:48:32) Audrow Nash
The combination of those... so if you go from good texture sensing to haptics, that, to me, with Meta, makes a lot of sense. It would be kind of like Ready Player One or something like that, where you have sensations and stuff.
(0:48:53) Youssef Benmokhtar
Yeah. We're kind of the microphone to the speaker, if I use the sound analogy, okay? In our world, we're the input to haptics, and I think that's just a simple way to look at it. Or we're the camera to the display. That's how I look at our company.
(0:49:13) Audrow Nash
Yep. And so now, you mentioned at the beginning the digitization of touch. Can you tell me a bit about what you believe the implications of this are, how important it will be, and how technology will go that way?
(0:49:28) Youssef Benmokhtar
Well, we believe, honestly, that the next sense to be digitized is touch. Vision was first, and we know what happened once CMOS sensors became broadly available. Not only did it help to...
(0:49:44) Audrow Nash
That's what they're made out of; what is it, complementary metal oxide something? I don't remember.
(0:49:52) Youssef Benmokhtar
I don't remember either, I think.
(0:49:53) Audrow Nash
So, metal oxides. That's what those are made...
(0:49:56) Youssef Benmokhtar
...of, yeah. But those devices basically translated photonic information into an electronic signal. And today it's used...
(0:50:09) Audrow Nash
...to get digital images, as opposed to, like, the old, I don't know, film roll?
(0:50:15) Youssef Benmokhtar
Correct, the really original picture-taking technologies, with the silver oxide things that Kodak was famous for for a while. So we digitized that, and then we could basically make copies. First of all, you're able to miniaturize it, make it widely and broadly available so everybody can take pictures, videos, and so on. But most importantly, it actually allowed computer vision to take off, because now you have access to billions and billions of images, to derive information from using machine learning and deep learning techniques. And then audio went through the same revolution. Without digital audio, there would be no speech recognition: no Alexa, no Google Now, no Siri, no music streaming, no Spotify or anything like that, no Apple Music.
(0:51:13) Audrow Nash
We'd have a record player, and that'd be it.
(0:51:15) Youssef Benmokhtar
Yeah, which I liked a lot, to be honest. It sounds really good. But none of these other things would have happened, because now we have trillions of hours of sound and audio that can be used for algorithms to train on and respond to.
(0:51:34) Audrow Nash
I'm surprised that video, or images, came before sound. To me it seems like sound is simpler, because it's one-dimensional, as opposed to images. But I guess we've been interested in cameras for a long time.
(0:51:48) Youssef Benmokhtar
Well, you know, 30% of our brain activity is used by the visual cortex, right? So I'm not surprised that we were more attracted to video than to sound, to audio, sorry. But I think the next one is really touch. We are multimodal creatures; for humans to really do what they do best, they need all of their senses. And the next one that I think is ripe for digitization is tactile, touch and feel. I think it's going to open up a brand-new type of application, starting with robotics, but also some of the other activities that we've been doing at GelSight for years now, in understanding surfaces and features of surfaces.
(0:52:44) Audrow Nash
Do you want to mention some of the other applications that use this technology?
(0:52:47) Youssef Benmokhtar
Sure. At a really high level, we have found that we are a really nice fit whenever we encounter an application that has been using actual human touch. So, believe it or not, in aerospace, for example, there are still some companies using the fingernail test to decide if a scratch on a part is a problem or not a problem, basically meaning whether it needs to be reworked or tossed. Basically, you put your fingernail in the scratch and you feel how deep it is, like, "Well, yeah, that one I think is too deep; we need to do something about it." That's crazy. It is crazy, absolutely crazy. But for us, using our sensor, they're able to get repeatable, accurate, fast measurements, quantitative, not subjective, and you can make decisions right away that are the right decisions, every time. And our technology is the only one that allows you to do that, because vision is not very effective on metallic surfaces that are hard to take pictures of, highly reflective, and so on. But the beauty of our technology, because it touches the same way as human touch, is that it doesn't really care what the surface properties are, whether it's glass, metal, transparent, translucent, reflective; we don't care. We conform our sensor, with the gel, to any material; it doesn't really matter.

So this is where we have seen success, and our initial commercial efforts have been around providing measurement capabilities to companies that are interested in understanding specific features in their applications, like scratches and dents in aerospace, or measuring coatings in the paint shop in automotive. It's basically any time you have human touch being used, or where using a traditional camera system is just not going to work because of the type of material we're dealing with.
(0:55:05) Audrow Nash
Yeah, shiny ones. Exactly. Okay, yeah, it'll be cool if touch becomes digitized, and then we have a way of communicating it, and then maybe it enables all the haptic stuff that we were speaking about earlier. Correct. Really, really cool.
(0:55:21) Youssef Benmokhtar
Yes. I've tried many of the current haptics technologies, and it's just a very disappointing experience. It's basically vibrations you feel, regardless of what you're supposed to be touching in digital form. Past the first second of "oh wow, I'm feeling something," you actually just say, "yeah, but that's not how it's supposed to feel." And they have really no other way. All I've seen today is varying the frequency, or whatever technique they...
(0:55:56) Audrow Nash
Like with a pen or something, where you drag it across a surface and it kind of spoofs the texture by applying forces?
(0:56:04) Youssef Benmokhtar
For example, if it's a haptic glove, sometimes they have kind of vibrating membranes on your fingertip, and they change the vibration so you feel that you're touching something.
(0:56:17) Audrow Nash
It's a similar approach to how sound works, right? Because with digital sound, you represent different frequencies, and then you combine all those frequencies together at their respective amplitudes, and you get the appropriate sound. Kind of like that: they're emulating that, but in a touch space. And you're saying that at the current moment it's not very convincing.
(0:56:40) Youssef Benmokhtar
No, it doesn't have the resolution, and it's not actually matching what it's supposed to be.
(0:56:46) Audrow Nash
Are there any technologies that you know of in this space that are promising? Or maybe once we get the really good touch information, we'll use that to build better emulations of what things feel like, this kind of thing?
(0:57:04) Youssef Benmokhtar
I'm not aware of any. I mean, I'm aware of a lot of companies working on it, so I'm sure that progress will be made. It is a difficult problem; I haven't seen anybody who's cracked it yet. But I do hope that the digitization of tactile sensing will help haptics companies, because it will provide them with a library of how things should feel. And I think that's an important part of it.
(0:57:30) Audrow Nash
Oh yeah, that'd be nice. So I'm not very aware of much research in the haptics space, but a lot of it seems to be: can we emulate this texture, or can we classify this texture in a good enough way that we can possibly emulate it later, this kind of thing. And if you have really good information there, then... I mean, especially if I'm a researcher trying to do haptics things, and I go use one of your sensors and generate a whole library of what, say, a woodgrain feels like, then from there I can randomly sample, or, I don't know, build a model based on all that data I just took myself. Which is really awesome.
(0:58:08) Youssef Benmokhtar
Right. And one thing that I think might also help accelerate the adoption of digital touch is the use of this digital understanding of a surface for applications like, let's say, I'm a textile company, I've designed something in the US, and I'm having it made in Southeast Asia. One way to verify that what they've made is exactly what I want is to compare the digital signature of it. (Oh, that's super cool.) Right, which today they don't really have. I think the only way they really do it is by exchanging samples.
(0:58:54) Audrow Nash
Hmm, right. You're saying, like, if I want something from some company, and I say it has to be made to my specifications, and they say, "Oh, I've done it," you can say, "I've patted it down everywhere, and I see that the model is, I don't know, not within tolerance in this one dimension, or the wrong shape," or whatever it might be, something like that. So you can verify...
(0:59:14) Youssef Benmokhtar
You would verify the texture, exactly. Does the texture match what I designed? So I'm designing it here; I want the texture to feel this way. Here's a digital picture of what that texture should feel like; build it, and then measure the digital signature on your end and see if it matches.
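The verification workflow sketched in this exchange (scan the designed texture, scan the manufactured one, compare signatures) could look something like this toy version. The descriptor here, a normalized histogram of gradient magnitudes, is something I made up for illustration; a real system would use a far richer signature:

```python
import numpy as np

def texture_signature(height_map, bins=16):
    """Reduce a tactile height map to a compact signature:
    a normalized histogram of local gradient magnitudes."""
    gy, gx = np.gradient(height_map.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-9))
    return hist / hist.sum()

def signatures_match(sig_a, sig_b, tol=0.1):
    """Accept when the cosine similarity of two signatures is high."""
    cos = sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
    return bool(cos > 1.0 - tol)

rng = np.random.default_rng(1)
designed = rng.normal(size=(64, 64))                     # texture as designed
replica = designed + 0.01 * rng.normal(size=(64, 64))    # faithful copy
ramp = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64))  # wrong texture

ok = signatures_match(texture_signature(designed), texture_signature(replica))
bad = signatures_match(texture_signature(designed), texture_signature(ramp))
```

The same compare-against-a-reference-signature idea underlies the counterfeit-detection use case discussed a bit later: scan the item, compare against the signature of the genuine article.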
(0:59:32) Audrow Nash
It'd be cool if this was also used for counterfeits and things, like, say, for a high-end clothing brand. If you press your device on it, you can see the quality of the fiber versus a polyester version that looks similar, something like that, because I imagine you could look and see the differences.
(0:59:52) Youssef Benmokhtar
You're absolutely right. We actually had inquiries around exactly that, because, well, yeah, the thread count is different from potentially counterfeit material. Counterfeit shops are becoming extremely good; it's basically becoming very, very hard to tell real from fake. But one way is to get a digital signature of what it's supposed to be like, and compare against it. So it is part of our vision: if we could find a way to build a product, and the software behind it, to provide to all of the customs offices out there that want to verify, is this real or is this fake? They could basically scan the material, compare it to their database, and say, "Well, this is how the real one is." That is just a much more efficient way to do it.
(1:00:48) Audrow Nash
That's super cool. I'm also imagining applications in healthcare, this kind of thing, like, "Oh, I think I have a lump here." And you could press it into it and maybe get good 3D information about what you're seeing, and then what that means, this kind of thing.
(1:01:08) Youssef Benmokhtar
Absolutely. And again, in the history of GelSight, we've had inquiries for such applications. We haven't really pursued them at the time; for a small company, you have to choose where you want to spend your...
(1:01:26) Audrow Nash
...money. For you guys...?
(1:01:27) Youssef Benmokhtar
We're only, you know, we're 20 people, just about, today.
(1:01:32) Audrow Nash
And how is most of your money spent up until this point? You're doing DIGIT, which, if I understand correctly, is like the first fairly low-cost sensor that you guys are making. So low volume, high margins is how you've been operating? So extremely niche, probably assembled in-house, and this kind of thing?
(1:01:54) Youssef Benmokhtar
So I would say the chemistry type of work around the gel is done in-house today, but we have started to outsource some part of that process. And then the electronics assembly piece is outsourced, so we don't do that in-house. We've been preparing ourselves for scale, and you can't do that by just assembling in our office in Boston. So yeah, we've definitely already started to build that supply chain.
(1:02:25) Audrow Nash
Interesting. How do you imagine growth looking for you guys? Like, what do you think the next two years, five years, look like, this kind of thing? I know it's a projection of something that hasn't happened, but what do you think?
(1:02:40) Youssef Benmokhtar
Our focus in the next couple of years: we just launched our Series 2 metrology product, so we're putting our commercial efforts on the deployment of that solution, which we just launched this October. Then we're basically planning to push the commercialization of DIGIT, obviously, which is something that's important to us, and then this next-generation low-cost sensor that we'll hopefully have in the first half of next year. But the thing...
(1:03:13) Audrow Nash
...you were saying that would be more accurate, smaller, correct, this kind of thing. Okay.
(1:03:18) Youssef Benmokhtar
Right, because the thing is, we need to create an ecosystem around tactile sensing. And that's, again, the shared vision with Meta. And to do that, we need to invest in bringing products that are at that kind of cost point, ease of use, and form factor, and also invest in marketing to make it known. That's what we're basically focusing on for the next six months to a year: let's make tactile sensing available, so that everybody knows about it. Everybody who has any kind of curiosity about "what can I do with this?", we want to remove all the barriers, all the obstacles, and make it easy for them to get it.
(1:04:04) Audrow Nash
Yeah. So I'm seeing DIGIT... are you guys similar to, what was it, the Lepton, the FLIR thermal camera? I think it was FLIR, and it was the fairly low-cost thermal camera, like, I don't know, a few hundred dollars, where previously they were very expensive. So are you following kind of a similar business model, you think, at the moment, with DIGIT?
(1:04:35) Youssef Benmokhtar
Yeah, it actually makes me smile that you're mentioning the Lepton camera. I actually worked on it in my past life.
(1:04:46) Audrow Nash
I worked with it not on it.
(1:04:48) Youssef Benmokhtar
Yeah. So I think it is about building that community. We want to make it available, we want to make it affordable, but at the same time we need tactile sensing to be used in real applications by customers that give us that market credibility. Right, when I work with a company in aerospace that says, "using your technology saves me hundreds of thousands of dollars a year in reduced rework costs or maintenance costs," that is very important for GelSight, because it basically says tactile sensing is not only a long-term dream, a long-term vision; it actually has applications today. So we are continuing to work hard at having our sensors used in real applications. But at the same time, we want to create that community, which hopefully is going to be really large in the years to come, that will consider the use of tactile sensing in their future products. And to do that, we need to continue what we started with DIGIT.
(1:06:05) Audrow Nash
What kind of software push are you guys doing? I know that Meta AI is contributing a lot in the software space, it sounds like, but are you guys maybe making libraries that expose the data type and make it useful for a lot of common applications, a lot of hobbyists or makers, this kind of thing?
(1:06:28) Youssef Benmokhtar
So for, I would say, the low-cost part of our portfolio, we intend to follow the Meta example: make our libraries open source, make them easy to access.
(1:06:42) Audrow Nash
Does anything exist yet? Or it sounds like it doesn't yet
(1:06:45) Youssef Benmokhtar
We have a few already. Our 3D reconstruction technique is available for people to leverage on our sensors, and our pose estimation algorithms also. And we're thinking about other things; we have anti-slip, for example, for robotics, which could be interesting. (Yeah, I was going to ask about that.) Right, detecting that something is slipping from your grasp. So these are all things that we can make available for the community to use. We have a few, but we need to accelerate that, and it's part of our plans. So on the low-end side, I would say we're going to follow the open source model; we're going to try to make everything that we develop available for the community.
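Slip detection of the kind mentioned here is often framed as tracking how the contact patch moves between consecutive tactile frames. A toy sketch using phase correlation follows; this is my own illustration of the general idea, not GelSight's anti-slip algorithm, whose details aren't given in this conversation:

```python
import numpy as np

def frame_shift(prev, curr):
    """Estimate the (row, col) translation between two tactile frames
    by phase correlation, a crude proxy for marker tracking."""
    F = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    F /= np.abs(F) + 1e-12              # keep phase, discard magnitude
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (account for FFT wrap-around).
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, prev.shape)]
    return np.array(shifts)

def is_slipping(prev, curr, max_shift=1.5):
    """Flag slip when the contact patch translates more than max_shift px."""
    return np.linalg.norm(frame_shift(prev, curr)) > max_shift

# Toy frames: a bright contact blob that translates 3 px between frames.
frame_a = np.zeros((32, 32))
frame_a[10:14, 10:14] = 1.0
frame_b = np.roll(frame_a, shift=3, axis=1)   # patch slid sideways
```

Real implementations typically track the printed marker dots on the gel and look at the shear field, but the test "the contact moved more than it should have between frames" is the same idea.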
(1:07:40) Audrow Nash
And part of that is growing the community. Correct? Gotcha. So it's interesting that that's a pragmatic choice, which is quite cool to me, as someone who really likes open source, that it's kind of a pragmatic choice to go open. It's very...
(1:07:53) Youssef Benmokhtar
...pragmatic in the sense that that ecosystem today does not exist; it's very small. So for us to grow it, it would not be smart of GelSight to make it difficult by having a closed-source system, or to make it onerous for people. So open source is the way to go to create a community, and we are definitely embracing it. Now, on the higher-end type of product, we are continuing to develop applications, including our software suites that customers pay for, whether with the purchase of some of our systems or as applications, just like you do with enterprise software. But even there, we are planning to also invest in machine learning and AI to bring more services to our customers, in a future where, instead of just offering a tool that provides decision-making capability in the instant, there's also the ability to leverage the data that's coming out of the sensors over time. So as you're building a database of images in an enterprise environment, you do something with those images over time. We think machine learning and AI can help us bring more value to our customers, and that's another part that's going to be important for GelSight going forward.
(1:09:31) Audrow Nash
Do you mean keeping it for yourself or putting it out into the community? So you get a bunch of data, and you can train machine learning models on it, or whatever it might be. Will you release a public dataset, or will you keep it in-house?
(1:09:49) Youssef Benmokhtar
We will do both, I think. On the consumer-like market, we will likely publish some public datasets to help the community leverage them. But in the enterprise space, where you know the problem to solve, there is a way to monetize that dataset directly by building services.
(1:10:11) Audrow Nash
Do you think... I mean, so many questions around all these different things, because the software and open source side is really interesting. One of the things that really kicked off all the advancements in computer vision, as far as I understand, was the release of ImageNet. I think it's ImageNet; there might be some others. But it was a big dataset with benchmarks, effectively, so you can compare how well your algorithm does. You have a bunch of labeled images, and the task is to say what each one is. You run your algorithm on it, and your algorithm is right, say, 60% of the time, and the current state of the art is 70% of the time, and then you know how what you're doing compares. Do you think it's in your future to create anything like this for touch?
(1:11:03) Youssef Benmokhtar
I hope so, I really hope so. It would be awesome. It's a very ambitious goal for a company of our size today, but it is part of our story, and it is part of how we want the company to grow. We believe that is something that needs to happen, and we hope that we'll be the ones to build it.
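The ImageNet-style evaluation described above reduces to a fixed labeled test set plus an agreed metric. Here is a minimal sketch of the scoring side; the function names are hypothetical, since, as discussed, no such touch benchmark exists yet.

```python
def top1_accuracy(predictions, labels):
    """Fraction of samples where the predicted class equals the label,
    the headline metric ImageNet-style benchmarks report."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

def beats_baseline(predictions, labels, baseline_acc):
    """Comparing a new method to a published state of the art is then
    just comparing two numbers computed on identical data."""
    return top1_accuracy(predictions, labels) > baseline_acc
```

The hard part of such a benchmark is not the scoring code but collecting and labeling a large, diverse set of touch recordings that everyone agrees to evaluate on.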
(1:11:27) Audrow Nash
Yeah, if I remember correctly, ImageNet was done by Fei-Fei Li, and I think she was at Stanford, and they had received a big grant and then went and did it. Something like that; I may be wrong about the story or who was involved. But it'd be quite cool to see you guys do a touch library that would really accelerate this whole space. Are you doing anything... I mean, I'm at Open Robotics, so I'm wondering: do you have ROS support? Do you have any way to interface? Yeah, I should have mentioned that before. Actually, our sensors and our software and firmware are ROS compatible.
(1:12:14) Audrow Nash
So which ROS, by the way? ROS 1 or ROS 2? I don't know, that's a good question. I'll ask my robotics engineer. But it is absolutely something we intend to support, anything that we need to, again, to accelerate adoption. So ROS is definitely something that we're keeping an eye on and supporting.
(1:12:39) Audrow Nash
One thing that would be cool, as you are working on this: robotics is a big interest, and a lot of the robotics community uses ROS, and one thing that's very nice to have is something standardized in the ROS community. The way to do this is through a ROS Enhancement Proposal. It'd be very cool to create one of these for a data structure that's really good for touch. Though actually, maybe it's just an image, which is kind of nuts, or a 3D point cloud or something like this. But if it is a non-standard data type, creating one of these ROS Enhancement Proposals would be really cool, and then we'd have the de facto touch type for the ROS ecosystem, so other packages can use it and build off your standard. I'm making a note.
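For concreteness, a standardized touch type of the kind suggested here could be composed from existing ROS interface types. The message below is purely hypothetical, a sketch of what such a proposal might contain, not an existing ROS message:

```
# TactileImage.msg (hypothetical, sketched in ROS interface-definition syntax)
std_msgs/Header header          # timestamp and sensor frame id
sensor_msgs/Image raw           # raw camera image of the gel surface
sensor_msgs/Image depth         # reconstructed height map (e.g. 32FC1, meters)
geometry_msgs/Wrench net_force  # aggregate force/torque estimate at the pad
```

Reusing `sensor_msgs/Image` for the raw and depth channels would let existing ROS image tools (viewers, recorders, transport plugins) work with tactile data unchanged.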
(1:13:40) Audrow Nash
Awesome, that would be super cool. And I'm happy to see that you guys are already supporting ROS. I wonder if it's ROS 1. A lot of people are moving to ROS 2, which is quite cool. It's gaining steam.
(1:13:53) Youssef Benmokhtar
I can ask right now. I don't know if I'll get an answer right away, but...
(1:13:58) Audrow Nash
I mean, it doesn't matter too much. Yeah. Let's see, now I want to see how much time we have. We'd like another 15 minutes or so. The one thing I was really curious about with your sensor was vibration. As far as I understand, if you have a gripper with an object in it, and the object is starting to fall out of the gripper, it'll create vibrations, so you'll be able to sense vibrations or something to know the object is slipping, which you've mentioned. And I was wondering how this would work with the gel. Perhaps if the gel deformed and then slowly came back, it would be very hard to sense vibrations; or if it was very springy, it might be easy, or you might get a bunch of noise if it springs back too much. I don't know. Can you talk a bit about the slip of different objects across your sensors?
(1:14:59) Youssef Benmokhtar
Yeah. Really, what we do is we have come up with a way to measure the local forces at the sensor surface. And based on that, you basically...
(1:15:17) Audrow Nash
You just assume that it's normal to the deformation? The deformation of the surface gives you the force? Correct. So
(1:15:23) Youssef Benmokhtar
what we do, actually: imagine printing a dot matrix on the membrane of the sensor, right?
(1:15:30) Audrow Nash
I don't know what you mean by dot matrix.
(1:15:32) Youssef Benmokhtar
It's basically a matrix of black dots. Right? Okay. So
(1:15:36) Audrow Nash
that's what you mean, right? Yeah. I was thinking like the dot product, like in math. No, no.
(1:15:42) Youssef Benmokhtar
It is exactly what it sounds like: a bunch of dots. So now you have a camera, and every time you press on the gel, those dots' locations change based on how the surface is being touched. Based on that, I now actually have an idea of the local force vectors across the entire matrix. And as you're moving across the surface, I can also detect the changes in those local forces. So if your object is starting to slip a little bit, you basically see the vectors all going in one direction, with a relative intensity as well, and putting all of that together, you're able to detect that the object is actually starting to slip in some way. It's a pretty neat way, actually, to provide this additional information. And one of the key things, and you're much more of an expert than me in this field, but one of the difficulties, as I understand it, in trying to get force estimation in robotic arms and fingers is that the sensors that measure forces are a lot of times not at the tip; they're at the drives. So when you get force estimations...
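The slip cue described here, displacement vectors across the dot grid suddenly agreeing in direction and growing in magnitude, can be sketched numerically. The toy detector below assumes the dots have already been tracked between camera frames (the tracking itself is not shown); the names and thresholds are illustrative, not GelSight's implementation.

```python
import numpy as np

def slip_score(prev_dots, curr_dots):
    """prev_dots, curr_dots: (N, 2) tracked dot centers in pixels.
    Returns (mean_magnitude, coherence). Coherence near 1.0 means the
    displacement vectors all point the same way, the signature of
    incipient slip; coherence near 0 means the motion is unorganized."""
    d = curr_dots - prev_dots                    # per-dot displacement vectors
    mags = np.linalg.norm(d, axis=1)
    mean_mag = mags.mean()
    if mean_mag < 1e-9:
        return 0.0, 0.0
    # Coherence: length of the mean unit vector (1.0 = all aligned).
    units = d / (mags[:, None] + 1e-12)
    coherence = float(np.linalg.norm(units.mean(axis=0)))
    return float(mean_mag), coherence

def is_slipping(prev_dots, curr_dots, mag_thresh=0.5, coh_thresh=0.9):
    """Flag slip when the dots move appreciably AND in near-unison."""
    mag, coh = slip_score(prev_dots, curr_dots)
    return mag > mag_thresh and coh > coh_thresh
```

A plain press deforms the gel symmetrically, so the vectors point in many directions and coherence stays low; only a shearing, sliding contact drives both numbers up at once.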
(1:17:17) Audrow Nash
This is specifically a problem when you have a high gear ratio. If you have low gear ratios, a lot of times you can infer the force at the tip pretty accurately. But if you have a high gear ratio, there are all those gears in the way, and, I don't know if this is exactly right, but whatever it is makes it hard to do.
(1:17:34) Youssef Benmokhtar
Correct. But in our case, we actually can provide you with a local force estimation at the tip, which could also be very interesting, depending on the application.
(1:17:46) Audrow Nash
So for the local force estimation, at least with DIGIT, it probably can't... Like, I'm imagining I had a robot that was going to pick up something with a forklift, and I wanted to feel what's on the end of the forklift. I imagine you can't put a lot of weight on these sensors. I see you shaking your head. But is there any way to make them more robust? Because if I'm trying to grab something that's heavy, I probably can't use the sensors on the end to grab it, I would say. You saw me shaking my head there because, in the case of DIGIT, if you're putting very heavy weights on a hand equipped with DIGIT, I just think the mechanical housing would crack. It was just not meant for that. I don't think the issue is really the gel as much. But the key, again, as I said very early in our conversation, is that you can design the sensor for different applications. So if you're looking at an application where you need to apply a lot of force on the sensor, then you can design a gel formulation that allows you to do that. You can also design the thickness of the gel to be thick enough, or thin enough, to allow a lot of pressure to be applied. We actually have done that. We have a customer in forensics. Yeah, like CSI, absolutely. They basically use our technology to measure bullet casings and compare them to a central database, to provide, like, a biometric confirmation that a casing came from a particular gun, with 90-plus percent confidence. It's actually used in trials and in crime labs and things like that.
So that's pretty wild. But in that particular application, they use more of a benchtop-type system from us, and they can apply six to ten kilograms of force, very locally, in a super localized small area. We've done some tests on some other things where we go to 15 to 20 kilograms of force, again in a very, very tiny area. So I think our team could design things to be used in applications where there's a lot of weight applied to them. It just needs to make business sense, to be honest. I think that's really the main challenge, the main
(1:20:44) Audrow Nash
limiter. Yeah. Gotcha. What about a sensor without bezels? So you currently have edges around the gel pad, and I imagine for robotics it'd be really nice to not have those edges, because you're limited. If I'm trying to press into a hard surface, does the gel extrude a little bit past the bezels? It does, so you can? Gotcha. Yeah.
(1:21:11) Youssef Benmokhtar
Well, maybe I should... yeah, it should not interfere. But I understand your point. I think if you really want to mimic the human finger, you should not have any bezel at all, and you should actually have a curved sensor. That is probably hard to make. But conceptually it is not: the technology as I described it, having some kind of polymer-like material, some illumination, and a camera behind it, works as long as you can image that curved area and reconstruct the 3D surface of it. So I don't think there is, again...
(1:21:51) Audrow Nash
And you can just calibrate it pretty robustly, regardless of its shape? I would think so, then.
(1:21:58) Youssef Benmokhtar
Absolutely. It is a complex design, and it probably will take a lot of development to make something like this. But fundamentally I don't see any limitation from the technology; it would actually be feasible to do it with curved sensors.
(1:22:13) Audrow Nash
That would be cool. Yeah, I was just thinking maybe there'd be edge effects. I'm imagining something like a soft mattress, where it's easier to fall off the edges than the middle, because it gives more at the edges. And I was wondering if that would make it more difficult to sense accurately at the edges if you had no bezel. But perhaps it wouldn't matter, because you just calibrate what it looks like for deformations.
(1:22:42) Youssef Benmokhtar
Right, I don't think it would be an issue.
(1:22:45) Audrow Nash
Gotcha. Okay, so I would like to segue a little bit to talking about you. You've mentioned that you've been in tech for the last 25 years. And did you say you have been leading in tech for 25 years, or what did you say at the beginning?
(1:23:03) Youssef Benmokhtar
Yeah, I've been in leadership positions for 20-plus years. I spent half of my career in the semiconductor world, so I am still biased to being a chip guy. I worked at STMicroelectronics, in a fab, now closed, in Phoenix, Arizona; that's where I started my career after grad school, in operations management. I'm an industrial and systems engineer by trade.
(1:23:37) Audrow Nash
What was that first thing?
(1:23:39) Youssef Benmokhtar
Industrial and Systems Engineering. That's what I studied in undergrad and grad school. So I worked in Phoenix, and I worked in France, in semiconductors. When I left the semiconductor world, I was leading the 8-bit microcontroller business unit at ST Micro. Then I came back to the US to join a company called DigitalOptics Corporation, so I moved, like, from electrons to photons at the time. It was a company using wafer-level techniques, so still leveraging my semiconductor experience, but to build micro-optics: diffractive optics, waveguides, and very tiny reflective and refractive lenses.
(1:24:30) Audrow Nash
I'm imagining just tiny cameras.
(1:24:32) Youssef Benmokhtar
Very tiny cameras, very tiny cameras.
(1:24:35) Audrow Nash
Because I don't know what a lot of the words you just said mean. Yeah.
(1:24:39) Youssef Benmokhtar
So I did that for a while. That's actually where I met the FLIR guys that were developing the Lepton at the time. I can't really say much more than that, but I know a lot about Leptons. And we actually sold that activity to FLIR; that's public information. I decided to move to California at the time because, as the technology geek that I am, I've always felt: where else can you be but Silicon Valley? So after we sold the site in North Carolina, I moved to Silicon Valley. I worked for OmniVision for a little bit and launched their liquid crystal on silicon business. Then I joined some friends of mine at a company called FotoNation, which was a computer vision company with core expertise in face technologies. They'd been doing it since the mid-90s, I think, before many others: face detection and tracking, eventually recognition, and eye tracking, iris recognition, things like that. Then I got an opportunity to join Magic Leap, the mixed reality startup, where I spent almost five years as their VP of Business Development, which helped me get access to incredibly amazing technologies, because to build the kind of AR headset we were building, you're basically at the edge of innovation on every front. That was an amazing learning experience. Then I decided to take a break and spent a little bit of time with the family. But as I was doing that, I got introduced to GelSight, and really fell in love with the technology. It had a similar wow effect when I saw it for the first time as when I experienced Magic Leap in late 2015. And I started digging into it.
And I basically noticed that companies in every sector showed interest in GelSight, and it really intrigued me, it really piqued my curiosity: what should we really be able to do with this company? And after I met the founding team and the staff, I basically decided this was the right choice for me, to join the company and help it grow toward this vision of making the digitization of tactile sensing a reality. And more importantly, I think, bringing tactile intelligence to the world: basically bringing these two pieces, imaging-based digital tactile sensing plus AI, together as the way forward for the company. That, to me, is super exciting, and I've enjoyed every minute of it.
(1:27:54) Audrow Nash
Now, one thing that strikes me, hearing about your path, is that you kind of pivoted and kept learning and, I imagine, reinvented yourself at each stage. Can you just tell me a bit about that experience, your heuristics? How do you... yeah.
(1:28:13) Youssef Benmokhtar
That's a great question. I think what gets me excited is learning new things every day, and I need to have that component in my life to motivate me and help me get up in the morning. And that's what I've been doing. Even when I spent 13 years at ST Micro, I basically had maybe five or six different roles, because I kept moving and being promoted within the company. Because as soon as I kind of figured a role out, I started getting bored; I needed a new challenge. And the company was really great to me by offering many ways for me to advance, but it also made me learn. I always take it as a personal challenge to learn about new technology, and to envision the best that can happen with any technology that I'm getting the chance to work with. That's just who I am. I'm driven by this constant need to learn. So yeah, that's what drove me here.
(1:29:34) Audrow Nash
How do you find something that's exciting? Like, how did GelSight get on your radar, or augmented reality and the headsets you were working on? How do these get on your radar for you to consider switching your area and to keep learning in that direction?
(1:29:53) Youssef Benmokhtar
You know, I've been blessed over the years to build a network of people that, first of all, know me and know what I'm capable of doing, but that also know I need that new challenge. So I've had the opportunity... like, the Magic Leap opportunity came through a friend of mine I had worked with at the optics company; he said, this is for you, you need to come and please take a look at this. And that's what really happened. And I think with GelSight it was a bit of the same. By surrounding yourself with people that appreciate you for who you are and actually know you, when something really interesting is out there, you're the guy they think about: hey, you should take a look at this, it might be interesting to you. So that's what I've done.
(1:30:50) Audrow Nash
Gotcha. That's cool. It's interesting how the network can make you aware of different opportunities. You hear about everything through your connections, in a sense; they kind of filter the topics, and can also recommend: hey, this is super cool, I think you'll love this.
(1:31:08) Youssef Benmokhtar
And, you know, the other thing I would add is that my key motivation is that curiosity. So it's not necessarily about the short-term financial stuff. When I left Magic Leap, I had the opportunity to join other, much bigger companies that would offer me a package a startup can never afford. Even if we do really well, it might not even match what I could have done somewhere else. But it was not interesting. It felt like I would be one person amongst many, not necessarily working on the most exciting stuff, or being very limited in my scope.
(1:32:01) Audrow Nash
not interesting. Yeah. And that's not a good use of your time.
(1:32:05) Youssef Benmokhtar
I can't see myself being happy there. I think to be happy, I just need that challenge; I need to be learning something. And that's why I'm here. That's great.
(1:32:20) Audrow Nash
Let's see. All right. What advice do you have for someone, let's say, finishing university? Given the current climate of everything, what advice do you have for them?
(1:32:42) Youssef Benmokhtar
I would say, really work at identifying what gets you up in the morning and gets you excited about contributing to our world in a positive way. At the end of the day, that's what's going to matter in the long run. It's not the material things; it's really about: have you had fun, have you learned things, have you helped others learn, have you helped others grow? That, to me, is what young people, no, not just young people, everybody, should really think about. That's the advice I would give; that's the advice I give to my own daughters. You have the foundations, you've learned things, but that's just the beginning. Life is going to be about continuous learning. Do something where you can leverage what you've learned, but also where you never stop learning, and where you can then share your experience, share what you've learned with others, and help others get ahead. I find that to be very satisfying. The best thing that I hear once in a while is from old colleagues of mine, from many jobs ago, telling me: you remember that advice you gave me on such-and-such a day? I just wanted to let you know, I did it five years ago, and I feel it was the best decision ever. When I hear things like that, which I've been lucky to hear quite a few times in my life, that's what makes me happy. It's like, okay, I've done something good. So that's my advice.
(1:34:46) Audrow Nash
Let's see. And then one last question: are there any specific things you do, or habits, that you think have been very useful to you?
(1:35:07) Youssef Benmokhtar
I don't know if I have good examples there, because I think I would be scolded at home. I think it's spending time by yourself: just taking time for your own well-being, reflecting on what matters to you, enjoying this earth that we live on. I enjoy that very much. I'm sometimes too much of a loner, but I enjoy it. I enjoy taking the time to listen to the wind blowing through the leaves; out there in Boston it's really nice right now. Things like that help you stay grounded, I think, just enjoying the little things. That has helped me very much. The other thing is, I don't sleep much. So if you don't sleep much, you have time to do work.
(1:36:17) Audrow Nash
All right. Do you have any links or websites, contact information, anything you'd like to share with our listeners? Absolutely. I would encourage everybody to visit gelsight.com; we also have our contact page there. If you're interested in what we do, or if you just want to give us comments, please contact us. We would love to hear from the audience. We're here to basically help digital touch happen, so if you're interested in that topic, we'd love to hear from you.
(1:36:57) Audrow Nash
Awesome. Okay. It's been a pleasure.
(1:36:59) Youssef Benmokhtar
Thank you. Same here. Absolutely.
(1:37:02) Audrow Nash
Thanks for listening to this conversation with Youssef Benmokhtar. Thank you again to our founding sponsor, Open Robotics, and I hope to see you next time.