Novel LiDAR sensing technology: results from new tests and road trials


All right. So, thank
you very much everybody for being here. Thank you for the great introduction. So, I’m actually here to talk about a few things about Lidar, I’m following a great presentation by the way, so we’ll look a little bit about sensor technologies and how we’re going to go from ADAS to the autonomous cars and then the evolution of Lidar – because that’s where my background kind of comes from – and then Lidars in the context
of the NCAP requirements, which are very topical for the mass
market. But before beginning, a quick introduction about LeddarTech. Basically, we developed a proprietary technology
we call LEDDAR; it's protected by 54 patents. We have extensive expertise in Lidar development, Lidar sensors and the applications that go around them. We have partnerships with industry-leading global companies, one of which is Valeo, who most of you probably know. And our sensors have accumulated over the years more than 20 million
hours of operation in 24/7 outdoor environments, mostly from our background in the transportation industry, so similar conditions to automotive, but on the side of the road instead of inside the cars. The realities of the environment are pretty much the same, though, and we've been in the automotive industry since 2011. So, autonomous cars have been a long-time
dream, we’ve seen pictures starting in the 50s talking about autonomous cars, they were also talking about flying cars but at least one of these two
dreams is slowly becoming a reality. So, most of you have heard about all
of these great projects, the most recent of which is Uber's pilot project in Pittsburgh. And of course, cars aren't completely
autonomous right now, these are pilot projects, so you still have two engineers monitoring everything in there but we’re getting very very close to autonomous cars. Of course, this is viable as we heard in the previous presentation in taxi or ride-sharing applications,
where the price of the car doesn't matter as much and the looks of the car don't matter; so, if you have big sensors, expensive sensors, it doesn't really matter, it's a great first
industry for the autonomous cars. And on the consumer market, should I say, for everyday cars, Lidar and sensors in general in these systems will have to become smaller and cheaper before they go to mass market. And it’s a consensus in the industry, I think, that Lidar
will be part of this future. We've been hearing in the past years a lot about Radar and cameras but a lot of people right now are agreeing on the fact that Lidars will have to
be part of the equation if we want to have a fully autonomous system and good autonomous cars and driver assistance systems. So, I have here a small table comparing the pros and cons of each of the sensors. As you can see, Lidar overcomes some of the weaknesses of each of these technologies, so
it’s a great third addition to this. Of course, Lidar on its own won’t be
able to do everything, the future is really in the fusion of multiple sources of information but we think that Lidar will be one of them. So, today, the
Lidar providers – we’ve met the Lidar providers here and you’ll have basically three or four categories of Lidar is
in there. Number one are the scanning Lidars, and as we heard in the past
presentation, they’re basically more expensive, quite big but they provide
a lot of information for mapping and people are developing these algorithms and these big sensors and they're very useful for that but of course for mass production, it's more of a challenge, the sensors will have to get smaller, and there are also some reliability issues
with the scanning movements. Then you have the solid state Lidars; some of them are built on some older technologies that performed okay but had very limited range, very limited performances. Then you have some others in there
that have very promising, revolutionary technologies but have yet to be proven; they make a lot of promises but we have yet to see anything coming from them. And then there's
also LEDDAR, which is basically a new generation of solid-state time-of-flight sensors, solid-state Lidars which are ready for mass production and the mass market right now through Valeo, who has the first generation of LEDDAR, and we'll see the roadmaps going forward with that. And before we go on, I just want to give a small explanation of what LEDDAR is and why it's so good. I won't go into too much technical detail
but if you want to come see me at the booth upstairs at the exhibition, I'll be glad to explain a little bit more about it. But basically, LEDDAR relies on digital signal processing of the signals coming from the photodetectors, so instead of relying on simple circuits to do the distance measurements,
we fully digitize everything, filter the waveforms and then use multiple
consecutive waveforms to crunch them together and then do
full waveform analysis (a minimal code sketch of this accumulation idea follows after these benefits). This gives us three main benefits. Number one, higher sensitivity: we found that we have a higher sensitivity than any other optical time-of-flight technology, which in turn gives either an increased range and/or lower power consumption and/or lower cost, plus the ability to use a fixed, diffuse illumination instead of having to focus the beam on a small area and then scan it around. Then, we have immunity to noise, and what I mean by 'noise'
could be either direct sunlight; so, we're actually able to have sensors that perform well outdoors with a wide field of view without the use of any optical filters. The processing simply filters out the direct sunlight and it doesn't have any significant impact on the sensors' performance. Noise could also be rain, fog, any kind of harsh weather conditions, small particles that are in the air; these are also filtered out by our algorithms and they don't affect the performance or the quality of the measurements themselves as much as they do for other Lidars. There's also
no performance effect from having multiple sensors facing each other. So, as mentioned previously, when you're in a car in a traffic jam, there's going to be a lot of sensors, a lot of cars, each with their own Lidars pointing at each other, looking in the same direction, and LEDDAR technology is actually immune to that: we're completely immune to other light sources or other sensors, they're discarded as white noise basically and we're sensitive only to our own pulses. And finally, we have a few signal processing features that we've regrouped into a single benefit, which allows us to get a very high dynamic range, so in the same frame, we can detect very small signals and very high signals with
very good accuracies. So, even if you have a very reflective target next to
a very dark small target, we’ll still be able to detect both of these, we handle saturation very well so there’s no problem there and we can also do
multi-echo detection within the same pixel, down to 15 centimetres of separation.
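To make the idea of accumulating consecutive waveforms and then doing full waveform analysis a bit more concrete, here is a minimal sketch in Python. This is not LeddarTech's actual signal chain: the function names, the number of accumulated acquisitions, the sample rate and the detection threshold are all illustrative assumptions, and a real implementation would add matched filtering, calibration and saturation handling.

```python
import numpy as np

C = 299_792_458.0  # speed of light, in m/s

def accumulate_waveforms(waveforms):
    """Average many consecutive digitized return waveforms.

    Uncorrelated noise (sunlight, other emitters' pulses) averages out,
    while echoes from our own pulses add up coherently, which is what
    boosts sensitivity and therefore range.
    """
    return np.mean(np.asarray(waveforms, dtype=float), axis=0)

def detect_echoes(waveform, sample_rate_hz, threshold):
    """Very naive full-waveform analysis: every local peak above a
    threshold is reported as an echo, converted to a distance in metres."""
    echoes = []
    for i in range(1, len(waveform) - 1):
        if waveform[i] > threshold and waveform[i - 1] < waveform[i] >= waveform[i + 1]:
            round_trip_s = i / sample_rate_hz
            echoes.append(round_trip_s * C / 2.0)
    return echoes

# Illustrative use: 64 accumulated acquisitions from a 100 MS/s digitizer.
acquisitions = [np.random.normal(0.0, 0.2, 2048) for _ in range(64)]
averaged = accumulate_waveforms(acquisitions)
print(detect_echoes(averaged, sample_rate_hz=100e6, threshold=0.5))
```

Averaging N acquisitions reduces uncorrelated noise by roughly a factor of the square root of N, which is one way to read the claims above about immunity to sunlight and to other sensors' pulses, and the peak list naturally supports several echoes per channel.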
And now, on to the evolution of LEDDAR towards autonomous driving. We feel that Lidar systems in general, and LEDDAR more specifically, will be able to address a lot of different applications very quickly, ranging from standalone ADAS basic functions to higher-resolution autonomous applications, as well as sensor fusion with the other sensors. So, here you have our roadmap for
the next few years, just a quick word about our business model. Instead of
doing like the Velodynes and the Quanergys of this world and providing a single product, what we want to do is bring a chipset to the market that the Tier 1s can adopt, using any of our reference designs to build their own
implementations of these Lidars, they will also be able to build their own
implementations from scratch if they want, meaning that they can create very different solutions, not only from one another but also going from the simpler Level 1 applications up to the Level 3 and Level 4 applications where you need
more resolution based on the same technology, which is a little bit like what they’re doing right now with the Radar when you think about it; they all use the same technology or a technology with similar principles of operation
and they each build their own differentiated
solutions. So, we've announced three generations of chips; the first
one is in conjunction with Valeo with 16 channels and then we’ll have 32 and 64 solutions with higher resolution. Here’s the implementation that Valeo
is doing and that you'll see in our current generation of products, so it's a very simple 16-pixel device. It has an array of 16 PIN photodiodes, all illuminated at the same time. There's a single light source that pulses the light, and 16 individual measurements (or more, if we have multiple echoes in the same channel) are taken. So, this is very good for Level 1 and Level 2 NCAP AEB-type applications; it will work very well there. Valeo will, for example, have a range of about 50 meters for pedestrians with their solution. Then, on to the next generations: we're looking at a 1 by 32 flash Lidar, so the same principle as before but with more channels and higher resolution; with a 45-degree field of view and a simple laser diode, we will have 40 meters of range with pedestrians and 180 meters of range with vehicles.
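As a rough illustration of how one frame from such a flash configuration can be interpreted, the sketch below turns per-channel round-trip times into distances and assigns each channel an angle inside the field of view. The frame layout, the field of view and the channel count are assumptions made for the example, not the actual Valeo or LeddarCore data format.

```python
from dataclasses import dataclass

C = 299_792_458.0  # speed of light, in m/s

@dataclass
class Echo:
    channel: int        # which of the N photodiodes saw the echo
    azimuth_deg: float  # approximate direction of that channel
    distance_m: float   # range derived from the round-trip time
    amplitude: float    # echo strength, in relative units

def frame_to_echoes(frame, fov_deg, n_channels):
    """frame: list of (channel, round_trip_time_s, amplitude) tuples.
    One flash pulse lights the whole field of view at once, and each
    channel may report one or several echoes (multi-echo)."""
    step = fov_deg / n_channels
    echoes = []
    for channel, t, amp in frame:
        azimuth = -fov_deg / 2.0 + (channel + 0.5) * step
        echoes.append(Echo(channel, azimuth, t * C / 2.0, amp))
    return echoes

# Hypothetical 1-by-16, 45-degree frame: channel 7 returns two echoes
# (say, a pedestrian in front of a wall), channel 3 returns one.
frame = [(7, 100e-9, 0.9), (7, 180e-9, 0.4), (3, 300e-9, 0.2)]
for echo in frame_to_echoes(frame, fov_deg=45.0, n_channels=16):
    print(echo)
```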
That 1 by 32 configuration is, of course, only one of the possible implementations; different laser diodes and different parameters here could change and create different solutions with different specs. Then we get on to higher-resolution
devices. With the same chip, we’ll be able to do a 32 by 8, so the full 3D
Lidar with the same range as the previous solution and you have the form factor of what it will look like in here. Then here’s a small example of the
prototype one of our partners built using multiple sensors, and what the visualization looks like with these prototypes. Then, on the next generation, same principle, 1 by 64, but much
more range on this side, so the LCA 3 will have much higher sensitivity and
therefore, a much higher range of 120 meters on 45 degrees for the pedestrians with this solution. And just like
with the previous chip, we'll have a 3D flash Lidar solution using multiple emitter sequencing. This one, once again, has 120 meters of range with pedestrians over a 45-degree by 30-degree field of view. And finally, this chip will also support higher-resolution MEMS micro-mirror scanning, so that we can get a high resolution of less than 0.25 degrees on both axes over 60 by 30 or 60 by 20 degrees; it could be doubled up to get 120 degrees in the same location on the car. This will address Level 3 and Level 4 applications and move us a step closer towards full automation.
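To put the 0.25-degree figure into perspective, here is a quick back-of-the-envelope count of how many points such a scan pattern would produce per frame, assuming an idealized uniform grid at that resolution:

```python
def points_per_frame(h_fov_deg, v_fov_deg, resolution_deg):
    """Number of samples in one frame for a uniform angular grid."""
    return int(h_fov_deg / resolution_deg) * int(v_fov_deg / resolution_deg)

# 60 x 30 degrees at 0.25 degrees: 240 x 120 = 28,800 points per frame.
print(points_per_frame(60, 30, 0.25))
# Doubled up to 120 degrees horizontally: 480 x 120 = 57,600 points.
print(points_per_frame(120, 30, 0.25))
```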
These are simply a few examples of the possible implementations of these technologies; these are some concepts that we came up with, but the future implementations are limitless: a lot of people could do a lot of different things with the same powerful technology. But if we come back a little bit to the reality of today, the NCAP requirements have been updated lately to include the AEB requirements. There was also an announcement made
by the NHTSA very recently that says that all vehicles will be required to
have AEB functionalities by 2022. So, we know that the AEB functionalities
are what’s going to drive the market for the next few years, that’s what
is going to be produced in very high quantities very very soon.
As a comparison, in 2015, only 1% of the vehicles had this functionality as standard and 25% as an option, which means that the potential for growth is extremely high. So, a lot of different sensors can do these functionalities. We have the stereo cameras, but they're a bit sensitive
to weather and lighting conditions. The long-range radars are kind of too narrow to see the pedestrians at short range when they're coming from the sides. The short-range and medium-range radars are pretty good, but they usually have low distance resolution, low accuracy, and they're a little bit more expensive than the LEDDAR. And of course, the high-resolution scanning Lidars are a little bit overkill for this
kind of application; they're way too expensive. And from what we've gathered, the ideal cost to an OEM for a sensor like this would be around 50 USD, which is what Valeo is promoting for their
solid-state LEDDAR-based Lidar. So, we feel that LEDDAR will be the best enabler for all of these AEB applications in the very near future. So, we’ve
actually taken to the road one of our sensors; we've actually taken an equivalent of this solution that you see right here using our current discrete
implementation. So, it’s larger than what you see there but the functionality is equivalent. We took that on the road and had them perform the NCAP test. So, a first NCAP test is with the back of a car, you want to make sure that the car will brake when it sees either a stopped car in front of it or a car that’s going slower or a car that’s going to be braking in front of the vehicle. So, we have a video here, could you click on the play button on the PowerPoint please? All right, looks like the video won’t work today but we’ll
have them at our booth if you want to come and see them. We’ve performed
these tests very recently, so we see in the video the data from the LEDDAR and we clearly see that the LEDDAR will detect the car before the
vehicle has to brake, thus preventing the impact. The other tests in the NCAP requirements are the pedestrian tests, with both running and walking pedestrians, at speeds up to 60 kilometres an hour. We've also performed these tests; come by our booth to see them. We basically see the adult detected at 40 metres away,
which is plenty of time, even at 60 kilometres an hour, to brake. We've also done the test with a small child coming out from behind an obstacle and once again, the child was detected before the car had to brake, and thus LEDDAR would be able to prevent the collision in that case.
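To see why 40 metres of pedestrian detection range is "plenty of time" at 60 kilometres an hour, here is a simple stopping-distance sanity check. The system latency and deceleration values are generic textbook assumptions, not measured figures from these tests:

```python
def stopping_distance_m(speed_kmh, reaction_time_s=0.3, decel_ms2=8.0):
    """Distance covered during the system reaction time, plus the
    braking distance v^2 / (2a) for an automatic emergency brake."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_time_s + v ** 2 / (2.0 * decel_ms2)

# At 60 km/h, with ~0.3 s of system latency and 8 m/s^2 of braking,
# the car stops in roughly 22 m, well inside the 40 m detection range
# quoted for the adult pedestrian test.
print(round(stopping_distance_m(60.0), 1))
```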
We've also made more road tests with the 16-channel solution. On the right, you'll see the final production sensors and, on the left, the prototypes that we used; they're based on our discrete implementation until the ASICs arrive, so our sensors are a little bit larger, but it doesn't prevent us from doing any tests. So, we made
a 20-degree sensor that would see pedestrians at about 60 meters and cars at 150 metres, and then a 90-degree sensor that would see pedestrians at 12 meters and cars at 45 meters, so this one would be more suited to blind-spot monitoring, for example, or a basic short-range cross-traffic ADAS in that case. And then the longer-range 20-degree one would be for front and rear collision. So, we had a video there, I guess it's not going to be working. One thing that's interesting in here is to really note how you can see multiple objects even if they overlap
on the same segments. This is a very nice environment, it’s pretty complex
but we can clearly see the car over here and then these big pillars will come over here and we see most of them or well actually all of them that are
in the field of view. So, even though it's somewhat lower resolution than what a Velodyne does, it still gives you
a lot of information about what's going on around it. We also set up the 95-degree sensor on the front of the car to do the oncoming traffic alert and the blind spot detection; you'll be able to see these videos at our booth. This is the 20-degree sensor running in foggy, rainy winter Quebec City, the typical Quebec City environment, and it detected the cars beyond 100 meters very easily, even in these conditions. So, of course, beyond just the sensors, the AEB
systems have to do much more than just have a sensor and look at its data; the sensors are only the first step, or one of the pieces of the puzzle, because, as you probably know, not all of the objects that are detected by the sensors will pose a risk of collision and therefore, not all of them should trigger a braking signal. So, some object tracking algorithms have to be added beyond that, and trajectory prediction has to be added on top of all of this data
to estimate if there’s going to be a collision or not with the object. This is something that typically is not
included in the LEDDAR sensors; it would be done by our Tier 1 customers, but we have a lot of expertise in doing so, so we can assist our customers in developing these kinds of applications and these kinds of systems. We can even provide algorithms, because we have experience and have developed some of these in the past. So, there are basically four main steps in the decision-making process when you want to decide whether your car should brake or not (a minimal sketch of this pipeline follows after this walkthrough). Number one is to look at the data from the sensor and segment the data points into different groups, either by proximity, by similarity
or by using edge detection algorithms, but basically separating them object by object. Then we want to do this over consecutive frames in time and do associations over time of these objects in each of the consecutive frames
to see where they are located at each point in time, which will give us an estimation of their speed, their direction and then we can do some trajectory estimation that runs on our algorithms there. Typically, we’ll use predictive filters like Kalman filters to improve the accuracy of these algorithms, they will also give us an estimation of the confidence level of the trajectory estimation, so the decision-making
can be a little bit more in-depth. And finally, we will have to make the
decision; so, is the object going to collide with the car or not? If yes,
do I have to brake now, can I wait a little bit, what’s the confidence factor on that? The final decision on the
AEB system is made in this final step, after all of that has been processed.
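Here is a minimal Python sketch of those four steps, purely to make the flow concrete. The proximity-based clustering, the constant-velocity estimate standing in for a proper Kalman filter, and the time-to-collision threshold are all simplifying assumptions; a production AEB stack from a Tier 1 would be considerably more elaborate.

```python
def segment(points, gap_m=1.0):
    """Step 1: group range detections into objects by proximity.
    Each point is a dict with 'x' (metres ahead of the car) and 'y' (lateral)."""
    clusters, current = [], []
    for p in sorted(points, key=lambda p: p["x"]):
        if current and p["x"] - current[-1]["x"] > gap_m:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    # One object per cluster: its centroid.
    return [{"x": sum(p["x"] for p in c) / len(c),
             "y": sum(p["y"] for p in c) / len(c)} for c in clusters]

def associate_and_estimate(prev_objects, objects, dt):
    """Steps 2-3: nearest-neighbour association between consecutive frames,
    then a constant-velocity estimate (a stand-in for a real Kalman filter)."""
    tracks = []
    for obj in objects:
        prev = min(prev_objects, default=None,
                   key=lambda q: (q["x"] - obj["x"]) ** 2 + (q["y"] - obj["y"]) ** 2)
        vx = (obj["x"] - prev["x"]) / dt if prev else 0.0
        vy = (obj["y"] - prev["y"]) / dt if prev else 0.0
        tracks.append({**obj, "vx": vx, "vy": vy})
    return tracks

def should_brake(tracks, ttc_threshold_s=1.5, lane_half_width_m=1.5):
    """Step 4: brake if an object in our path is closing in and its
    time-to-collision drops below the threshold."""
    for t in tracks:
        closing = t["vx"] < 0.0 and t["x"] > 0.0
        in_path = abs(t["y"]) < lane_half_width_m
        if closing and in_path and t["x"] / -t["vx"] < ttc_threshold_s:
            return True
    return False

# Illustrative use: the same object seen in two frames 0.1 s apart,
# moving from about 22 m to about 20 m ahead, i.e. closing at ~20 m/s.
prev = segment([{"x": 22.0, "y": 0.2}, {"x": 22.4, "y": -0.1}])
curr = segment([{"x": 20.0, "y": 0.2}, {"x": 20.4, "y": -0.1}])
print(should_brake(associate_and_estimate(prev, curr, dt=0.1)))  # True
```

In a real system, the covariance tracked by the Kalman filter would also provide the confidence level on the trajectory estimate mentioned above, so the final braking decision can weigh how certain the prediction is.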
So, that's all I have for you today. The key takeaways, basically, that I would like you to leave with today are that solid-state Lidars are a key element of ADAS and autonomous driving roadmaps
for mass-market vehicles; even though we see the big Velodyne rotating Lidars on the autonomous taxis today, that will
probably not be the case when the autonomous
consumer cars come to market. That LEDDAR technology is a highly
optimized, automotive-grade Lidar, and that next-gen low-cost Lidars are able to meet the NCAP requirements. As of today, we have LeddarCore chips coming with DSP libraries that will be able to address all of these functionalities, from simple ADAS to high-density 3D point clouds, and close collaboration between everybody in the industry in creating an ecosystem will be the key, not only for the multiple types of sensors, creating the sensor fusion and having people create very reliable tracking algorithms, but also on the Lidar side. We will be making sure that, in conjunction with our chips, various partners offer different parts that are compatible with this technology, so our customers will be able to choose
between different photodetectors, different emitters, different emitter scanning principles if needed, and different processors if they want to do further
processing on the external side and basically, oversee and develop all this ecosystem. So, thank you very much. Moderator: Thank you very much, that was very interesting.