Introducing the world’s fastest 3D structured light system to date
Experience event-based, ultra-fast 500Hz point cloud generation in a plug-and-play evaluation kit
Prophesee Metavision® EVK3D is the fastest structured light embedded system to date, by the inventors of the world’s most advanced neuromorphic vision systems.
This plug-and-play, ultra-fast structured light evaluation kit can output depth maps at the unparalleled speed of 500Hz while maintaining a <1.5% RMSE.
The evaluation kit features at its core the revolutionary IMX636HD Event-Based Vision sensor, co-developed by SONY and PROPHESEE, and a breakthrough addressable VCSEL array projector.
3D just entered a new dimension.
500Hz
Point cloud generation
<1.5%
RMSE @500Hz
IMX636
HD Event-based Vision sensor
3x3x4mm
Ultra-compact VCSEL illuminator
1,152
Individual laser points
Plug & Play
Out-of-the-box point cloud in universal format (.pcd .ply)
- 30mm baseline: 0.65m* max distance
- 50mm baseline: 1.0m* max distance
- 100mm baseline: 2.0m* max distance
*at 1.5% RMSE
LEADING EVENT-BASED VISION SENSOR
IMX636HD Event-based Vision sensor realized in collaboration between SONY and PROPHESEE.
WORLD’S FASTEST STRUCTURED LIGHT EMBEDDED SYSTEM
Enabled by the unprecedented combination of our event-based vision sensor, VCSEL fast-driving technology and a versatile Xilinx Zynq UltraScale+ FPGA.
COMPLETE SOFTWARE SUITE INCLUDED
Comes with the Metavision EVK3D Viewer and Explorer, which allow control and configuration of the projector, sensor and FPGA processing.
PLUG & PLAY EXPERIENCE
Comes pre-calibrated, with out-of-the-box point cloud generation straight from the EVK in universal point cloud formats (.pcd, .ply).
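As an illustration, the generated files can be consumed with standard point cloud tooling. Below is a minimal sketch using the third-party Open3D library, assuming a capture has been saved locally as capture.pcd:

```python
# Minimal sketch: load and display a point cloud saved by the EVK.
# Assumes Open3D is installed (pip install open3d) and a local capture file.
import open3d as o3d

pcd = o3d.io.read_point_cloud("capture.pcd")  # .ply files load the same way
print(f"{len(pcd.points)} points loaded")
o3d.visualization.draw_geometries([pcd])
```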
IMX636HD
Start evaluating the breakthrough stacked Event-based Vision sensor realized in collaboration between Sony and PROPHESEE.
This sensor was made possible by combining Sony’s CMOS image sensor technology with Prophesee’s unique Event-Based Metavision sensing technology.
KEY FEATURES
- Resolution: 1280 x 720 px
- Optical format: 1/2.5”
- Pixel latency: <100 μs
- Dynamic range: >86 dB* / >120 dB**
- Nominal contrast threshold: 25%
- Pixel size: 4.86 x 4.86 μm
- Embedded Event Signal Processing
* DR >86 dB (5 lux – 100,000 lux). 5 lux is the minimum light level guaranteeing inclusion of all possible operating points.
** DR >120 dB (low-light cut-off 0.08 lux – 100,000 lux). The low-light cut-off is the minimum light level guaranteeing nominal contrast sensitivity. For many typical applications, the sensor data remain actionable down to this light level. 100,000 lux is a virtual high-light limit, not reached in experiments.
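For reference, both figures are consistent with the standard illuminance-ratio definition of dynamic range (a sanity check on the footnotes above, not a vendor derivation):

```latex
\mathrm{DR} = 20\log_{10}\!\frac{L_{\max}}{L_{\min}}:\qquad
20\log_{10}\!\frac{100{,}000}{5} \approx 86\,\mathrm{dB},\qquad
20\log_{10}\!\frac{100{,}000}{0.08} \approx 122\,\mathrm{dB}.
```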
The integration takes advantage of Prophesee’s patented event-based Metavision® sensor which features pixels, each powered by its own embedded intelligent processing, allowing them to be activated independently.
This patented technology drastically improves the performance and efficiency of visual data acquisition.
Combined with a breakthrough 940nm multi-junction 1D-addressable VCSEL array, the system delivers unprecedented 3D point cloud generation at rates in the 500Hz range, with higher robustness to fast motion and challenging lighting conditions than state-of-the-art techniques.
Industrial
– Automated Guided Vehicle navigation
– Parcel size measurement
– Metrology
– Cobots Safety
XR
– SLAM
Mobile
– Room scanning
– 3D object scanning
WHAT'S IN THE BOX
Q&A
Why use an event-based sensor instead of a frame-based one for structured light?
The sensor's low latency allows it to detect ultra-fast light modulation, which enables time-coded structured light. As opposed to a pseudo-random spatial dot arrangement, ambiguity in the pattern is resolved through time encoding. This allows lightweight algorithms to detect and extract the light patterns. In addition, only the pixels that see the light pattern generate events, so bandwidth and processing power are spent only where needed.
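To make the idea concrete, here is a minimal sketch of time-coded decoding. It is illustrative only: the actual FPGA pipeline is proprietary, and the event format, cycle length and one-slot-per-point coding scheme are assumptions.

```python
# Illustrative time-coded structured light decoding (hypothetical scheme).
# Assumes events arrive as (x, y, t_us) tuples and that each of the 1,152
# laser points is pulsed in its own time slot of a repeating projection cycle.

CYCLE_US = 2_000            # one projection cycle at 500Hz = 2000 µs
SLOT_US = CYCLE_US / 1_152  # hypothetical per-point time slot

def decode_point_id(t_us: float) -> int:
    """Map an event timestamp to the laser point active in that time slot."""
    return int((t_us % CYCLE_US) // SLOT_US)

def match_events(events):
    """Group pixel coordinates by decoded laser point ID for triangulation."""
    matches: dict[int, list[tuple[int, int]]] = {}
    for x, y, t_us in events:
        matches.setdefault(decode_point_id(t_us), []).append((x, y))
    return matches
```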
How does the EVK3D compare to a ToF (time-of-flight) system?
The EVK3D computes depth information based on a triangulation principle rather than time of flight: the depth measurement error increases with the square of the distance and is inversely proportional to the baseline.
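This follows from the standard triangulation error relation, written here with generic symbols (distance z, baseline b, focal length f in pixels, dot localization error σ_d) that are not taken from Prophesee documentation:

```latex
\sigma_z \approx \frac{z^{2}}{b\,f}\,\sigma_d
\qquad\Longrightarrow\qquad
z_{\max} = \frac{e\,b\,f}{\sigma_d}\ \text{ at a fixed relative error } e=\sigma_z/z .
```

Hence, at a given relative RMSE, the usable range grows roughly linearly with the baseline, which is why the three baseline variants offer different maximum distances.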
What is the depth range of the device?
The range limits come from two different considerations:
– Depth accuracy range: depending on the baseline chosen, the same relative depth accuracy is reached at different maximum distances; our system reaches 1.5% RMSE at 0.65m, 1.0m and 2.0m respectively for the three variants (see the sketch after this list).
– Laser detection range: depending on the lighting conditions, the distance at which the emitted light can be detected by the sensor changes; in bright sunlight the range is reduced.
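A quick back-of-the-envelope check of the linear baseline scaling against the listed figures (our own approximation, not Prophesee's characterization method):

```python
# Hedged sanity check: at a fixed relative RMSE, triangulation range scales
# roughly linearly with baseline. Anchor on the 30mm variant's 0.65m figure.
BASE_MM, BASE_RANGE_M = 30, 0.65

for baseline_mm in (30, 50, 100):
    z_max = BASE_RANGE_M * baseline_mm / BASE_MM
    print(f"{baseline_mm}mm baseline -> ~{z_max:.2f}m at 1.5% RMSE")
# Prints ~0.65, ~1.08, ~2.17m, close to the listed 0.65 / 1.0 / 2.0m.
```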
Why is there a minimum distance?
The minimum distance comes from the algorithm implemented in the FPGA pipeline: the disambiguation process works within a certain distance range, and closer than this, some projected points will be wrongly disambiguated and produce erroneous points.
Can I change the triangulation algorithm myself?
The algorithm running in the FPGA fabric is one implementation developed by Prophesee, but we are open to helping our customers develop their own version.
Why use VCSEL technology?
VCSEL technology is one of the available choices for a structured light illuminator; it enables small projector footprints and high industrial volumes.
Can I stream events with an EVK3D?
In its current form, the EVK3D is meant to evaluate the generated point cloud data, but we are open to helping customers gain access to lower levels of processing.
What is the laser class of the EVK3D?
The EVK3D is a Class 1 laser device. Its light emission reaches less than 2% of the AEL (Accessible Emission Limit) defined in IEC 60825-1:2014.