

Vision Processing for FPGA using MATLAB

Part 1: Vision Processing FPGA and ASIC Hardware Considerations

- Learn about FPGA Image Processing: http://bit.ly/2Xy3AUp
- Free self-guided tutorial: http://bit.ly/2XtIK8C
- Download a trial: http://bit.ly/2XudDKa

Computer vision applications in automated driving often require fast processing to condition the incoming image data. FPGA or ASIC hardware accelerates vision processing, but algorithms need to be adapted to run on hardware. Learn about the high-level architecture of this hardware fabric, the constraints that must be met for efficient implementation, and how Vision HDL Toolbox™ helps you make the transition from algorithm to hardware. The topics covered in this video include:

• What parts of an automated driving application are typically implemented in hardware versus software
• The difference between processing frame-based data and a stream of pixels
• FPGA and ASIC architectures and constraints
• Using line buffer memory to perform operations on a “region of interest” from a stream of pixels (see the sketch after this list)
• Why it’s important for video and image processing engineers to collaborate with hardware implementation engineers
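
The line-buffer bullet is the key hardware idea: pixels arrive one per clock cycle, so a kernel operation needs a few buffered lines before it can see a 2-D neighborhood. Below is a minimal MATLAB sketch, illustrative only and not Vision HDL Toolbox code, that models two line buffers assembling a 3x3 region of interest from a serialized pixel stream; the 8-pixel line width and the 3x3 averaging kernel are assumptions, and boundary padding is ignored.

% Illustrative model only: two line buffers turn a serial pixel stream into a
% 3x3 neighborhood so a kernel operation can run on streaming data.
W = 8;                                  % assumed active pixels per line
img = uint8(randi(255, 4, W));          % small test image
stream = reshape(img.', 1, []);         % serialize row by row, like a camera

lineBuf1 = zeros(1, W, 'uint8');        % most recently completed line
lineBuf2 = zeros(1, W, 'uint8');        % line before that
window   = zeros(3, 3, 'uint8');        % 3x3 region of interest
col = 0;

for p = stream                          % one pixel arrives per "clock cycle"
    col = col + 1;

    % Shift the window left and append a new column: the incoming pixel on the
    % bottom row, the two buffered pixels from earlier lines above it.
    window(:, 1:2) = window(:, 2:3);
    window(:, 3)   = [lineBuf2(col); lineBuf1(col); p];

    roiAverage = mean(window(:));       % example kernel operation on the ROI

    % Update the line buffers with the incoming pixel.
    lineBuf2(col) = lineBuf1(col);
    lineBuf1(col) = p;

    if col == W                         % wrap at the end of each line
        col = 0;
    end
end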




Part 2: From a Frame-Based Algorithm to a Pixel-Streaming Implementation

Vision processing algorithms are often written and tested in MATLAB®. Learn how to reuse that MATLAB work to test and verify the results of the Simulink hardware implementation. Details include:


• Using MATLAB, Automated Driving Toolbox™, and Computer Vision Toolbox™ to develop and test a lane detection algorithm
• Passing data between MATLAB and Simulink using workspace variables
• Converting the frame-based input to streaming pixels using the Frame-to-Pixels block, which automatically converts sample rates (see the sketch after this list)
• Setting up and running the Simulink hardware implementation from the MATLAB script
• Comparing the results from the hardware implementation versus the reference algorithm
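
Here is a minimal MATLAB sketch of that round trip, assuming Vision HDL Toolbox is installed. A simple threshold stands in for the lane detection algorithm; the pattern is what matters: serialize the frame into a pixel stream, process it pixel by pixel, reassemble it into a frame, and compare against the frame-based reference.

% Minimal sketch: frame -> pixel stream -> per-pixel operation -> frame,
% then compare the streaming result with the frame-based reference.
frameIn = uint8(randi(255, 240, 320));   % stand-in for a camera frame (240p)

frm2pix = visionhdl.FrameToPixels('NumComponents', 1, 'VideoFormat', '240p');
pix2frm = visionhdl.PixelsToFrame('NumComponents', 1, 'VideoFormat', '240p');

% Serialize the frame into a pixel stream plus control signals.
[pixels, ctrl] = frm2pix(frameIn);

% Pixel-streaming version of the operation (threshold as a placeholder).
pixelsOut = uint8(pixels > 128) * 255;

% Reassemble the processed stream into a frame.
[frameOut, validOut] = pix2frm(pixelsOut, ctrl);

% Frame-based MATLAB reference for comparison.
refOut = uint8(frameIn > 128) * 255;

if validOut && isequal(frameOut, refOut)
    disp('Streaming result matches the frame-based reference.');
end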



Part 3: Hardware Design of a Lane Detection Algorithm


Learn about hardware implementation techniques such as:


• Using system knowledge to reduce the amount of computation required in the hardware
• Designing custom control logic with a MATLAB® function block
• Computing averages from a stream of data using a rolling window (see the sketch after this list)
• Using a redundant “ping-pong” memory buffer to keep pace with the incoming data stream
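
As a small illustration of the rolling-window bullet, here is how such an average might be written in a MATLAB Function block style. This is a sketch rather than the code from the video; the function name and the window length of 8 are assumptions.

% Illustrative sketch: rolling-window average over a data stream, one sample
% per call, with a persistent circular buffer and a running sum so each step
% costs one add and one subtract.
function avg = rolling_average(newSample)
    N = 8;                                    % assumed window length
    persistent buf runSum idx
    if isempty(buf)
        buf = zeros(1, N);
        runSum = 0;
        idx = 0;
    end
    idx = mod(idx, N) + 1;                    % circular write pointer
    runSum = runSum + newSample - buf(idx);   % add newest sample, drop oldest
    buf(idx) = newSample;
    avg = runSum / N;                         % divide by a power of two
end

Keeping a running sum avoids re-adding the whole window every cycle, and with N a power of two the final divide reduces to a shift in hardware.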


Part 4: Targeting a Lane Detection Design to a Xilinx Zynq Device
Learn how to convert data types to fixed point and generate optimized HDL with AXI bus interfaces using the HDL Coder™ IP Core Generation Workflow. Details include:


• Visualizing and adjusting fixed-point data types
• Using the HDL Workflow Advisor to generate VHDL
• Setting up and using the IP Core Generation Workflow, mapping the inputs to AXI4-Stream Video and the outputs to AXI4-Lite interfaces
• Determining the required clock frequency for processing this video input format (see the sketch after this list)
• Generating VHDL and analyzing the results
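
For the clock-frequency bullet, the arithmetic is simple: the pixel clock must cover active pixels plus blanking. The sketch below uses standard 1080p60 timing (2200 total pixels per line, 1125 total lines) purely as an example; the video's input format may differ. It also shows an illustrative fixed-point type built with fi from Fixed-Point Designer, where the 18-bit word and 14-bit fraction lengths are assumptions, not the types chosen in the video.

% Required pixel clock: total pixels per frame (active + blanking) times the
% frame rate. 2200 x 1125 are the standard 1080p timing totals.
totalPixelsPerLine = 2200;
totalLines         = 1125;
frameRate          = 60;
requiredPixelClock = totalPixelsPerLine * totalLines * frameRate;
fprintf('Required pixel clock: %.1f MHz\n', requiredPixelClock / 1e6);   % 148.5 MHz

% Illustrative fixed-point type: signed, 18-bit word, 14 fraction bits.
coeff = fi(0.7321, 1, 18, 14);
disp(coeff)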

Part 5: Hardware-Software Prototyping of a Lane Detection Design
Vision processing algorithms are compute-intensive to simulate. Once a design has been verified as much as possible with simulation, prototyping on an FPGA development kit allows for real-time processing of live video input.


Download the Computer Vision System Toolbox™ Support Package for Xilinx® Zynq®-Based Hardware: https://goo.gl/cWcKRz


This example adds a hardware-software interface to the lane detection example and uses the Computer Vision System Toolbox™ Support Package for Xilinx® Zynq®-Based Hardware to efficiently build a working prototype.


Learn how to:


• Bring HDMI video input into Simulink®
• Design hardware-software interface control functionality (see the sketch after this list)
• Generate HDL and software drivers with AXI4 interfaces
• Deploy software to a connected Xilinx Zynq device
• Run a prototype in external mode
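
As a sketch of what the hardware-software interface control functionality might look like, the function below selects between the raw pixel and the lane-overlay pixel based on a register that the ARM software writes over AXI4-Lite. The names and the single-register scheme are assumptions for illustration, not code from the video.

% Illustrative only: register-driven output select, written as it might appear
% in a MATLAB Function block. modeReg is a software-writable register exposed
% over AXI4-Lite: 0 = pass raw video through, 1 = show the lane overlay.
function pixOut = output_select(pixIn, pixOverlay, modeReg)
    if modeReg == uint32(1)
        pixOut = pixOverlay;
    else
        pixOut = pixIn;
    end
end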

