Processing video frames requires a large amount of computing power, so a general-purpose processor may not be fast enough. A common solution is to use an FPGA, which allows parallel processing modules to be implemented and the video data to be handled much faster than on a processor.
Introduction
The main topic of this article is how to process video on an FPGA using VHDL as the hardware description language. The target board is the Digilent ZYBO, based on the Xilinx Zynq-7010 SoC.
The goal is to create a small, functional, documented project with some application examples, to which other users or students can easily add their own VHDL code in Vivado to build a final video processing application.
The appendix contains a short tutorial explaining how to use Vivado from scratch up to the creation of the video processing application.
Video processing on the Zybo board
The Zybo board has one HDMI and one VGA port. Each of these video connectors can be used either as a sink or as a source, in other words as an input or an output. In this project the HDMI port is used as the input, because almost all consumer photo and action cameras have an HDMI output that can conveniently be used for this purpose.
Other Digilent development boards have different sink/source video connectors. For example, the Nexys Video (Artix-7 FPGA) and the Arty Z7 have two HDMI ports, one always configured as an input and the other as an output.
HDMI → video input
VGA → video output
In this project only the FPGA part of the Zynq SoC is used, since the aim of this work is to develop a VHDL design. Using the FPGA also allows a much higher data throughput than the microprocessor, and therefore a higher video processing speed.
The processing system (PS) of the Zynq chip could also be used to process the video in software. The most important pros and cons are summarized in the following table:
| FPGA processing with VHDL | Software processing with the ARM processor |
| Low abstraction level video processing | Low, mid or high abstraction level video processing |
| High speed, parallel computing; high frame rates and resolutions possible | Processing speed limited by the processor, possible bottleneck; low frame rates and resolutions |
The IPs (Intellectual Property blocks) used in this project are provided by Digilent, and the latest versions can be found in the official GitHub repository. These IPs encode and decode different video signal protocols. In this case, a conversion from HDMI to a raw RGB video signal is needed at the input, and a conversion from the raw RGB video signal to VGA at the output port. These blocks could of course be written in VHDL by the user, but it would take a lot of time, and there is no need to reinvent the wheel.
The reference clock required by the HDMI-to-RGB converter block is 200 MHz (datasheet [1], page 5). This clock is provided by the processing system.
Any camera with an HDMI output can be used; nowadays almost every camera has one. This code was tested with an action camera and a 'normal' Sony Cybershot camera.
Vivado block design
The block diagram shown in Figure 3 was created to interconnect the different modules. The following main IPs were inserted:
- Dvi2rgb: converts the HDMI video input into 24-bit RGB video. From the Digilent library [2]. When the HDMI port operates as a sink, the DDC function is needed so that the connected source can read out the characteristics of the device [3].
- Rgb2vga: converts the raw RGB video signal into a VGA output. From the Digilent library [2].
- Processing_system7: a special block containing the configuration of the Zynq processing system. In this application only its clock output is used.
- VideoProcessing: a user-defined RTL module. The user's video processing application code is inserted and modified here.
In addition to these blocks, two constants are used to configure the HDMI port as a sink [3].
To ease the user interaction and to include some complementary options, the sliders (slide switches), buttons and LEDs available on the ZYBO board are routed to the application block. This gives more functionality and more choices when programming the final video processing application, for example letting a slider select which effect is applied, as sketched below. The block diagram with these features added is shown in Figure 4:
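As an illustration only (a minimal sketch, not taken from the published design), the following lines could go inside the architecture body of the VideoProcessing module shown later; processedPixel is a hypothetical 24-bit signal driven by whatever effect the user implements:

-- Sketch only: slider 0 selects between a (hypothetical) processed pixel and the raw input pixel
OUT_vid_data <= processedPixel when sliders(0) = '1' else vid_data;
leds(0)      <= sliders(0);  -- show the selected mode on LED 0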
RGB video signals: a short background
There are many protocols and coding techniques for video signals. This project works with raw video data because it is easy to handle and to understand.
Therefore the HDMI (High-Definition Multimedia Interface) input is converted into a 24-bit RGB video signal, known as "RGB24". In the same way, the modified RGB video output is converted into a VGA video signal.
RGB24 encoding can represent about 16.7 million (2^24) distinct colours. A synchronization scheme is necessary to display the long streams of pixels properly on the screen. The video converter blocks used in this design rely on so-called RGBHV synchronization (RGBHV stands for Red, Green, Blue, Horizontal, Vertical).
This method of transmitting video (RGBHV) adds one video timing signal for horizontal synchronization (pHSync) and another for vertical synchronization (pVSync). Both signals are independent of each other and of the colour channels, giving a total of five signals to transmit, as shown in Figure 1.
In addition, a video data enable signal (pVDE) indicates whether a pixel belongs to the active video area or to a blanking period, and a pixel clock is recovered from the incoming TMDS clock channel.
Video Processing user application with VHDL
An RTL module was created and instantiated in the main block design. Inside this RTL module the user can add and edit HDL code, either in Verilog or in VHDL, by changing the module properties.
To quickly edit the content of the RTL module, select the RTL block in the design and press F7, or open it from the Sources window.
The simplest application is to pass the video signal straight through the FPGA without modifying it. This code is used to test the connections as well as the camera output signal, and can be considered the "hello world" application of the project. The code is shown in Code 1.
Code 1: Pass-through VHDL application
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity VideoProcessing is
    Port ( vid_data     : in  STD_LOGIC_VECTOR (23 downto 0);
           pHSync       : in  STD_LOGIC;
           pVSync       : in  STD_LOGIC;
           pVDE         : in  STD_LOGIC;
           clk_pix      : in  STD_LOGIC;
           sliders      : in  STD_LOGIC_VECTOR (3 downto 0);
           buttons      : in  STD_LOGIC_VECTOR (3 downto 0);
           OUT_vid_data : out STD_LOGIC_VECTOR (23 downto 0);
           OUT_pHSync   : out STD_LOGIC;
           OUT_pVSync   : out STD_LOGIC;
           OUT_pVDE     : out STD_LOGIC;
           OUT_clk_pix  : out STD_LOGIC;
           leds         : out STD_LOGIC_VECTOR (3 downto 0));
end VideoProcessing;

architecture Behavioral of VideoProcessing is
begin
    -- video signals
    OUT_vid_data(23 downto 16) <= vid_data(23 downto 16); -- red channel
    OUT_vid_data(15 downto 8)  <= vid_data(15 downto 8);  -- green channel
    OUT_vid_data(7 downto 0)   <= vid_data(7 downto 0);   -- blue channel
    -- synchronization signals simply passed through
    OUT_pHSync  <= pHSync;
    OUT_pVSync  <= pVSync;
    OUT_pVDE    <= pVDE;
    OUT_clk_pix <= clk_pix;         -- forward the recovered pixel clock to the output stage
    leds        <= (others => '0'); -- LEDs are not used in this pass-through example
end Behavioral;
Black and white or Grayscale video effect
This block creates a grayscale video output from a colour RGB video input. For this purpose, the red, green and blue channels are combined into a single value that is then driven on all three output channels. There are several possible weightings; in this project the NTSC/Rec. 601 luma weights were used, as they belong to one of the most common standard-definition digital formats. The weighting compensates for the fact that the human eye is more sensitive to green than to red or blue [4].
Therefore, the equation is:
Y = 0.299·Red + 0.587·Green + 0.114·Blue
Weighted grayscale contribution of the colours (scaled to 8 bits):
Red: 29.9 % → 0x4C = 76
Green: 58.7 % → 0x97 = 151
Blue: 11.4 % → 0x1C = 28
Sum = 76 + 151 + 28 = 255
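As a quick check of these integer weights, consider a pure white input pixel, where red = green = blue = 255:

76·255 + 151·255 + 28·255 = 255·255 = 65025 = 0xFE01

The upper byte of this 16-bit result is 0xFE = 254, so white maps to an almost full-scale gray value; this upper byte is exactly what the code below drives on all three output channels.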
Extract of the grayscale video conversion algorithm
VideoEdition:
process(clk_pix)
begin
    if rising_edge(clk_pix) then
        if sliders(0) = '1' then
            grayPixel <= (x"4C" * redSignal + x"97" * greenSignal + x"1C" * blueSignal); -- weighted colour
        end if;
    end if;
end process VideoEdition;

-- output video signals
OUT_vid_data(23 downto 16) <= grayPixel(15 downto 8); -- red channel
OUT_vid_data(15 downto 8)  <= grayPixel(15 downto 8); -- green channel
OUT_vid_data(7 downto 0)   <= grayPixel(15 downto 8); -- blue channel
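The extract above omits the library clauses and signal declarations. A possible self-contained version of the module is sketched below. It assumes the same port list as Code 1 and the signal names of the extract; the use of the non-standard but widely supported IEEE.STD_LOGIC_UNSIGNED package for the "*" and "+" operators, and the way the unused outputs are tied off, are my own choices and not necessarily those of the original project:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL; -- provides "*" and "+" on std_logic_vector

entity VideoProcessing is
    Port ( vid_data     : in  STD_LOGIC_VECTOR (23 downto 0);
           pHSync       : in  STD_LOGIC;
           pVSync       : in  STD_LOGIC;
           pVDE         : in  STD_LOGIC;
           clk_pix      : in  STD_LOGIC;
           sliders      : in  STD_LOGIC_VECTOR (3 downto 0);
           buttons      : in  STD_LOGIC_VECTOR (3 downto 0); -- not used in this example
           OUT_vid_data : out STD_LOGIC_VECTOR (23 downto 0);
           OUT_pHSync   : out STD_LOGIC;
           OUT_pVSync   : out STD_LOGIC;
           OUT_pVDE     : out STD_LOGIC;
           OUT_clk_pix  : out STD_LOGIC;
           leds         : out STD_LOGIC_VECTOR (3 downto 0));
end VideoProcessing;

architecture Behavioral of VideoProcessing is
    -- colour channels extracted from the 24-bit input pixel
    signal redSignal   : STD_LOGIC_VECTOR (7 downto 0);
    signal greenSignal : STD_LOGIC_VECTOR (7 downto 0);
    signal blueSignal  : STD_LOGIC_VECTOR (7 downto 0);
    -- 16-bit result of the 8x8-bit weighted sum (upper byte = grayscale value)
    signal grayPixel   : STD_LOGIC_VECTOR (15 downto 0) := (others => '0');
begin
    redSignal   <= vid_data(23 downto 16);
    greenSignal <= vid_data(15 downto 8);
    blueSignal  <= vid_data(7 downto 0);

    VideoEdition:
    process(clk_pix)
    begin
        if rising_edge(clk_pix) then
            if sliders(0) = '1' then
                -- weighted colour sum; while the slider is off the last value is held
                grayPixel <= x"4C" * redSignal + x"97" * greenSignal + x"1C" * blueSignal;
            end if;
        end if;
    end process VideoEdition;

    -- the same grayscale byte drives the red, green and blue output channels
    OUT_vid_data(23 downto 16) <= grayPixel(15 downto 8);
    OUT_vid_data(15 downto 8)  <= grayPixel(15 downto 8);
    OUT_vid_data(7 downto 0)   <= grayPixel(15 downto 8);

    -- synchronization signals and pixel clock are passed through unchanged
    OUT_pHSync  <= pHSync;
    OUT_pVSync  <= pVSync;
    OUT_pVDE    <= pVDE;
    OUT_clk_pix <= clk_pix;

    leds <= sliders; -- assumption: mirror the sliders on the LEDs as simple feedback
end Behavioral;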
The output of this code can be seen in Figure 6, where a camera in the lower right corner is pointing at the blue bottle.
https://www.youtube.com/watch?v=s6YoaqllQ4U
Pin configuration
The pin assignment is made after the synthesis has run correctly. This can be done with the Vivado I/O planning tool (see Figure 7), but the task becomes tedious when the number of pins to configure is large.
In this case it is more practical to do it with Tcl instructions placed in the constraints file, since at least 15 pins have to be configured for the HDMI port and 18 for the VGA port.
The pin configuration was derived from the ZYBO schematic [5]. The I/O port definition of this project is stored in the file pin_descriptions.xdc.
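For reference, the constraints follow the usual XDC syntax. The lines below are only an illustrative sketch: the PACKAGE_PIN values are placeholders that must be looked up in the ZYBO schematic [5] or the official Digilent master XDC, and the port names must match the block design wrapper.

# Sketch only: example pin constraints in the usual XDC style.
# The PACKAGE_PIN values are placeholders - verify them against the Zybo schematic [5]
# or the official Digilent master XDC before use.
set_property -dict { PACKAGE_PIN R18 IOSTANDARD LVCMOS33 } [get_ports { buttons[0] }]
set_property -dict { PACKAGE_PIN M14 IOSTANDARD LVCMOS33 } [get_ports { leds[0] }]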
Alternatives and other libraries
This project is an example of using an HDL for video processing, but for further development and high-end solutions it may be worth considering tools and libraries that are already on the market. Two interesting options are:
- The Xilinx video and image processing IP called "Video Processing Subsystem" [6]. This library, developed by Xilinx, provides useful IPs for Vivado that ease the creation of video processing designs.
- The MATLAB Vision HDL Toolbox [7]. This MATLAB tool generates HDL code following a Model-Based Design workflow.
References
[1] Digilent, "DVI-to-RGB (Sink) 1.6 IP Core User Guide," 2016.
[2] Digilent, "Digilent ZYBO Video IP Repository," Mar. 2016.
[3] Digilent, "ZYBO Reference Manual," 2016.
[4] A. Ford and A. Roberts, "Colour Space Conversions," 1998.
[5] Digilent, "Zybo Schematic."
[6] Xilinx, "Video Processing Subsystem."
[7] MathWorks, "Vision HDL Toolbox."
Hi Alberto, I got a VHDL error when I try to do the code for grayscale:

signal red, green, blue : std_logic_vector(7 downto 0);
signal grayscale : std_logic_vector(23 downto 0);
begin
red   <= vid_data(23 downto 16);
green <= vid_data(15 downto 8);
blue  <= vid_data(7 downto 0);
process(clk_pix)
begin
    if rising_edge(clk_pix) then
        if sliders(0) = '1' then
            grayscale <= ((X"4C" * red) + (X"97" * green) + (X"1C" * blue)); -- weighted color
        end if;
    end if;
end process;
-- video signals
OUT_vid_data(23 downto 16) <= grayscale(15 downto 8); -- red channel
OUT_vid_data(15 downto 8)  <= grayscale(15 downto 8); -- green channel
OUT_vid_data(7 downto 0)   <= grayscale(15 downto 8); -- blue channel

Vivado shows errors for the line "grayscale <= ((X"4C" * red) + (X"97" * green) + (X"1C" * blue));":
find '0' definition for operator "*", cannot determine exact overloaded matching definition for "*";
find '0' definition for operator "+", cannot determine exact overloaded matching definition for "+".
Could you help me to see what is wrong with the VHDL code?
Hello, it seems that the arithmetic library is missing.
Can you try with:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;
at the beginning? The "*" and "+" operators on std_logic_vector come from STD_LOGIC_UNSIGNED (alternatively, NUMERIC_STD with explicit unsigned casts can be used).
Regards. Alberto
Hi Ben Ma,
Have you solved this problem?
Hello, I am a researcher (South Korea) and a beginner with video processing on FPGAs. Your Mis Circuitos gives me a lot of good information. I tested your "Video Processing on the FPGA of a Zybo using VHDL" with a GoPro, but I continuously fail to test "HDMI (input) – VGA (output)" in Vivado 2016.4. I think the cause is that the default EDID only supports 3 resolutions (https://forum.digilentinc.com/topic/991-zybo-hdmi-sink/). I tested again after modifying the file dgl_dvi_edid.txt in the DVI-to-RGB (Sink) 1.6 IP, but failed like in this link: https://forum.digilentinc.com/topic/3483-issues-with-dvi2rgbedid-files-on-zybo/ How can I set the EDID resolution in the DVI-to-RGB (Sink) 1.6 IP? If you could share your "Video Processing on the FPGA of a Zybo using VHDL" code (including the XDC, the video library IPs and the project file), it would be the best way to resolve this problem. I have been trying to test this for 2 months. Please help me~
Hello ShiHyun,
the versions and updates of the IPs can make the code obsolete and in need of some updates. I haven't tried this project for a year.
I will try to find if I have my original project and share with you.
I wish you the best with your projects!
best Regards. Alberto
Thanks for your comment. No, I developed it by myself…
Regards
Hello Mr.Alberto,
Is it possible to develop this project using Verilog code instead of VHDL code? If it is, will you provide me the code for this?
hello Mamatha,
Yes, of course it is possible, but I didn't do it, so I don't have the Verilog code. It should be easy to "translate", though.
Regards
Does this really work without using the AXI-Stream protocol? I followed it step by step but it didn't show any kind of output.
Hello Rojalin,
It was working fine with the code and steps I posted. It could be that with a new Vivado version or new updates you need to modify something, but the essence is the same.
Best Regards!
Hello Rojalin, it worked for me and I copied the code step by step. It may depend on many factors, the AXI version, etc.
Very helpful post! I just got this working on a Zybo Z7-20 with HDMI out. I started with the Zybo_EV_Platform project from the MicroZedChronicles Hackster repo: https://github.com/ATaylorCEngFIET/Hackster
…upgraded to Vivado 2018.3, downloaded and referenced the Digilent IP repo, and then plugged in the image processing module as described in this post (though implemented in Verilog)
Are there any good examples/places to look for other video processing/filtering algorithms that you can recommend? Thanks!
Thanks Paul for your feedback and glad that it is working for you! 🙂
I don't know of any examples right now or where to find them, sorry!
Best Regards
Respected Sir,
I want the code and related material for the demo shown by you with the title "Video Processing on the FPGA of a Zybo using VHDL". It would be a great help if you could provide me the code for the demo.
Thanking you.
Hello,
All the materials and code are online in this post. Due to new versions of Vivado or the Zybo drivers, they may need to be changed or modified to adapt to your needs.
Hope the best.
Alberto
Dear Alberto!
I am Nguyen from Hanoi , Vietnam.
I am a beginner in video processing and also in FPGAs, but I am interested in your homepage and everything you are sharing.
Now I have a Pynq-Z1 board, which can use Python.
Could you please make some tutorials on video processing for this Pynq-Z1 board?
Thank you.
Hello Nguyen,
I don't have a Pynq-Z1 board and I am now working on other projects… so I don't have the time to write and go deeper into video processing. I hope this post is enough to get you started, and then you can enlarge your knowledge somewhere else 🙂
Thanks for contacting.
Best Regards
Alberto
This post is really good. Thanks for sharing such information about Zybo and Video processing 🙂
Hi, I am about to use the exact same board you showed for an EE course required for my major. I was wondering if there was a way to easily port code from the Cyclone V to make it work with the Zybo board. I am specifically interested in getting the MiSTer project running on it.
Hello
Really great work indeed. I am working on a project to implement a computer vision fire detection system on an FPGA using Vivado and VHDL.
I want to do background subtraction, RGB-to-gray conversion and edge/corner detection to detect the presence of fire in a video scene. I need support with the VHDL coding in Xilinx Vivado.
Best regards