Development of a remote robot monitoring system

With the rapid development of computer networks and the growing adoption of robots, real-time control and monitoring of remote robots has attracted increasing attention. The national 863 project "Research on Robot Teleoperation Using Remote Network Technology" is being carried out in this context.
1 Overall structure and module division
Monitoring a remote robot over the Internet means that the user issues control commands locally, and the remote robot's control system interprets and executes those commands, driving the robot to complete the corresponding actions. At the same time, a camera at the remote site captures images of the scene. Because the volume of video data is very large, it cannot be transmitted over the Internet in real time; the image information must therefore be compressed before being sent to the user's site over the network. The user's site decompresses the received data and restores the original images. The user can then issue new control commands to the remote robot according to the overall task plan, and the robot performs the corresponding actions, which are again relayed to the user's site over the network. In this way, every movement of the remote robot is presented to the user promptly, producing the so-called sense of presence.
In order to record both the control commands and the motion process, and to provide accurate data for later analysis and research, a database of control commands and motion history must be established. Based on this design, the overall structure of the system is shown in Figure 1.
The system uses a client/server architecture. For control commands, the client accepts robot control commands from the user, assembles them into the corresponding command-frame format, and sends them over the network; the server parses the received commands and drives the robot through its control system to execute them. For video images, the client first captures the video, compresses it, and sends it over the network; the server reassembles the image data received from the network, decompresses it, and restores the original video image.
As this working process shows, from the perspective of control-command processing the local site is the client and the remote site is the server, while from the perspective of video-image processing the remote site is the client and the local site is the server. In other words, the client/server roles are assigned not by physical location but by logical function. The reason is that in this system the processing and transmission of video images is relatively independent, rather than a simple response to or feedback on the control commands. In short, the client actively sends information, and the server passively receives information from the client and processes it accordingly.
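The article does not specify the command-frame layout, so the following is only a minimal sketch of the client/server command path: the frame format, the command id, and the six joint targets are illustrative assumptions, not the project's actual protocol.

```python
import struct

# Hypothetical command frame: 2-byte command id, 4-byte sequence number,
# then six joint targets as 32-bit floats, all in network byte order.
FRAME_FMT = "!HI6f"

def pack_command(cmd_id, seq, joints):
    """Client side: assemble a control command into the frame format."""
    return struct.pack(FRAME_FMT, cmd_id, seq, *joints)

def unpack_command(frame):
    """Server side: parse a received command frame before execution."""
    cmd_id, seq, *joints = struct.unpack(FRAME_FMT, frame)
    return cmd_id, seq, list(joints)

frame = pack_command(3, 17, [0.0, 0.5, -0.5, 1.0, 0.0, 0.25])
cmd_id, seq, joints = unpack_command(frame)
```

A fixed binary frame like this keeps the server-side interpreter simple: it reads a known number of bytes per command and dispatches on the command id.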
2 Basic process of video image compression transmission
The system first creates the capture window and then registers the callback function. Captured images are stored in contiguous memory and delivered to the programmer through the callback. Inside the callback, the frame is first compressed, the large data block is then split into packets, and the packets are sent to the Internet in sequence-number order. After receiving the packets, the local site reassembles the data blocks in sequence, decompresses them, and finally displays the video image in the given window. The video image compression and transmission process is shown in Figure 2.
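The packetize-and-reassemble step described above can be sketched as follows; the chunk size and the (serial number, chunk) tuple representation are assumptions for illustration, not the system's actual packet format.

```python
def packetize(data, chunk_size):
    """Sender side: split a compressed frame into numbered packets."""
    count = (len(data) + chunk_size - 1) // chunk_size  # ceiling division
    return [(i, data[i * chunk_size:(i + 1) * chunk_size])
            for i in range(count)]

def reassemble(packets):
    """Receiver side: order packets by serial number and rejoin them."""
    return b"".join(chunk for _, chunk in sorted(packets))

frame = bytes(range(10))
packets = packetize(frame, 4)          # 3 packets: 4 + 4 + 2 bytes
restored = reassemble(packets)
```

The serial numbers let the receiver restore the original byte order even if packets arrive out of sequence.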
We use a hybrid coding scheme; the basic flow of video image compression is shown in Figure 3. The encoder first judges whether the current frame is a key frame. If it is, a discrete cosine transform (DCT) is performed, the DCT coefficients are quantized, the quantized alternating-current (AC) coefficients are run-length encoded (RLE) along a Z-shaped (zig-zag) path, and finally Huffman coding is applied. If it is not a key frame, interframe compression is used.
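The zig-zag scan and run-length coding of the quantized AC coefficients can be sketched in isolation; this follows the JPEG-style convention of (zero-run, value) pairs with an end-of-block marker, which is an assumption about the details the article leaves unspecified.

```python
def zigzag_indices(n=8):
    """Index order of the Z-shaped (zig-zag) scan over an n x n block:
    diagonals of constant i + j, alternating traversal direction."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def rle_ac(block):
    """Run-length encode the AC coefficients of a quantized block as
    (zero_run, value) pairs; the DC coefficient block[0][0] is skipped
    and (0, 0) marks the end of the block."""
    coeffs = [block[i][j] for i, j in zigzag_indices(len(block))][1:]
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    out.append((0, 0))  # end-of-block marker
    return out
```

After this step, the (run, value) pairs would be fed to the Huffman coder; zig-zag ordering groups the high-frequency zeros produced by quantization into long runs, which is what makes the RLE effective.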
For interframe compression, we compare two different approaches.
The first method is pixel-based: the current frame is first differenced against the previous frame to obtain a sparse matrix. During differencing, a small-range matching method removes part of the noise, and an improved run-length coding then produces the final result. The current frame is saved in a designated memory area as the reference frame for the next frame.
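The differencing step can be sketched as below; a simple magnitude threshold stands in for the article's small-range matching noise removal, which is an assumption on our part, and the result is the sparse matrix handed to the run-length coder.

```python
def frame_difference(cur, ref, threshold=2):
    """Pixel-based interframe step: subtract the reference (previous)
    frame from the current frame; differences no larger than the
    threshold are treated as noise and zeroed, leaving a sparse
    matrix suitable for run-length coding."""
    return [[(c - r) if abs(c - r) > threshold else 0
             for c, r in zip(cur_row, ref_row)]
            for cur_row, ref_row in zip(cur, ref)]

reference = [[10, 10], [10, 10]]
current = [[11, 20], [10, 9]]
sparse = frame_difference(current, reference)
```

After coding, the current frame (not the difference) is stored as the reference for the next frame, so quantization errors do not accumulate on the sender side.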
The second method is macroblock-based motion compensation: motion vectors are computed first, and the result is then coded with run-length encoding (RLE) and Huffman coding. Because the robot's motion is mainly translation and rotation, with few slight local changes, motion compensation achieves a higher compression ratio and better image quality.
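The motion-vector computation can be sketched as an exhaustive block-matching search minimizing the sum of absolute differences (SAD); the block size, search range, and SAD criterion are standard choices assumed here, since the article does not state which search the system uses.

```python
def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between the block of cur at (bx, by)
    and the reference block displaced by (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(bs) for x in range(bs))

def motion_vector(cur, ref, bx, by, bs=8, search=4):
    """Full-search block matching: return the displacement (dx, dy)
    within +/- search pixels that minimizes the SAD (all candidate
    positions are assumed to lie inside the reference frame)."""
    candidates = [(dx, dy) for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)]
    return min(candidates,
               key=lambda v: sad(cur, ref, bx, by, v[0], v[1], bs))
```

For predominantly rigid motion such as a robot arm translating across a static background, most macroblocks match their displaced reference block closely, so the residual to be RLE/Huffman coded is small.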
Because the channel's data transmission rate is not fixed when communicating with the remote site over the Internet, the system uses channel-test feedback to change the quantization step size, thereby adjusting the bit rate of the video information to better adapt to changes in the channel's transmission rate.
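This rate-control loop can be sketched as below; the one-step proportional update and the step bounds are illustrative assumptions, the article only states that the quantization step is adjusted from channel feedback.

```python
def adjust_quant_step(step, measured_rate, target_rate,
                      min_step=1, max_step=64):
    """Channel-feedback rate control: widen the quantization step when
    the measured channel rate falls below the target (coarser
    quantization, fewer bits per frame), and narrow it when there is
    spare bandwidth (finer quantization, better image quality)."""
    if measured_rate < target_rate:
        step = min(max_step, step + 1)
    elif measured_rate > target_rate:
        step = max(min_step, step - 1)
    return step
```

Driving the quantizer rather than dropping frames lets the system trade image quality against bit rate smoothly as the channel fluctuates.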
3 Experimental data and performance analysis
We compared several different compression algorithms. To match the data transmission rate of the Internet, the data traffic was limited to 10 kb/s. The specific experimental data are shown in Table 1.
From these data, the curves of frame rate and compression ratio as functions of image format, when the IC Compressor is used for constant-bit-rate compression, are shown in Figure 4.
As the curves show, as the image becomes larger the compression ratio increases greatly while the frame rate decreases. This is because a larger image has a larger background, and when the background is almost unchanged the interframe compression ratio is considerable, so the total compression ratio improves significantly. At the same time, as the image grows the amount of data increases rapidly, so at a fixed data transmission rate the frame rate inevitably drops.
Figure 5 shows how the frame rate and compression ratio vary with quality when constant-quality compression is used.
As these curves show, as the image-quality requirement is raised, both the frame rate and the compression ratio decline. This is because, for a given compression algorithm, improving image quality requires retaining more compressed data, so the compression ratio naturally decreases; and at a fixed network transmission rate, the time taken to transmit one frame of image data increases, so the number of frames transmitted per unit time must fall.
The first hybrid coding method in our experiments clearly outperforms the method above: at a given image quality, both the frame rate and the compression ratio are markedly improved.
The second hybrid coding method achieves a higher compression ratio, better image quality, and essentially real-time video even at a lower data transmission rate.
The frame rate and compression ratio in the experimental data are calculated as follows. The start and end times of image acquisition are recorded and the number of frames is counted during playback; the frame count divided by the time difference gives the frame rate:
Frame rate = number of frames / (end time - start time)
The number of bytes the image information would require uncompressed, divided by the number of bytes actually transferred, gives the compression ratio:
Compression ratio = number of bytes that should be transmitted / number of bytes actually transferred
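The two formulas above translate directly into code; the function and parameter names here are ours, and times are assumed to be in seconds.

```python
def frame_rate(frame_count, start_time, end_time):
    """Frames counted during playback divided by the elapsed
    acquisition time (seconds), giving frames per second."""
    return frame_count / (end_time - start_time)

def compression_ratio(raw_bytes, sent_bytes):
    """Bytes the image information would require uncompressed
    divided by the bytes actually transferred."""
    return raw_bytes / sent_bytes
```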

