PROCEEDINGS OF
FEEE STUDENT RESEARCH CONFERENCE 2014
January 18, 2014, Ho Chi Minh City, Vietnam

Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
TABLE OF CONTENTS

Message from the FEEE Dean .............................................................. i
ORAL SESSION A
3-D Positioning using Stereo Camera ..................................................... 2
  Hai-Ho Dac, Quang-Anh Le
Indoor Mobile Navigation using ROS ...................................................... 8
  Thien-Minh Nguyen-Pham
Controlling an Inverted Pendulum by a Regulator LQR with Feedback Information from Camera 16
  Tan-Khoa Mai, Duy-Thanh Dang
Design the Optimal Robust PID Controller for a Ball and Beam System .................... 22
  Nguyen Quang Chanh, Do Cong Pham
Autopilot Multicopter using Embedded Image Processing System: Design and Implementation  28
  Gia-Bao Nguyen Vu, Dang-Khoa Phan
Design Swing-Up and Balance Controllers for a Pendubot ................................. 34
  Van-Khoa Le
ORAL SESSION B
Design Video Door Bell Systems ......................................................... 42
  Van-Thang Vuong
Application of Wireless Sensor Network and TCP Socket Server in Smart Home ............. 48
  Thanh-Tan Pham, Nhut-Huy O, Hoang-Phi Le-Nguyen
Display video on Led Matrix RGB 64x128 using kit De0-nano and BeagleBone Black ......... 52
  Thanh-Phong Do
I/O Minimizing by Multiplexing Touch Feedback on Capacitive Sensor ..................... 58
  Tuan-Vu Ho
Moving Object Detection in Traffic Scene ............................................... 62
  Thanh-Hue Nguyen-Thi
Object Surface Reconstruction .......................................................... 68
  Thanh-Hai Tran-Truong

ORAL SESSION C
Modelling and Designing PID Controller for BLDC Motor .................................. 74
  Quang-Vu Nong, Anh-Quan Nguyen
Research on Application of Single Wire Earth Return Distribution Systems in Vietnam .... 80
  Duc-Toan Nguyen, Huu-Thanh Nguyen
Power Quality Analysis for Distribution Systems in Ho Chi Minh City .................... 84
  Minh-Khanh Lam, Dinh-Truc Pham, Huu-Phuc Nguyen
An Approach Designing SCADA Developer with Kernel Structure and XML Technology on iOS .. 90
  Pham Hoang Hai Quan, Nguyen Van Phu, Le Hong Hai, Truong Dinh Chau
Design a Self-Tuning-Regulator for DC Motor's Velocity and Position Control ............ 96
  Tan-Khoa Nguyen
POSTER SESSION
A Minutiae-Based Matching Algorithm in Fingerprint Recognition System ................. 104
  Hai Bui-Thanh, Hong-Nhat Thai-Xuan
High-speed Moving Object Tracking System for Inverted Pendulum ........................ 108
  Hong-Hiep Nghiem
3-D Mouse using Inertial Measurement Unit Sensor ...................................... 112
  An Nguyen
Design and Implementation of Fuzzy-PID Controller for DC Motor Speed Control .......... 114
  Khanh-Cuong Mai-Manh, Thai-Cong Pham
WiFi Controlled Tracked-Car ........................................................... 118
  Huynh Trung Bac, Tran Le Duc, Truong Nguyen Minh Trung
Design and Implementation of Music-Glove .............................................. 122
  Quoc-Duong Giang-Hoang
Neural-Network Control for a Mobile-robot ............................................. 128
  Thanh-Hoan Nguyen
A Three-phase Grid-connected Photovoltaic System with Power Factor Regulation ......... 134
  Tien-Manh Nguyen, Minh-Huy Nguyen, Minh-Phuong Le
A Modified Flood Fill Algorithm for Multi-destination Maze Solving Problem ............ 140
  Dinh-Huan Nguyen, Hong-Hiep Nghiem
SMS Registration using Digi Connect WAN Via TCP Socket ................................ 144
  Pham Ngoc Hoa, Truong Thanh Hien
GENERAL PROGRAM
Date:  January 18th, 2014
Venue: Ho Chi Minh City University of Technology

TIME            ARRANGEMENT                              VENUE
12:00 – 13:00   Registration
13:00 – 13:10   Plenary session                          306B1
13:10 – 14:10   Keynote Speeches                         306B1
14:10 – 14:20   Coffee break                             B1 Ground Floor
14:20 – 15:00   Poster and Exhibition Sessions           B1 Ground Floor
15:00 – 17:00   Oral Session A                           308B1
                Oral Session B                           309B1
                Oral Session C                           104B1
17:00 – 17:30   Closing ceremony & Awards Announcement   104B1
18:00 – 21:00   Dinner                                   B1 Ground Floor
TECHNICAL PROGRAM

KEYNOTE SPEECHES
Time:  13:10 – 14:10, January 18th, 2014
Venue: Room 306B1

TIME            KEYNOTE     SPEAKER
13:10 – 13:30   Keynote 1   Dr. Nguyen Quang Nam
13:30 – 13:50   Keynote 2   Dr. Huynh Phu Minh Cuong
13:50 – 14:10   Keynote 3   Dr. Nguyen Vinh Hao

POSTER SESSION
Time:     14:20 – 15:00, January 18th, 2014
Venue:    B1 Ground Floor
Chair:    MEng. Ho Thanh Phuong
Co-chair: BEng. Nguyen Tan Sy

INDEX   PAPER TITLE                                                      AUTHOR
1       A Minutiae-Based Matching Algorithm in Fingerprint Recognition   Hai Bui Thanh, Hong Nhat Thai Xuan
        System (Paper ID: 24)
2       High-speed Moving Object Tracking System for Inverted Pendulum   Hiep Nghiem
        (Paper ID: 20)
3       3-D Mouse using Inertial Measurement Unit Sensor (Paper ID: 22)  An Nguyen
4       Design and Implementation of Fuzzy-PID Controller for DC Motor   Mai Khanh Cuong, Cong Pham
        Speed Control (Paper ID: 09)
5       WiFi Controlled Tracked-Car (Paper ID: 12)                       Duc Tran, Huynh Bac
6       Design and Implementation of Music-Glove (Paper ID: 04)          Giang Hoang Quoc Duong
7       Neural-Network Control for a Mobile-robot (Paper ID: 26)         Hoan Nguyen
8       A Three-Phase Grid-Connected Photovoltaic System with Power      Manh Nguyen, Huy Nguyen, Phuong Le
        Factor Regulation (Paper ID: 25)
9       A Modified Flood Fill Algorithm for Multi-destination Maze       Huan Dinh Nguyen, Hiep Nghiem
        Solving Problem (Paper ID: 28)
10      SMS Registration using Digi Connect WAN via TCP Socket           Pham Ngoc Hoa, Truong Thanh Hien
        (Paper ID: 27)
ORAL SESSION A
Time:     15:00 – 17:00, January 18th, 2014
Venue:    Room 308B1
Chair:    Dr. Nguyen Vinh Hao
Co-chair: Dr. Nguyen Trong Tai

TIME            PAPER TITLE                                                    AUTHOR
15:00 – 15:20   3-D Positioning using Stereo Camera (Paper ID: 21)             Hai Ho Dac
15:20 – 15:40   Indoor Mobile Navigation using ROS (Paper ID: 8)               Thien-Minh Nguyen-Pham
15:40 – 16:00   Controlling an Inverted Pendulum by a Regulator LQR with       Khoa Mai, Thanh Dang
                Feedback Information from Camera (Paper ID: 23)
16:00 – 16:20   Design the Optimal Robust PID Controller for a Ball and Beam   Do Cong Pham, Nguyen Quang Chanh
                System (Paper ID: 16)
16:20 – 16:40   Autopilot Multicopter using Embedded Image Processing          Khoa Phan, Gia-Bao Nguyen
                System: Design and Implementation (Paper ID: 24)
16:40 – 17:00   Design Swing-Up and Balance Controllers for a Pendubot         Khoa Le
                (Paper ID: 7)
ORAL SESSION B
Time:     15:00 – 17:00, January 18th, 2014
Venue:    Room 309B1
Chair:    Dr. Vo Que Son
Co-chair: Dr. Che Viet Nhat Anh

TIME            PAPER TITLE                                                    AUTHOR
15:00 – 15:20   Design Video Door Bell Systems (Paper ID: 18)                  Thang Vuong
15:20 – 15:40   Application of Wireless Sensor Network and TCP Socket Server   Tan Pham, Phi Le Nguyen, Huy O
                in Smart Home (Paper ID: 19)
15:40 – 16:00   Display video on Led Matrix RGB 64x128 using kit De0-nano      Phong Do
                and BeagleBone Black (Paper ID: 14)
16:00 – 16:20   I/O Minimizing by Multiplexing Touch Feedback on Capacitive    Ho Tuan Vu
                Sensor (Paper ID: 17)
16:20 – 16:40   Moving Object Detection in Traffic Scene (Paper ID: 10)        Hue Nguyen
16:40 – 17:00   Object Surface Reconstruction (Paper ID: 5)                    Tran Hai
ORAL SESSION C
Time:     15:00 – 17:00, January 18th, 2014
Venue:    Room 104B1
Chair:    Dr. Ho Pham Huy Anh
Co-chair: Dr. Pham Dinh Truc

TIME            PAPER TITLE                                                    AUTHOR
15:00 – 15:20   Modelling and Designing PID Controller for BLDC Motor          Vu Nong Quang, Quan Nguyen Anh
                (Paper ID: 6)
15:20 – 15:40   Research on Application of Single Wire Earth Return            Nguyen Huu Thanh, Nguyen Duc Toan
                Distribution Systems in Vietnam (Paper ID: 11)
15:40 – 16:00   Power Quality Analysis for Distribution Systems in             Minh-Khanh Lam, Dinh-Truc Pham,
                Ho Chi Minh City (Paper ID: 15)                                Huu-Phuc Nguyen
16:00 – 16:20   An Approach Designing SCADA Developer with Kernel Structure    Quan Pham, Phu Nguyen, Hai Le
                and XML Technology on iOS (Paper ID: 13)
16:20 – 16:40   Design a Self-Tuning-Regulator for DC Motor's Velocity and     Khoa Nguyen Tan
                Position Control (Paper ID: 3)
MESSAGE FROM THE FEEE DEAN

Dear participants,
It is our great pleasure and honor to warmly welcome you to the 2014 FEEE Student Research Conference (FEEE-SRC 2014), organized for the first time by the Pay-It-Forward Club, a research club for students in the Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology, and held on January 18, 2014.
The 2014 FEEE Student Research Conference provides an excellent forum in electrical and electronics engineering for sharing knowledge, encouraging creativity and scientific research among students, and creating an environment for them to exchange ideas, learn, share experiences and develop their talents across three tracks: Electronics and Telecommunications Engineering, Automation and Control, and Electrical Engineering. The conference also gives students an opportunity to improve their communication and English skills.
For the conference we have assembled 3 keynote speeches, high-quality technical sessions including 26 papers from junior and senior students, and an exhibition of 20 excellent research projects.
With your active participation, we are confident that the 2014 FEEE Student Research Conference will succeed as a major event for students in the areas of electrical and electronics engineering. On behalf of the conference organization committee, I would like to express our sincere thanks to the faculty members for their valuable support of the conference, as well as to the members of the organization committee and the Pay-It-Forward Club for their hard work in making the conference a success.
Dr.-Ing. Do Hong Tuan Dean, Faculty of Electrical and Electronics Engineering
ORAL SESSION A
3-D Positioning using Stereo Camera

Dac-Hai Ho
Department of Automatic Control
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]

Quang-Anh Le
Department of Automatic Control
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper presents a standalone robot vision system, a stereo camera device, used for studying and applying localization and navigation techniques in robotics and autonomous vehicles. By analyzing the visual images captured from the stereo camera, motion estimation is computed to record and redraw the system's trajectory.

Keywords — stereo image processing; robot vision system; motion estimation; simultaneous localization and mapping
I. INTRODUCTION
Trajectory estimation and object tracking have great potential in real-world applications, and many studies have experimented with achieving accurate trajectories. In comparison to other measuring techniques, such as encoders, GPS or laser scanners, visual technology is a newer approach, yet it has developed dramatically in recent years. Aware of the importance of this application in artificial intelligence and high-tech products, we decided to carry out research in computer vision.
II. HARDWARE MODELING
A. Overview
The practical experimental model basically consists of two camera modules, an FPGA board, an ARM board, and some external actuators and communication devices. The two cameras are connected to the FPGA board, which configures and interfaces with the cameras as well as with the ARM board. The ARM board acts as the main processor of the whole system: it gives commands to the FPGA board, communicates with the PC and processes all the data. Both boards can additionally be programmed to communicate with external devices. For a standalone system, the ARM board is programmed to interface with a higher-level administrator such as a PC, which can log into the ARM board to get data, give commands and control the whole stereo system. The power source is also a concern for a standalone system, so a long-life battery and a regulated power circuit are necessary.
In previous works, the stereo camera was connected to a computer such as a laptop, because it provides sufficient computing power. This setup makes the system bulky, although the computing power of a full computer remains hard to replace for a high-speed system. Some practical experiments were done to compare the system with other measuring techniques. In Vietnam, there have been only a few studies and applications related to stereo image processing and to applying visual technology to trajectory tracking and navigation. This paper presents a new approach to visual tracking. In order to build a standalone system that is easy to implement and to connect to other systems, an FPGA core and an ARM core were combined in this project. The FPGA core captures the stereo images, and the ARM processing unit handles all the algorithms and control. The FPGA excels at parallel processing, which suits capturing two images at the same time, but its low clock frequency limits high-speed sequential processing; the ARM excels at serial processing at a high clock frequency but incurs delays in parallel tasks. Therefore, to eliminate the weaknesses of each, we combined them into one system to take full advantage of both cores. Furthermore, the system can be extended in the future, as both the ARM board and the FPGA board are functional and powerful, with many external I/Os.
Fig. 1. Hardware modeling

For a simulation experiment, only two webcams and a laptop are needed. Algorithms can be programmed and developed directly on the laptop, which helps users see more clearly how the system works and eases debugging.

B. FPGA Configuration
This project used two TRDB-D5M cameras that come with the FPGA DE2-115 board. Camera parameters can easily be configured over the I2C protocol. A Bayer color filter was used to capture raw RGB images, and a small transformation then converts the RGB images to grayscale. The grayscale images are first stored in DDRAM; from there they can be stored on the SD card or transmitted directly to the ARM board or to a monitor through the VGA output. This process relies on a soft NIOS core created within the FPGA: the NIOS core handles storing images on the SD card and manages the image addresses in DDRAM, which gives the ARM core access to the DDRAM on the FPGA to fetch images. Secondly, the FPGA board is configured with a communication interface to the ARM board; this procedure is quite involved, as it depends on the hardware itself. All the hardware configurations and applications on the FPGA board were developed and programmed with the Quartus II and Nios II software.
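The RGB-to-grayscale transformation performed on the FPGA can be sketched as follows. This is a hedged illustration in NumPy rather than the board's fixed-point Verilog, and the BT.601 luma weights are an assumption, since the paper does not state which coefficients the hardware uses:

```python
import numpy as np

# A toy 2x2 RGB image (uint8), standing in for the demosaiced Bayer output.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Assumed BT.601 luma weights; real hardware would use fixed-point equivalents.
weights = np.array([0.299, 0.587, 0.114])

# Weighted sum over the color channel, rounded back to 8-bit gray.
gray = (rgb.astype(np.float64) @ weights).round().astype(np.uint8)
```

On hardware, the same weighted sum would typically be realized with integer multiplies and a shift, which is why the exact coefficients matter less than keeping the weights summing to one.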
III. VISUAL ODOMETRY
Visual odometry is a newly practical localization and navigation technique used in robotics and autonomous vehicles. By analyzing sequences of images captured from stereo cameras, we can extract information to localize the system. In modern robotics, this technology is commonly used to increase the intelligence of robots and other smart integrated systems.
Fig. 2. FPGA configuration for capturing stereo images

C. ARM Configuration
As the main processor of the system, the ARM board requires a somewhat more complicated configuration than the FPGA board. First of all, an embedded operating system is installed on the ARM board; Linux, free and open source with many versions available, is an ideal choice. Moreover, the Linux community is large, so it is easy to get support from people around the globe. Another consideration is that the hardware configuration of the ARM board usually goes with its appropriate Linux version, because the producer has already declared the hardware configurations (i.e., the drivers) in that version. Next, the original Linux installation needs to be updated and upgraded before going further. Installing the necessary packages is just as important, because the embedded OpenCV library that will be installed on the ARM board needs these packages to work properly. Even so, OpenCV is quite hard to compile and install on an embedded system, as it uses different compilers from Linux running on a PC. To get the external I/Os to work, users should know a little about device trees and overlays; these form a hardware configuration layer covering the board that makes the external hardware functions available. Finally, users can develop their own applications on the hardware. This work needs a lot of testing and experimenting to get the final application working correctly, and all the individual modules should also be checked carefully across all the device connections.
Fig. 3. ARM configuration
Fig. 4. Visual odometry algorithm overview [1]

A. Stereo Camera
A stereo camera is simply a device made of two cameras; each camera can be a webcam or a camera module. So what makes a stereo camera meet the specification requirements? There are many specification criteria, such as frame rate, resolution, baseline, communication protocol and type of output image. In general, the hardware specifications to consider are the baseline, the field of view (FOV), the image sensor specifications (sensor size) and the focal length. Other aspects of concern are the output image resolution, interface protocol, frame rate, supported compression and decompression types, and hardware compatibility.

Camera selection is very important, as it directly affects the quality of the system. As for recommendations, higher resolution and higher frame rate are definitely better. Moreover, hardware compatibility and connection type are issues to take care of, because they relate to the inputs of the boards being used. The baseline is also a curious specification, as there is no fixed rule for it; it depends on how far the view should reach, and the farther the view, the wider the baseline.

B. Image Rectification
A qualified camera gives quality output images. Input images captured from the camera are called raw images, and they cannot be put into processing immediately; they need to be modified over a few steps before a reliable, usable pair of stereo images is obtained.
Fig. 5. Key points projected on image planes [2]
Hence the need for pre-processing and post-processing procedures. In the pre-processing procedure, the stereo camera needs to be calibrated to obtain its hardware specifications. This procedure is very important, as it returns valuable parameters for precise calculation in the upcoming steps. A chessboard was used as the calibration tool: the chessboard corners are the key points examined by the calibration procedure, and the more pictures of the chessboard taken, the more precise the parameter values returned by the calculation. OpenCV provides the stereoCalibrate function [3], which returns the camera parameters, the distortion coefficients, and the rotation matrix and translation vector relating the 1st and 2nd camera coordinate systems. All the computed parameters are then used to correct the distortion and rectify the images, yielding the final reliable stereo image pairs.
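The distortion adjustment driven by the calibration output can be illustrated with the usual two-term radial model. The coefficient values below are invented for illustration only and are not the paper's calibration results:

```python
def apply_radial_distortion(x, y, k1, k2):
    """Forward radial distortion model on a normalized image point (x, y):
    x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Hypothetical distortion coefficients (illustrative, not calibrated values).
k1, k2 = -0.25, 0.05
xd, yd = apply_radial_distortion(0.4, 0.3, k1, k2)
```

Undistortion, as done inside the rectification step, inverts this mapping, typically by iterating the forward model or with a lookup table.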
Fig. 6. Stereo image rectification process [4]

C. Feature Detection
For the feature detection process, there are many feature detectors for finding edges, corners, blobs and so on. In this paper, corner detectors were needed, because the project relies on the movements of these key points across sequenced images to estimate the distances to objects in the environment and to redraw the system's trajectory. OpenCV provides several useful feature detection functions, and a simple comparison was made to check the quality of each corner detector it supports [5].

Fig. 7. Corner detectors compared by average detection time [5]

Fig. 8. Corner detectors compared by number of features found [5]

In this work, the SURF and FAST feature detectors were examined to identify which function is better and faster:

                   SURF         FAST
Feature points     400 – 1000   5000 – 10000
Processing time    ~70 ms       ~5 ms

These are very important factors for a real-time standalone application on embedded devices; their effect on the whole processing time is shown in the Experimental Setup and Results section.

D. Disparity and Triangulating 3-D Points
Having obtained all the key points on the left and right images, they are matched correspondingly to calculate their disparity. For the matching procedure, there are two options: the BruteForceMatcher function and the FLANNBasedMatcher function [6]. The two functions produce results of the same quality, but owing to their different computing methods their processing times differ vastly. BruteForceMatcher is a simple, basic matching function that compares the descriptor of each key point in the 1st image against every key point descriptor in the 2nd image to find the approximately closest one; this pair-by-pair comparison is seriously time-consuming. On the other hand, FLANNBasedMatcher performs a quick and efficient matching using the Fast Library for Approximate Nearest Neighbors. This function is optimized to work with huge, multi-dimensional databases, which makes it the ideal matcher for this purpose.
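The exhaustive strategy of BruteForceMatcher can be sketched in a few lines of NumPy: each descriptor in the 1st image is compared against every descriptor in the 2nd image, which is exactly the O(N×M) cost that FLANNBasedMatcher sidesteps with an approximate nearest-neighbor index. The toy descriptors are made up for illustration:

```python
import numpy as np

def brute_force_match(desc1, desc2):
    """For each row (descriptor) of desc1, return the index of the closest
    row of desc2 by Euclidean distance -- an O(N*M) exhaustive search."""
    # Pairwise squared distances, shape (N, M).
    d2 = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

# Toy 3-D "descriptors"; real SURF descriptors would be 64- or 128-dimensional.
desc1 = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
desc2 = np.array([[0.9, 0.1, 0.0], [0.0, 0.1, 0.9], [5.0, 5.0, 5.0]])
matches = brute_force_match(desc1, desc2)
```

In practice, cross-checking and a ratio test on the two nearest neighbors are commonly added to reject ambiguous matches.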
Fig. 9. Triangulating 3-D points [4]
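The similar-triangles geometry of Fig. 9 can be checked with a small worked example. The baseline, focal length and pixel coordinates below are made-up numbers, and coordinates are taken relative to the image center:

```python
def triangulate(xl, yl, xr, B, f):
    """Recover the 3-D point (meters) from matched stereo pixel coordinates.

    xl, yl : point position in the left image (pixels, relative to center)
    xr     : x position of the match in the right image (pixels)
    B      : baseline between the cameras (meters)
    f      : focal length (pixels)
    """
    d = xl - xr       # disparity (pixels)
    Z = B * f / d     # depth, by similar triangles
    X = xl * Z / f    # lateral position
    Y = yl * Z / f    # vertical position
    return X, Y, Z

# Illustrative numbers: 12 cm baseline, 700 px focal length, 28 px disparity.
X, Y, Z = triangulate(xl=40.0, yl=20.0, xr=12.0, B=0.12, f=700.0)
```

A disparity of 28 px with these assumed parameters places the point 3 m away, and halving the disparity would double the depth, matching the inverse relationship described in this section.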
Each feature point has a different position on the left and right image planes, denoted x_l and x_r. Based on the difference between these two positions, the disparity value is calculated by the following equation:

    d = x_l − x_r    (1)

where x_l and x_r are the coordinate positions of the object projected on the image planes, in pixels; B is the baseline, in meters; f is the focal length, in pixels; and d is the disparity, in pixels. The depth then follows by similar triangles:

    Z = B · f / d    (2)

Since the focal length in this equation is in pixel units, a small transformation converts a focal length in meters into pixels:

    f [pixels] = f [m] × image width [pixels] / sensor width [m]    (3)

From here, the 3-D position of the key points can be calculated easily according to the similar triangles in Fig. 9, giving the final equations:

    X = x_l · Z / f,    Y = y_l · Z / f    (4)

where y_l is the vertical coordinate of the point in the left image. In short, the depth of a point in reality is inversely proportional to the disparity between corresponding points in the left and right images relative to the camera center; the smaller the disparity value, the farther the Z position of the point.

Fig. 10. Distance is inversely proportional to disparity [4]

E. Feature Tracking
In the feature tracking algorithm, the calcOpticalFlowPyrLK function in OpenCV [3] was used to track the 3-D feature points above. We assume that the scene stands still between the previous and present images during the very short transition time, so we can track how these 3-D points drift between two frames based on the optical flow. In computer vision, this method is commonly used to track and estimate the motion of points and objects, and it is less sensitive to lighting distortion than other methods, which is a plus for users adopting it in their projects.

Fig. 11. Estimated position of tracked features [2]

F. Motion Estimation using the RANSAC & SVD Algorithms
To calculate the movement of a point cloud, a minimum of three points and their positions before and after the move are required. We formalize the idea of movement as follows:

    x_i = R · p_i + t    (5)

where p_i is any point from the point cloud in the past frame, x_i is its corresponding point in the current frame, R is a rotation matrix, and t is a translation vector. To obtain the new location x_i of point p_i, the location is rotated by R and then translated by t.

Fig. 12. Sample movement estimation calculation [1]

Assuming an ideal point set, Besl's method [8] would solve the motion problem: pick any three points and calculate R and t. However, the data is highly error-prone (especially along the dimension of the camera's optical ray), so using this method alone would produce gross errors that would render the implementation unusable. Therefore, the RANSAC algorithm is introduced: a random-sample consensus algorithm able to eliminate gross outliers and perform least-squares estimation on the valid data points. RANSAC is immune to gross outliers (also known as poisoned data points). Applied to this particular problem, the RANSAC algorithm can be presented as the following sequence of steps [1]:
1. Pick three points and use the 3-point problem solution presented above to calculate the R matrix and t vector.
2. Apply R and t to the entire past point cloud. If the transformation were perfect, the two sets would now overlap completely. That will not be the case, so we identify the points that are within a distance e of their positions in the current point cloud, and call them the support set for this particular hypothesis.
   a. If the support set has t or more members, we call it the optimal set and move to step 3.
   b. If the support set has fewer than t members, we go back to step 1 and pick another hypothesis. We repeat this up to k times; if we cannot find a hypothesis with more than t members, we pick the hypothesis with the largest support set as the optimal one.
3. Once the optimal hypothesis is determined, we re-solve the model with the entire support set for that hypothesis. If there are more than 3 points in the support set, the system will be over-constrained, and we use a least-squares technique (described later) to come up with the polished R and t.
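The steps above, together with the SVD-based least-squares refinement, might be sketched as follows. This is a simplified illustration rather than the paper's implementation: the rigid fit uses Arun's SVD method, and the iteration count, inlier threshold and synthetic data are all assumptions:

```python
import numpy as np

def fit_rigid(P, X):
    """Least-squares R, t such that X ~ R @ P + t, via SVD (Arun's method)."""
    Pc, Xc = P.mean(axis=0), X.mean(axis=0)
    H = (P - Pc).T @ (X - Xc)        # 3x3 cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:         # guard against reflection solutions
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = Xc - R @ Pc
    return R, t

def ransac_motion(P, X, iters=100, eps=0.05, seed=0):
    """Hypothesize (R, t) from 3 random correspondences, keep the largest
    support set, then refit on that set -- mirroring steps 1-3 above."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = fit_rigid(P[idx], X[idx])
        err = np.linalg.norm((P @ R.T + t) - X, axis=1)
        support = err < eps
        if support.sum() > best.sum():
            best = support
    return fit_rigid(P[best], X[best]) + (best,)

# Synthetic check: rotate a cloud 90 degrees about Z, translate, poison one point.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
P = np.random.default_rng(1).normal(size=(20, 3))
X = P @ R_true.T + t_true
X[0] += 10.0                         # one gross outlier
R_est, t_est, inliers = ransac_motion(P, X)
```

The refit on the full support set is exactly the over-constrained least-squares step of the algorithm; the poisoned point is excluded from the support set and therefore does not corrupt the estimate.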
In the last step of the RANSAC algorithm, a least-squares solution using the singular-value decomposition (SVD) method is found for the over-constrained system of equations.

Fig. 13. SVD implementation [1]

IV. EXPERIMENTAL SETUP AND RESULTS

In the simulation model, two standard 5 MP webcams are connected to a laptop to develop the whole stereo image processing pipeline, from the raw images to the system localization.

Fig. 14. Simulation stereo camera model

The hardware model was made of two TRDB-D5M cameras connected to an ALTERA FPGA board DE2-115 through the GPIO-HSMC external card THDB-H2G. The FPGA is programmed in the Verilog language and configured to capture and save stereo images on DDRAM as well as on the SD card at the same time. The frame rate is 5 frames per second and the resolution 640x480 pixels. So far, the captured images were stored on the SD card to be transmitted to the ARM board later; at present, the system cannot process the stereo image stream online.

Fig. 15. FPGA DE2-115 and stereo camera TRDB-D5M

Data transmitted to the BeagleBone Black (BBB) board are used to calculate and redraw the trajectory of the system. Besides, a PC can connect to the BBB to control the whole process, give commands and read data. The BBB and DE2-115 boards have many external I/O functions, so they can be extended to control motors, LCDs, sensors, a web server and more. The process for implementing the application on the BBB can be described in the following steps.

Fig. 16. Application process development

Applications are programmed and developed with Visual Studio on the Windows operating system, and some code was developed in MATLAB. The applications are then modified to run on the Linux operating system and, after that, on the embedded Linux on the BBB. Alternatively, applications can be developed directly on Linux or embedded Linux.

The calibration procedure captured 15 pairs of chessboard stereo images from different angles, with the following result:

Fig. 17. Stereo camera calibration result

To test the algorithms on a computer, a sample stereo image database was used. The database belongs to the Bumblebee stereo camera from the Point Grey company. The trajectory was recorded in meters, as all the computed 3-D points were in meter units.
Fig. 18. Algorithm procedure

First of all, the feature key points found by the SURF detector were matched correspondingly by the FLANNBasedMatcher function and triangulated into 3-D positions. After obtaining the 3-D positions of the key points at time t, these key points were tracked in the separate image sequences of each camera, using the calcOpticalFlowPyrLK function, to determine how they drifted in the frame captured at time (t+1). At that point, the 3-D positions of the key points at time (t+1) can be triangulated. As a result, all the 3-D key points were found in both the previous and present stereo images, and these were the input parameters for running the RANSAC algorithm. The trajectory record can be displayed on Google Maps with an appropriate scale ratio.

Fig. 19. Trajectory record

V. CONCLUSION AND FUTURE WORKS

The final results recorded were not as good as expected. Currently, errors have been found in both the hardware and the software, and all the procedures are being checked from beginning to end as soon as possible. Moreover, an important issue that needs to be solved is the communication interface between the FPGA and ARM boards; hardware replacement will be made if necessary, in case the problem cannot be solved directly.

In the near future, the system will be improved by undertaking more practical experiments and optimizing the algorithms. Comparisons with other types of measurement, such as encoders, GPS and laser scanners, will be made to test the quality of the system, and combinations of these methods will be explored to improve the results as well. By taking advantage of the multi-functional and various external I/O pins, both the FPGA and the ARM have their own potential for users to implement or develop their own systems.

ACKNOWLEDGMENT

We thank the honor program funds of the Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology, for sponsoring and financially supporting students in scientific research in 2013. This work was first presented on December 19th, 2013 as a scientific project and was developed into the final thesis presented on January 7th, 2014. We would like to acknowledge Mr. Lê Văn Thạnh from the class of 2008 for providing the sample image database [7], and Dr. Nguyễn Vĩnh Hảo, our supervisor, for his enthusiastic guidance and support.

REFERENCES

[1] Yavor Georgiev, "E90 Project: Stereo Visual Odometry," 2006.
[2] Sergio A. Rodriguez F., Vincent Fremont, Philippe Bonnifait, "An Experiment of a 3D Real-Time Robust Visual Odometry for Intelligent Vehicles," Universite de Technologie de Compiegne, France, Oct. 2009.
[3] OpenCV documentation [online]. Available: http://opencv.org/documentation.html
[4] Gary Bradski and Adrian Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc., 2008.
[5] Ievgen Khvedchenia (January 4th, 2011). Comparison of the OpenCV's feature detection algorithms [online]. Available: http://computer-vision-talks.com/2011/01/comparison-of-the-opencvs-feature-detection-algorithms-2/
[6] Robert Laganière, OpenCV 2 Computer Vision Application Programming Cookbook.
[7] Van-Thanh Le, Tam-Trung Dang, Vinh-Hao Nguyen, "Robot pose estimation using stereo visual odometry," Ho Chi Minh City University of Technology, Vietnam, 2012.
[8] P. Besl and N. McKay, "A Method for Registration of 3-D Shapes," IEEE Trans. PAMI, vol. 14, no. 2, pp. 239-256, February 1992.
Computer Vision Application
Indoor Mobile Navigation Using ROS

Thien-Minh Nhat Nguyen-Pham
Department of Automatic Control
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]

Abstract — Developing the ability to navigate and travel through a dynamic environment for a mobile robot is a complex problem. This article presents the implementation of an intelligent indoor mobile robot using several sophisticated navigation algorithms and applications from the Robot Operating System framework. The robot is able to localize itself and update its information about the environment to make a path plan and follow it to reach an assigned goal. This motivates future research on similar navigation problems for outdoor environments, as well as the development of several indoor robot applications such as building guides, household assistants or warehouse deliverers.
The relation between the robot's motion and its position-orientation is also observed in a probabilistic manner. Thus the error will permeate the model but remain under control, preserving the basis for the robot to reconsider former locations at a later stage. This method is based on the Monte Carlo Localization (MCL) algorithm for robot localization and will be explained further in the next part. In general, we have implemented some interactive methods for mobile navigation in this paper:
Keywords — Navigation; SLAM; Monte Carlo methods; Laser beams; Path planning.
I. INTRODUCTION
Mobile navigation can be seen as a set of three problems. When a robot is given a goal to reach, it should first ask the question where am I? This problem is called self-localization. The next question is what is the world like?, or the mapping problem. Finally, when the robot knows where it is and what the world is like, it should ask the question how should I travel through this? This is the path-planning problem. Of the three, the problems of self-localization and mapping are often inseparable and relate to each other in a chicken-and-egg loop. When the robot knows where it is, it uses this knowledge to add newly detected features to the map, and then uses this map to determine how much displacement it has made since the last update in the next loop. These two problems are often grouped together as the Simultaneous Localization and Mapping problem, or SLAM. Previous implementations of an autonomous robot [1] and a path planner [2] achieved a level of navigation with the ability to avoid obstacles and reach the assigned goal. However, these approaches used a deterministic model for the localization and mapping process. In that approach, the mapping has to start from scratch, and the process only observes the boundaries of the space the robot can travel in, along with the assumption that the sensor's measurements are performed with almost absolute accuracy. Also, the robot's self-localization was based not on the map but on the confidence in the robot's ability to perform accurate motion, which is separated into translational and in-place rotational moves; thus the robot has low flexibility and the estimation error accumulates through time. In this project, we provided the robot with prior knowledge, or a static map, of the environment and an initial location inside this map.
We provided the robot with a map from an exploration SLAM process, or even a simple hand-drawn map. The map need not explicitly describe the robot's operating environment, only the static features serving as landmarks for the robot to localize itself. During operation the robot can update the map with new obstacles or free space from its observations. We applied a path planner to come up with an immediate plan to follow to reach the assigned goal. During the navigation process, the plan can adapt to newly acknowledged features in the map. We applied a method to control the robot in a velocity-oriented way rather than as a sequence of translations and rotations, so that the robot can move flexibly on a path optimized for its shape and kinematic constraints.

II.
ROS NAVIGATION STACK
A. Overview of ROS Navigation Stack

The ROS Navigation Stack is a ROS framework for solving the navigation problem for an indoor robot. The libraries and applications of ROS are distributed in packages. In the scope of an operating system, a task in ROS is implemented by a node, and the set of nodes in an application is called the computational graph. Communication between nodes follows the publish/subscribe mechanism. The core of the Navigation Stack is a node move_base that updates a static map with the data received from the sensor sources. It then combines this map with the kinematic constraints of the robot to calculate an appropriate path that avoids collisions with obstacles in the world. This node then outputs velocity commands via the cmd_vel topic to drive the mobile base along this path plan. The velocity command in the ROS Navigation Stack is called a twist. It differs from the simple rot-trans-rot procedure (rotate, then translate, and then rotate again). The base_controller node is created by the user to suit the mobile base of a particular robot platform. The ROS Navigation Stack supports both holonomic and non-holonomic robots. The user should consider the format of the cmd_vel message, which
contains the linear and angular velocities on each axis of the robot's coordinate frame, to develop a control loop for the driving actuators. An important setup diagram for the ROS Navigation Stack can be found at [3].
Figure 2. The motion of a differential-steered mobile base.
Let us consider a differential-steered mobile platform as shown in Fig. 2. At a time instance the ideal, noise-free robot

Figure 1. Intelligent and interactive approach for mobile navigation using the Navigation Stack.
The remaining problem when applying the Navigation Stack is localization. To implement the ROS Navigation Stack, one can simply count on the odometry of the robot. Many kinds of sensors can be fused together to determine the robot's location with great accuracy. However, as mentioned above, the problem with a deterministic model is that we can never have an exact model due to noise, and the error will also accumulate through time. The Navigation Stack provides an optional node amcl that implements the Adaptive Monte Carlo Localization (AMCL) algorithm, with adjustable parameters, for the model of a robot moving in a 2D environment with a 2D laser scan sensor as the observation source. MCL is a very popular method for a robot's global localization. In general, the MCL approach describes the robot motion in a probabilistic manner and takes into account the robot's motion, the sensor data and prior knowledge about the environment to find the belief about the current position-orientation of the robot. The adjective adaptive added to the MCL method used in the ROS Navigation Stack refers to an advanced resampling algorithm for the weighted particle filter. The theory needed to utilize these parameters is described further in the next part.

B. Implementation of the ROS Navigation Stack

This part describes the practical work of implementing the Navigation Stack on a mobile robot platform, as well as the underlying localization theory needed to utilize the libraries from ROS.

1.
Odometry
Odometry is the use of data from motion sensors to estimate change in position over time. These sensors can be wheel encoders and an IMU. To supply the Navigation Stack with an odometry source, we will assess our robot's motion model and use the estimated pose from this model as an input for the Monte Carlo Localization.
has a pose \(p = (x, y, \theta)^T\), where \(x, y\) are the coordinates of the robot in the world frame and \(\theta\) is called the heading angle. Also the robot has a pair of linear and angular velocities \((v, \omega)\). Suppose that we keep the velocities fixed during a time interval \(\Delta t\). The robot will have a circular movement around the instantaneous center \(ICC\) with radius \(R = v/\omega\). Equations (2.1) and (2.2) give the relation between the linear coordinates in the pose and the coordinates of \(ICC\):

\( x_{ICC} = x - R\sin\theta \)   (2.1)
\( y_{ICC} = y + R\cos\theta \)   (2.2)

Using the same relation of (2.1) and (2.2) for the instantaneous center \(ICC\) and the new pose \(p' = (x', y', \theta')^T\) at \(t + \Delta t\), we have:

\( x' = x_{ICC} + R\sin(\theta + \omega\Delta t) \)   (2.3)
\( y' = y_{ICC} - R\cos(\theta + \omega\Delta t) \)   (2.4)

Combining (2.1), (2.2), (2.3), (2.4) we have the relation between \(p\) and \(p'\):

\( x' = x - \frac{v}{\omega}\sin\theta + \frac{v}{\omega}\sin(\theta + \omega\Delta t), \quad y' = y + \frac{v}{\omega}\cos\theta - \frac{v}{\omega}\cos(\theta + \omega\Delta t), \quad \theta' = \theta + \omega\Delta t \)   (2.5)

Denote \(v_r, v_l\) to be the velocities of the right and left wheels respectively. Also, let \(L\) be the distance between the two wheels. From the circular movement around \(ICC\), we have the relations:

\( v_r = \omega\left(R + \frac{L}{2}\right) \)   (2.7)
\( v_l = \omega\left(R - \frac{L}{2}\right) \)   (2.8)
\( R = \frac{v}{\omega} \)   (2.9)

Manipulating the equations from (2.7) to (2.9) we have \(v = \frac{v_r + v_l}{2}\) and \(\omega = \frac{v_r - v_l}{L}\). Substituting \(v\) and \(\omega\) into (2.5), and then substituting the wheel displacements measured by the encoders, we have:
Fig. 3 is a map of an area of 20×12 m, with each pixel covering an area of 1 cm², and the robot could travel from one side to the other with great accuracy.
\( x' = x - R_\Delta\sin\theta + R_\Delta\sin(\theta + \Delta\theta), \quad y' = y + R_\Delta\cos\theta - R_\Delta\cos(\theta + \Delta\theta), \quad \theta' = \theta + \Delta\theta \)   (2.10)

with \( \Delta\theta = \frac{\Delta s_r - \Delta s_l}{L} \) and \( R_\Delta = \frac{L(\Delta s_r + \Delta s_l)}{2(\Delta s_r - \Delta s_l)} \),
where the wheel displacements are \(\Delta s_{r,l} = \pi D\,\Delta n_{r,l}/N\), \(\Delta n_r\) and \(\Delta n_l\) are the changes in the encoder counts after each time interval, \(N\) is the number of counts per revolution and \(D\) is the wheel's diameter. The above relation is indefinite when \(\Delta s_r = \Delta s_l\); this is the case when the robot moves in a straight path. We can overcome this by finding the limit of (2.10) when \(\Delta\theta \to 0\), which is:
\( x' = x + \Delta s\cos\theta, \quad y' = y + \Delta s\sin\theta, \quad \theta' = \theta, \quad \Delta s = \frac{\Delta s_r + \Delta s_l}{2} \)   (2.11)
Also, when the change in the heading \(\Delta\theta\) is not too large, this limit can be applied as an approximation for other values of \(\Delta\theta\).
Since the motion of the robot follows the twist manner rather than the rot-trans-rot one, it is useful to find \(v_r\) and \(v_l\) from \(v\) and \(\omega\) to feed to the control loop of each wheel. From (2.7) to (2.9), we have:
\( v_r = v + \frac{\omega L}{2} \)   (2.12)
\( v_l = v - \frac{\omega L}{2} \)   (2.13)
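Equations (2.10)-(2.13) translate into a short routine. The sketch below is illustrative only; the encoder resolution, wheel diameter and wheel separation used in the example call are hypothetical values, not the robot's actual parameters.

```python
import math

def update_pose(x, y, theta, dn_r, dn_l, N, D, L):
    """Odometry update, equations (2.10)/(2.11), from encoder count
    changes dn_r, dn_l. N: counts per revolution, D: wheel diameter,
    L: distance between the wheels."""
    ds_r = math.pi * D * dn_r / N      # right wheel displacement
    ds_l = math.pi * D * dn_l / N      # left wheel displacement
    ds = (ds_r + ds_l) / 2.0
    dtheta = (ds_r - ds_l) / L
    if abs(dtheta) < 1e-9:             # straight path: use the limit (2.11)
        return x + ds * math.cos(theta), y + ds * math.sin(theta), theta
    R = ds / dtheta                    # radius of the circular movement
    return (x - R * math.sin(theta) + R * math.sin(theta + dtheta),
            y + R * math.cos(theta) - R * math.cos(theta + dtheta),
            theta + dtheta)

def wheel_speeds(v, w, L):
    """Split a twist (v, w) into wheel set-points, equations (2.12)/(2.13)."""
    return v + w * L / 2.0, v - w * L / 2.0
```

The explicit branch for \(\Delta\theta \approx 0\) implements the limit (2.11) and avoids the division by zero in the turning radius.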
2. Building a map
To make a map, one has to rely on the self-localization process. In this paper, we applied the grid mapping application from the ROS community to build simple maps. We assumed that the level of accuracy of our odometry equations is sufficient, so that the self-localization process can use their output as the actual location of the robot. Fig. 2(a) is a map of an area of 4.6×3.5 m with only plain walls and doors. In this area, the accumulated error was low enough that the ROS mapping process gave a good map, with each pixel covering an area of 1 cm². In Fig. 2(b), each pixel covers a larger area of 5 cm² in a room that is 45 m² large. In this case, the scenery of the area contains many features, so an interesting combination of visual and encoder odometry, as introduced in [4], can be applied for a better result.

(a)
(b)
Figure 3. Map of a floor in a building drawn and given as prior knowledge to the robot.
3. Monte Carlo Localization
According to [6], Monte Carlo Localization is arguably one of the most popular methods for a robot's global localization. In the terminology of probabilistic robotics, this approach hypothesizes the robot motion in a probabilistic manner and takes into account the odometry value, the measurement and a pre-known map to describe the belief about the current pose by a particle filter. In ROS we can adjust the parameters of the amcl package by giving values to the node's parameters when launching it into the computational graph.
To understand the basics of the MCL algorithm, let us denote \(u_t = (\bar{x}_{t-1}, \bar{x}_t)\) to be a pair of odometry values, which are the estimated poses from the odometer in the last time interval. One can obtain \(\bar{x}_t\) using the method that we applied to come up with equation (2.10). Denote \(z_t = \{z_t^1, \dots, z_t^K\}\) to be a measurement consisting of \(K\) individual range values from a popular laser scan model. Denote \(m = \{m_1, \dots, m_N\}\) to be a map consisting of \(N\) points that describe the environment in a feature-based or location-based structure. In a feature-based map, the member \(m_i\) consists of the coordinates of an obstacle in the environment. In a location-based map, \(m_i\) contains a number expressing the certainty of a cell of the map being occupied by an obstacle. The belief of the robot can be modeled by a probability distribution function \(bel(x_t) = p(x_t \mid z_t, u_t, m)\), which conveys the possibility of a pose \(x_t\) when the odometer returns \(u_t\), the sensor gives us a set of measurements \(z_t\), and the prior knowledge was given as a map \(m\).
Figure 2. Scanned maps using odometry data and Kinect sensor and ROS grid mapping package.
Prior knowledge need not be acquired only from exploration. As long as it is accurate and sufficient, both methods help the robot localize itself accurately.
Also, we will denote \(\chi_t = \{x_t^{[1]}, \dots, x_t^{[M]}\}\) to be the set of M particles, where a particle \(x_t^{[m]}\) is a specific pose of the robot. The intuition behind the particle filter is that although one can construct \(bel(x_t)\) from basic componential distribution functions, such as Gaussian or uniform ones, and relations between them such as superposition or convolution, this combination is not a convenient way to describe the belief as a mathematical function. Instead, we can draw a significantly large set of values from the underlying distribution. This set of particles is convenient for calculating the change of \(bel(x_t)\) over time while still reflecting the characteristics of \(bel(x_t)\).
Another attractive feature of MCL is that it does not require a rigid model of the robot's kinematics describing the expected odometry output for a given control input. Instead, it cares only about the previous and current values of the output. Whatever uncertainty exists in the output, whether from modeling or from interference, the method inherits it all. Thus, the MCL method acknowledges a level of uncertainty in the output and reduces it by applying additional knowledge from the measurement model. Let us consider a very illustrative example from [7] in Fig. 4. Assume that we have a robot traveling along a corridor with three identical doors, and a sensor that can tell whether there is a door next to the robot or not. The map here is omitted due to the single dimensionality of the environment. We can express the possibility that the sensor acknowledges the presence of a door along the hall by the distribution \(p(z \mid x)\), shown as the red diagrams in Fig. 4.
One can now begin applying the Monte Carlo algorithm by generating particles uniformly all over the corridor, as in the diagram in Fig. 4(a). When the sensor on the robot signals the presence of a door, then based on the distribution \(p(z \mid x)\), the robot knows that the particles in the neighborhoods of the doors in its pre-provided map have a greater possibility of being the true pose of the robot. Upon acquiring this measurement, the Monte Carlo algorithm will assign a weight factor to
each particle. The height of each particle in the diagram of Fig. 4(b) then determines the rate at which this particle is repeated when we resample the particle filter. After resampling, the diversity of the particles is reduced, because some low-weighted particles have died out while the high-weighted particles have been replicated during the resampling process. However, when we take into account the odometry during the robot's movement, the diversity of the particle filter recovers, as each particle is propagated using the posterior motion distribution. We can see that the new particle filter after resampling in Fig. 4(c) has become denser in some regions corresponding to the doors' positions in the map, shifted to the right by an extent related to the robot's motion.
Now the same process is repeated, and we can see that most of the weight mass has concentrated around the second door. Also, other previously dense regions have been suppressed, and the high bumps in \(p(z \mid x)\) affect only a few particles in Fig. 4(d). The resampling process then cleans out many of the ambiguous particles and shifts the whole set by an amount corresponding to the robot's motion, as illustrated in the diagram in Fig. 4(e).
Understanding the basics of the MCL algorithm, one can utilize the amcl package supported by the ROS Navigation Stack. The algorithm in Fig. 5 below describes the primary steps of the MCL method, while details on the adaptive resampling step from line 8 to 11 can be found in [7].
Figure 4. Illustration for Monte Carlo Localization from [7].
1:  Algorithm MCL(χ_{t-1}, u_t, z_t, m):
2:      χ̄_t = χ_t = ∅
3:      for m = 1 to M do
4:          x_t^[m] = sample_motion_model(u_t, x_{t-1}^[m])
5:          w_t^[m] = measurement_model(z_t, x_t^[m], m)
6:          χ̄_t = χ̄_t + ⟨x_t^[m], w_t^[m]⟩
7:      endfor
8:      for m = 1 to M do
9:          draw i with probability ∝ w_t^[i]
10:         add x_t^[i] to χ_t
11:     endfor
12:     return χ_t
Figure 5. Basic steps of the MCL algorithm.
In the algorithm above, from a previous set of samples \(\chi_{t-1}\), line 4 applies the sample_motion_model algorithm to each particle in \(\chi_{t-1}\) using the odometry data \(u_t\). The measurement_model algorithm in line 5 assigns the weight \(w_t^{[m]}\) to each particle according to the measurement \(z_t\) and the map \(m\). After these two steps we obtain a weighted particle filter \(\bar{\chi}_t\); the resampling step then draws M new particles based on their weights and adds them to \(\chi_t\). It should be noted that the diversity of the particle filter is only recovered after line 4 in the next loop of this MCL algorithm.
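The loop above can be illustrated on the one-dimensional corridor example from the previous section. This is a toy sketch, not the amcl implementation; the door positions, noise levels and the shape of the sensor likelihood below are illustrative assumptions.

```python
import math
import random

random.seed(0)

DOORS = [2.0, 5.0, 8.0]     # illustrative door positions on a 10 m corridor
CORRIDOR = 10.0

def p_door_seen(x):
    """Likelihood p(z = door | x): high near a door, small elsewhere."""
    return 0.05 + sum(math.exp(-(x - d) ** 2 / (2 * 0.3 ** 2)) for d in DOORS)

def mcl_step(particles, u, door_seen):
    # Line 4: sample_motion_model -- move each particle by u plus noise
    moved = [(x + u + random.gauss(0.0, 0.05)) % CORRIDOR for x in particles]
    # Line 5: measurement_model -- weight each particle by the likelihood
    w = [p_door_seen(x) if door_seen else 1.0 for x in moved]
    # Lines 8-11: resample M particles with probability proportional to weight
    return random.choices(moved, weights=w, k=len(moved))

# Uniform initialization, then two sense-move cycles while a door is visible
particles = [random.uniform(0.0, CORRIDOR) for _ in range(2000)]
particles = mcl_step(particles, u=0.0, door_seen=True)
particles = mcl_step(particles, u=0.0, door_seen=True)
near_door = sum(min(abs(x - d) for d in DOORS) < 0.5
                for x in particles) / len(particles)
```

After two weighting and resampling rounds, most particles cluster around the doors, mirroring Fig. 4(b)-(c); the real amcl node additionally adapts the number of particles during resampling.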
The sample_motion_model used in the algorithm above depends on the inverse kinematics of each robotic platform. For a differential-steered drive robot, we can apply the following algorithm.
1:  Algorithm sample_motion_model_odometry(u_t = (x̄_{t-1}, x̄_t), x_{t-1}):
2:      δ_rot1 = atan2(ȳ_t − ȳ_{t-1}, x̄_t − x̄_{t-1}) − θ̄_{t-1}
3:      δ_trans = √((x̄_t − x̄_{t-1})² + (ȳ_t − ȳ_{t-1})²)
4:      δ_rot2 = θ̄_t − θ̄_{t-1} − δ_rot1
5:      δ̂_rot1 = δ_rot1 − sample(α1·δ_rot1² + α2·δ_trans²)
6:      δ̂_trans = δ_trans − sample(α3·δ_trans² + α4·δ_rot1² + α4·δ_rot2²)
7:      δ̂_rot2 = δ_rot2 − sample(α1·δ_rot2² + α2·δ_trans²)
8:      x' = x + δ̂_trans·cos(θ + δ̂_rot1)
9:      y' = y + δ̂_trans·sin(θ + δ̂_rot1)
10:     θ' = θ + δ̂_rot1 + δ̂_rot2
11:     return x_t = (x', y', θ')
The first source of noise has a Gaussian form around the true range, as shown in Fig. 9(a). Fig. 9(b) shows an exponentially decreasing probability of noise from unexpected objects in the sight of the robot. Fig. 9(c) shows the saturation noise of a laser ray at the maximum range, and finally Fig. 9(d) shows a uniform random noise all over the range of the random variable. The last diagram in Fig. 9(e) is the normalized combination of the four componential sources of noise.
Figure 6. Sampling algorithm for a probabilistic motion model using odometry.
In the above algorithm, the robot movement is considered a sequence of three moves: a rotation followed by a translation and finally another rotation, as illustrated in Fig. 7. Lines 2 to 4 of the algorithm sample_motion_model_odometry find the inverse kinematic solution for the three moves from the odometry values, and lines 5 to 7 then perturb them with random values. These values are assumed to have a basic distribution, such as a Gaussian or triangular distribution, with zero mean and with the variance as the input of a sample function like the ones below:

Figure 9. Probabilistic model of a laser scan sensor.
1: Algorithm sample_normal_distribution(b):
2:     return (1/2) · Σ_{i=1}^{12} rand(−b, b)

3: Algorithm sample_triangular_distribution(b):
4:     return (√6/2) · [rand(−b, b) + rand(−b, b)]

Figure 7. Sampling algorithms for normal and triangular distributions from a uniform distribution.
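The two samplers translate directly into Python; the scale b = 0.5 in the check below is an arbitrary test value.

```python
import random

def sample_normal(b):
    """Approximate draw from N(0, b^2): half the sum of 12 uniforms
    on [-b, b] (each uniform has variance b^2/3, so the sum has 4*b^2)."""
    return 0.5 * sum(random.uniform(-b, b) for _ in range(12))

def sample_triangular(b):
    """Draw from a zero-mean triangular distribution with standard
    deviation b."""
    return (6 ** 0.5 / 2.0) * (random.uniform(-b, b) + random.uniform(-b, b))

random.seed(1)
xs = [sample_normal(0.5) for _ in range(20000)]
mean = sum(xs) / len(xs)
std = (sum((v - mean) ** 2 for v in xs) / len(xs)) ** 0.5
```

The empirical standard deviation of the normal sampler comes out close to the requested b, which is the property the motion model relies on.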
Fig. 8 describes the cluster of 500 sampled values with different parameters for the errors on each move.
Figure 10. Weighting algorithm for particles using the probabilistic model of a range sensor.
Figure 8. Sampled locations from a posterior in a map using the probabilistic motion model.
\( p(z_t \mid x_t, m) = \prod_{k=1}^{K} p(z_t^k \mid x_t, m) \)
The measurement model in the MCL algorithm employs a model of the laser scan sensor. The probabilistic model of a laser scan is built upon the combination of the distribution forms of the four sources of noise. Assuming that the map is accurate, given a pose \(x_t\), we can ray trace to find the true range value \(z_t^{k*}\).
1: Algorithm beam_range_finder_model(z_t, x_t, m):
2:     q = 1
3:     for k = 1 to K do
4:         compute z_t^{k*} from x_t and m using ray tracing
5:         p = z_hit · p_hit(z_t^k | x_t, m) + z_short · p_short(z_t^k | x_t, m)
6:             + z_max · p_max(z_t^k | x_t, m) + z_rand · p_rand(z_t^k | x_t, m)
7:         q = q · p
8:     return q
The loop from line 3 to line 7 of the algorithm in Fig. 10 multiplies the possibilities of all the range values based on the approximation above. The parameters \(z_{hit}\), \(z_{short}\), \(z_{max}\), \(z_{rand}\) are the characteristics of a specific sensor and must satisfy the condition \(z_{hit} + z_{short} + z_{max} + z_{rand} = 1\). An algorithm to learn these parameters from a practical data set of measurements can be found in [8].
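A minimal sketch of the four-component beam model follows. The mixture weights and sensor parameters are hypothetical, and the component densities are left unnormalized for brevity, unlike the full treatment in [8].

```python
import math

# Hypothetical mixture weights and sensor parameters; the weights sum to 1
Z_HIT, Z_SHORT, Z_MAX, Z_RAND = 0.7, 0.1, 0.05, 0.15
SIGMA_HIT, LAMBDA_SHORT, RANGE_MAX = 0.1, 1.0, 8.0

def p_beam(z, z_star):
    """p(z | x, m) for one beam, given the ray-traced true range z_star."""
    # Gaussian measurement noise around the true range
    p_hit = math.exp(-0.5 * ((z - z_star) / SIGMA_HIT) ** 2) \
            / (SIGMA_HIT * math.sqrt(2.0 * math.pi))
    # Exponential noise from unexpected objects in front of the true range
    p_short = LAMBDA_SHORT * math.exp(-LAMBDA_SHORT * z) if z <= z_star else 0.0
    # Saturation at the maximum range
    p_max = 1.0 if abs(z - RANGE_MAX) < 1e-6 else 0.0
    # Uniform unexplained noise over the whole range
    p_rand = 1.0 / RANGE_MAX if 0.0 <= z < RANGE_MAX else 0.0
    return Z_HIT * p_hit + Z_SHORT * p_short + Z_MAX * p_max + Z_RAND * p_rand

def beam_range_finder_model(scan, true_ranges):
    """Multiply the per-beam possibilities (lines 3-7 of Fig. 10)."""
    q = 1.0
    for z, z_star in zip(scan, true_ranges):
        q *= p_beam(z, z_star)
    return q

# A scan that matches the ray-traced map scores far higher than one that does not
good = beam_range_finder_model([2.0, 3.0, 4.0], [2.0, 3.0, 4.0])
bad = beam_range_finder_model([2.0, 3.0, 4.0], [4.0, 1.0, 6.0])
```

This relative scoring is all the resampling step needs, since the particle weights are renormalized anyway.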
4. Sensor
One of the important features in the implementation of our mobile navigation system is the Kinect camera. Released in November 2010, the Kinect uses a breakthrough technique called structured-light imaging to measure the distance to objects. The device uses a 12 VDC, 1 A supply, has a vertical FOV of 43.5° with a ±27° tilt angle and a horizontal FOV of 57°, and can sense objects from 0.4 to 7 m in front of it. While costing around 100 USD, the Kinect requires less computational workload than a stereo camera and can obtain a measurement with an amount of information equal to hundreds of laser scans. The addition of the Kinect sensor helps detect obstacles near the ground that cannot be seen by a laser scan sensor. In this project, since the amcl package subscribes to a sensor_msgs/LaserScan as the input for the Monte Carlo Localization algorithm, and the ROS point cloud generated from the Kinect is structured, we can easily extract a 2D scan from the sensor_msgs/PointCloud and publish it over the computational graph.
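Because the cloud is structured, the planar-scan extraction above amounts to slicing one row and converting it to ranges and bearings. The sketch below is illustrative: the cloud is synthetic (a flat wall), and the frame convention (x forward, y left) is an assumption for the example.

```python
import numpy as np

def scan_from_cloud(cloud, row):
    """Extract a planar pseudo-laser-scan from one row of a structured
    H x W x 3 point cloud (assumed frame: x forward, y left)."""
    pts = cloud[row]                           # shape (W, 3)
    ranges = np.hypot(pts[:, 0], pts[:, 1])    # planar range per column
    angles = np.arctan2(pts[:, 1], pts[:, 0])  # bearing per column
    valid = np.isfinite(ranges)                # drop NaN returns
    return ranges[valid], angles[valid]

# Synthetic structured cloud: a flat wall 2 m ahead across a 57-degree FOV
W = 640
bearings = np.linspace(np.radians(-28.5), np.radians(28.5), W)
wall = np.stack([np.full(W, 2.0), 2.0 * np.tan(bearings), np.zeros(W)],
                axis=1)
cloud = np.tile(wall, (480, 1, 1))             # 480 identical rows
ranges, angles = scan_from_cloud(cloud, row=240)
```

For the wall the recovered ranges follow 2/cos(bearing), with the minimum range straight ahead, which is the geometry a LaserScan message expects.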
communication frame as an error check for the transmission line.
Figure 12. The block diagram of the control circuit of the mobile base; the orange shades indicate supply-power buses and the blue buses are data lines.
Figure 11. Laser scan extracted from the PointCloud data.
5. Mobile base
In our project the mobile base was designed to track the set values derived from equations (2.12) and (2.13). The base uses an STM32F4 controller to implement the control loop, with all of the variables being automatically collected from the peripherals. This leaves the maximum computational power of the processing core for the control loop of the two wheels. Several advanced-control timers of the STM32F407 were properly configured to automatically collect the displacement and velocity of each wheel without any interrupt or polling routine. The variables are stored directly in the data registers of the peripherals and are updated automatically. Communication between the MCU and ROS is also sped up considerably by the cooperation of the USART and DMA peripherals: a long frame of data transmitted at a baud rate of hundreds of thousands of bps is arranged into memory by the DMA upon a request from the USART. Also, the FreeRTOS real-time operating system was embedded in the MCU. The kernel of this firmware was utilized to manage the control tasks, and it also provided an accurate millisecond timer for the time stamps that were added to the PC-MCU
Figure 13. Embedded program with minimized workload for fetching variables using dedicated peripherals.
6. Path Planner

Rather than making step-by-step motions, the Navigation Stack controls the robot with a twist motion consisting of linear and angular velocities. How much the robot "twists" depends on the cost value of a cell on the map and the kinematic constraints of the mobile base. Intuitively, when given a robot footprint, the path planner calculates an inscribed radius from this footprint. When the robot senses an obstacle, it inflates this obstacle by a radius corresponding to the inscribed radius. After the inflation stage, the path planner scores all possible trajectories and chooses the path with the best score to drive the mobile base with the associated velocities. Fig. 14 shows the content of a yaml file with the kinematic constraints of the mobile base that we declared to the path planner. The units
of these quantities are m/s for linear velocity, rad/s for rotational velocity and m/s² for the accelerations.

TrajectoryPlannerROS:
  max_vel_x: 0.1884955592
  min_vel_x: 0.09424777960769379
  max_rotational_vel: 0.3926990817
  min_in_place_rotational_vel: 0.3534291735288517
  oscillation_reset_dist: 0.05
  acc_lim_th: 5.0
  acc_lim_x: 4.0
  acc_lim_y: 4.0
  holonomic_robot: false

Figure 14. The kinematic constraints of the navigation robot.
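The inscribed-radius and inflation ideas described above can be sketched in a few lines. The footprint, grid and resolution below are illustrative, and a real costmap uses a far more efficient distance transform than this brute-force version.

```python
import math

def inscribed_radius(footprint):
    """Distance from the footprint center (origin) to the nearest edge of a
    convex polygonal footprint -- the radius the planner inflates by."""
    r = float('inf')
    n = len(footprint)
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        # Distance from the origin to the segment's supporting line
        num = abs(x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1)
        r = min(r, num / den)
    return r

def inflate(grid, radius, resolution):
    """Mark every cell within `radius` meters of an obstacle as lethal."""
    cells = int(math.ceil(radius / resolution))
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(h):
        for j in range(w):
            if grid[i][j] == 1:
                for di in range(-cells, cells + 1):
                    for dj in range(-cells, cells + 1):
                        if (0 <= i + di < h and 0 <= j + dj < w and
                                math.hypot(di, dj) * resolution <= radius):
                            out[i + di][j + dj] = 1
    return out

# Hypothetical 0.4 x 0.3 m rectangular footprint centered at the origin
foot = [(0.2, 0.15), (-0.2, 0.15), (-0.2, -0.15), (0.2, -0.15)]
r = inscribed_radius(foot)   # nearest edges are the sides at y = +/-0.15
```

Trajectories are then scored against the inflated grid, so any twist that keeps the robot center outside the inflated cells is collision-free for the inscribed circle.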
III. FIELD EXPERIMENT
A. Setup for the prior knowledge

In this scenario we created a map for an area of 3.6×4.5 m². We then set the initial pose of the robot in this map. As a rule of thumb, it is best to make the laser scan from this pose fit the map well. The particles at the beginning spread over a large region, since the Monte Carlo algorithm was not yet operating. The orange cluster in Fig. 15 shows the obstacles projected on the ground, and the red lines are the laser cuts from the PointCloud at the same height as the Kinect's position on the robot.
Figure 16. The robot’s movement to the set goal and the convergence of the particle filter.
C. Path plan adaptation

In Fig. 17, when we reset the robot's goal, the robot initially reused the former map and path plan. However, this time we blocked the way, so that the robot had to come up with a new path.
Figure 15. The actual experiment area and the robot's initial pose in the constructed map of this area.
B. Navigation and update process on the local map and particle filter

Fig. 16 shows the navigation process of the robot. The new obstacles were added to the initial map. As the sensor performed measurements of the environment, the particle filter began to concentrate on the true pose of the robot. It should be noted that although the laser scan could not sense the object on the ground in Fig. 16(a), the PointCloud data from the Kinect still helped the robot detect this obstacle in Fig. 16(b).
Figure 17. The robot's adaptation to a new path.
D. Experiment with a man-made map

Fig. 18 shows the journey of the robot in the drawn map. In one experiment, the robot had no problem travelling from one location in this map to another. A plotting process was also applied to help us review the estimated path from the odometer.
from the path planner fully. This can be improved by a new design with the center of rotation overlapping the center of the robot's footprint.

ACKNOWLEDGMENT
Figure 18. The robot’s navigation in the drawn map
IV. CONCLUSION
The interactive approach within the framework of the Navigation Stack is an effective method for mobile robot navigation. The robot could rely on the prior knowledge given to it and update it with new obstacles during operation, while still being able to localize itself in the map. During operation, the path plan can adapt to new observations. The achievements of the method applied here can be the basis for many useful robot platforms, such as outdoor self-driving vehicles and indoor assistant robots. There have been some limitations in implementing the Navigation Stack in this paper. Firstly, the odometry came from a single source of encoders only; thus the map construction process was limited by the accumulated odometry error. Better odometry can be obtained by combining a high-quality IMU with the results from visual odometry, as announced in [4] or [5], with some effort to integrate the hardware with ROS. Also, due to the asymmetry of the mobile base, the robot did not achieve the flexibility
This paper was completed under the instruction and revisions of our lecturers of the Department of Automatic Control, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology (FEEE-HCMUT). The research in this paper was generously supported by the senior members of FEEE's Club for Scientific Research. We also would like to express our appreciation to Mr. Phuoc T. Lai for sharing his experience in applying ROS to our robot platform for mobile navigation.

[1]
REFERENCES

D. H. Nguyen, D. V. Nguyen, "Mobile Autonomous Robot using Kinect - MARK," undergraduate thesis, Jan. 2012.
[2] R. Mojtahedzadeh, "Robot obstacle avoidance using the Kinect," graduate thesis, 2011.
[3] A. M. E. Fernández, "Navigation Stack - Robot Setups." [Online]. Available: http://www.packtpub.com/article/navigation-stack-robotsetup.
[4] P. T. Lai, "Finding the robot's trajectory using Kinect and Robot Operating System platform," undergraduate thesis, June 2013.
[5] H. D. Ho, "3D Positioning using Stereo Camera," FEEE's SRC, Jan. 2013.
[6] S. Thrun, W. Burgard, D. Fox, Probabilistic Robotics, page 188.
[7] S. Thrun, W. Burgard, D. Fox, Probabilistic Robotics, page 202.
[8] S. Thrun, W. Burgard, D. Fox, Probabilistic Robotics, pages 264-265.
[9] S. Thrun, W. Burgard, D. Fox, Probabilistic Robotics, pages 124-12
Controlling an Inverted Pendulum by an LQR Regulator with Feedback Information from Camera

Duy-Thanh Dang
Department of Automatic Control
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]

Tan-Khoa Mai
Department of Automatic Control
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper presents the general concept of an LQR regulator and a basic algorithm, combined with a Kalman filter, to identify the states of an inverted pendulum system using a camera. Besides using an encoder to control this system, this paper proposes a different approach to the control of the inverted pendulum using information acquired from an ordinary camera. These two methods will provide a difference in control quality.

Keywords — Nonlinear control systems, Image Processing, Kalman filter

I. INTRODUCTION
Being an under-actuated mechanical system that is inherently open-loop unstable with highly non-linear dynamics, the inverted pendulum is a perfect test-bed for the design of a wide range of classical and contemporary control techniques. Its applications range widely from robotics to space rocket guidance systems. Originally, these systems were used to illustrate ideas in linear control theory, such as the control of linear unstable systems. Their inherently non-linear nature has helped them maintain their usefulness over the years, and they are now used to illustrate several ideas emerging in the field of modern non-linear control. The control task is to swing the pendulum up from its natural position and to stabilize it in the inverted position once it reaches the upright equilibrium point. The cart must also be homed to a reference position on the rail. There are many types of inverted pendulum systems; in this paper, we focus on a system that consists of a cart and a pendulum moving in a 2-D plane, which makes it much easier to process the image. Controlling an inverted pendulum has been a long-time concern; most research concentrates on controlling it with the help of an encoder, which is the easiest way to acquire information about the system. In this paper, we open a new way to control this system: using a camera. By applying image processing, one of the most developed trends worldwide recently, the flexibility in the control of a non-linear system is improved significantly. Moreover, the results acquired from image processing can be applied to other practical applications.
The paper is organized as follows: Section 2 presents a brief overview of the LQR, the Kalman algorithm and image processing; Section 3 deals with the mathematical dynamic model of the system and the construction of an LQR regulator; Section 4 goes through the main steps in the design of the control process; Section 5 shows the results, in simulation and in reality, of both using the encoder and using the camera, and gives a concise conclusion about the subject.

II.
UNDERLYING THEORY FOR THE CONTROL SYSTEM
This section presents the fundamentals of the LQR regulator, which maintains the system at the balance point thanks to the Lyapunov stability criteria. Combined with the swing-up algorithm, the system is able to reach the unstable balance point from the stable one and retain this state through time. Besides that, the Kalman filter algorithm is also analyzed and integrated into the image processing to determine the system's state as the input of the LQR regulator.

A. LQR regulator:
̇ ̇ ∫
Assume that a system is represented by state equation:
(1)
For a linear system the function can be described by the multiplication of inputs with the characteristics matrices (u ≠ 0):
We will find the matrix
(2)
of the optimal control vector:
(3)
The matrix K must make the criterion J reach its minimum value:

J = ∫₀^∞ (xᵀ Q x + uᵀ R u) dt    (4)

where Q is a symmetric positive-definite (or positive semi-definite) matrix and R is a symmetric positive-definite matrix. It can be proven that the control rule given in (3) is the optimal rule: if we can find a matrix K such that the optimization criterion J is minimized, the control rule is optimal regardless of the initial system state x(0).
Based on the Lyapunov criteria, the minimum of the criterion J is:

J_min = xᵀ(0) S x(0)    (5)

where S is the solution of the equation:

-Ṡ = Aᵀ S + S A - S B R⁻¹ Bᵀ S + Q    (6)

When S does not change over time (Ṡ = 0), the Algebraic Riccati Equation (ARE) can be obtained:

Aᵀ S + S A - S B R⁻¹ Bᵀ S + Q = 0    (7)
The solution of this equation can be acquired by using a computing tool such as MATLAB.
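The same computation can be sketched outside MATLAB as well; the snippet below solves the ARE of (7) with SciPy for a placeholder cart-pendulum-like model (the matrices and weights are illustrative assumptions, not the paper's identified values) and checks that the resulting closed loop is stable.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder cart-pendulum-like linear model (illustrative values only)
A = np.array([[0., 1., 0., 0.],
              [0., -0.1, 3., 0.],
              [0., 0., 0., 1.],
              [0., -0.5, 30., 0.]])
B = np.array([[0.], [2.], [0.], [5.]])
Q = np.diag([10., 1., 10., 1.])   # state weighting matrix
R = np.array([[1.]])              # control weighting matrix

# Solve A'S + SA - S B R^-1 B' S + Q = 0, then K = R^-1 B' S
S = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ S)

# A - BK must be Hurwitz: all closed-loop eigenvalues in the left half-plane
print(np.all(np.linalg.eigvals(A - B @ K).real < 0))
```

With the gain obtained this way, the control law u = -Kx stabilizes the linearized model by construction of the LQR solution.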
After that, the control rule for the system's input can be found as:

u(t) = -R⁻¹ Bᵀ S x(t)    (8)

B. Object Tracking Algorithm:

The object tracking algorithm helps determine the probability of the current system state based on the probability of the previous state, the current measurement and the current control.

Gaussian Kalman Filter: Gaussian filters constitute the earliest tractable implementations of the Bayes filter for continuous spaces. A Gaussian probability is represented by the multivariate normal distribution equation:

p(x) = det(2πΣ)^(-1/2) exp(-(1/2)(x - µ)ᵀ Σ⁻¹ (x - µ))    (9)

The density of x is characterized by two parameters: the mean µ and the covariance Σ. The Kalman filter algorithm uses the measurement and a physical model with Gaussian noise to give an approximation of the system's state with higher accuracy than using only the measurement signals. The pseudo-code of the Kalman filter [7] is shown below:

1: KalmanFilter(µ_{t-1}, Σ_{t-1}, u_t, z_t):
2:   µ̄_t = A µ_{t-1} + B u_t
3:   Σ̄_t = A Σ_{t-1} Aᵀ + R_t
4:   K_t = Σ̄_t Cᵀ (C Σ̄_t Cᵀ + Q_t)⁻¹
5:   µ_t = µ̄_t + K_t (z_t - C µ̄_t)
6:   Σ_t = (I - K_t C) Σ̄_t
7: return µ_t, Σ_t

The filter's inputs are the probability of the system's state at t-1, given by the doublet (µ_{t-1}, Σ_{t-1}), the control u_t and the measurement signal z_t at t. In the predict step, at lines 2 and 3 of the algorithm above, the predicted values represent the probability of the next state, calculated from the control signal u_t and the matrices modeling the system (A, B, C); the covariance is also updated. From lines 4 to 6, the new probability of the system's state is recalculated by applying the measurement signal to the probability calculated before. K_t is called the Kalman gain, which specifies the degree to which the measurement is incorporated into the new estimated state. Line 5 manipulates the mean, adjusting it in proportion to the Kalman gain K_t and the deviation between the actual measurement z_t and the predicted measurement C µ̄_t.

Illustration with a 1-D vector:

Figure 1: Demonstration of the Kalman filter's concept with a 1-D vector:
a) initial probability distribution;
b) a measurement appears with its own uncertainty;
c) new probability distribution after integrating the measurement using the Kalman filter algorithm;
d) the probability distribution after motion to the right;
e) a new measurement is acquired;
f) the new probability distribution, calculated as in step c.

The basic concept of the Kalman filter can be figured out with a 1-D vector. In reality, however, a system always consists of more than one state, and between these states there are relations, called correlations, which determine the result of the Kalman filter, since the filter uses the covariance as a parameter to determine the Kalman gain K_t. This is a strong point of the Kalman filter: it helps us obtain the unobservable states from the observable ones.

However, the Kalman filter depends strongly on the initial values as well as on the physical model applied, the covariance and the characterization of the Gaussian noise.
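The predict/update cycle above can be sketched for a scalar state; the system model and noise values below are illustrative assumptions, not parameters identified for the pendulum.

```python
import numpy as np

def kalman_step(mu, sigma, u, z, a=1.0, b=1.0, c=1.0, r=0.5, q=1.0):
    """One predict/update cycle of a scalar Kalman filter.
    mu, sigma: previous mean and variance; u: control; z: measurement;
    a, b, c: scalar system model; r: process noise; q: measurement noise."""
    # Predict step (lines 2-3 of the pseudo-code)
    mu_bar = a * mu + b * u
    sigma_bar = a * sigma * a + r
    # Update step (lines 4-6): Kalman gain, then mean/variance correction
    K = sigma_bar * c / (c * sigma_bar * c + q)
    mu_new = mu_bar + K * (z - c * mu_bar)
    sigma_new = (1 - K * c) * sigma_bar
    return mu_new, sigma_new

# A stationary system (u = 0) observed repeatedly: the variance shrinks
# and the mean moves toward the measurements.
mu, sigma = 0.0, 10.0
for z in [1.2, 0.9, 1.1, 1.0]:
    mu, sigma = kalman_step(mu, sigma, u=0.0, z=z)
print(round(mu, 2), sigma)
```

Note how the gain K weights the innovation z - c·µ̄ exactly as line 5 of the pseudo-code describes.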
C. Image Processing:

We need to obtain the system's state using only an ordinary camera. To do this, two markers are stuck on the cart and the pendulum as specific image features.

1. Tracking markers: Using a color-comparison algorithm, the unnecessary objects are filtered out.

2. Tracking the system's states: After obtaining the positions of the two markers, we have the position of the cart and the angle of the pendulum. Nevertheless, the LQR regulator also requires the speeds of the cart and the pendulum, so another Kalman filter is used to estimate them. With appropriate parameters, acquired from real experiments, the Kalman filter works quite well, giving speed values corresponding to those of the encoder. The image taken from the camera is delayed by about 30-35 ms due to the camera speed; we will return to this problem when controlling the system with the camera.
Figure 2: Image obtained after filtering the color
What remains of the image is only the objects with the marker's color, plus noise. "Erode" and "Dilate" algorithms are applied to obtain a cleaner black-and-white image.
III. MODELING THE INVERTED PENDULUM SYSTEM AND CONSTRUCTING THE LQR REGULATOR
The inverted pendulum system contains two parts: a cart traveling on a specific rail and a rod mounted on the cart. The whole system's motion is driven by a DC motor.

Figure 5: Kinematic structure of the system

Figure 3: After reducing noise
After that, two seed-points are set in the regions of the two markers. A rectangle centered on each seed-point is drawn, and we compute the mean position of all the white pixels in that region. The result is the center of all the white pixels (the center point), and the seed-point is then updated to this calculated point. These steps are repeated, so that we can keep track of the two markers on the system.
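This iterative centroid search can be sketched on a binary mask; the window size and the synthetic image below are illustrative assumptions.

```python
import numpy as np

def refine_seed(mask, seed, half=5, iters=10):
    """Iteratively move a seed point to the centroid of white pixels
    inside a (2*half+1)-pixel window, as in the marker-tracking step."""
    h, w = mask.shape
    y, x = seed
    for _ in range(iters):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        ys, xs = np.nonzero(mask[y0:y1, x0:x1])
        if len(ys) == 0:          # no marker pixels: keep the current seed
            break
        ny, nx = int(round(ys.mean())) + y0, int(round(xs.mean())) + x0
        if (ny, nx) == (y, x):    # converged on the marker centre
            break
        y, x = ny, nx
    return y, x

# A white 3x3 marker blob at (20, 30) on a black image;
# the seed starts a few pixels away and locks onto the blob.
mask = np.zeros((64, 64), dtype=bool)
mask[19:22, 29:32] = True
print(refine_seed(mask, seed=(16, 27)))
```

In practice the mask would come from the color filtering and erode/dilate steps described above.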
Since the LQR regulator requires a linear model with appropriate matrices A and B, we will model the inverted pendulum system and linearize it around the balance point.

A. Modeling:
Figure 6: Forces analyzed on the cart and the pendulum

Figure 4: Two seed-points are set
However, if the marker moves too quickly, in the next image frame the marker may leave the rectangle. In that case, we would not be able to acquire information from the marker until it comes back into the rectangle's region. At this point, the Kalman filter is used to improve the quality of marker tracking. Instead of waiting for the center point of the next image to update the seed-point, the Kalman predict step brings the seed-point to where the probability of the marker's position is higher, before applying the center-point-finding algorithm. This considerably reduces the probability of losing the marker when it moves. For the two markers, we use two Kalman filters to estimate their positions.
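The prediction that moves the seed-point between frames can be sketched with a constant-velocity motion model; the sample time and state values below are illustrative assumptions.

```python
import numpy as np

# Constant-velocity prediction of the marker seed-point between frames
dt = 0.033                            # ~30 fps camera, an assumption
F = np.array([[1., 0., dt, 0.],       # x' = x + vx*dt
              [0., 1., 0., dt],       # y' = y + vy*dt
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])

state = np.array([20.0, 30.0, 60.0, 0.0])   # x, y, vx, vy (px, px/s)
predicted = F @ state                        # seed-point for the next frame
print(predicted[:2])
```

A fast-moving marker (here 60 px/s in x) is then searched for at the predicted location instead of the stale one, which is why the rectangle is less likely to lose it.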
Applying Newton's 2nd law at the centre of gravity of the pendulum along the horizontal and vertical components yields (with m the pendulum mass, l the distance from the pivot to its centre of gravity, I its moment of inertia, θ its angle from the downward vertical, and N, P the horizontal and vertical reaction forces):

N = m ẍ + m l θ̈ cos θ - m l θ̇² sin θ    (10)
P - m g = -m l θ̈ sin θ - m l θ̇² cos θ    (11)

Taking moments about the centre of gravity yields the torque equation:

-P l sin θ - N l cos θ = I θ̈    (12)

Applying Newton's 2nd law for the cart (mass M, friction coefficient b, applied force F) yields:

F = M ẍ + b ẋ + N    (13)

From (10), (11), (12) and (13) we obtain:

(M + m) ẍ + b ẋ + m l θ̈ cos θ - m l θ̇² sin θ = F    (14)
(I + m l²) θ̈ + m g l sin θ = -m l ẍ cos θ    (15)

Linearizing around the upright equilibrium point θ = π, with θ = π + φ for small φ (so cos θ ≈ -1, sin θ ≈ -φ and θ̇² ≈ 0):

(I + m l²) φ̈ - m g l φ = m l ẍ    (16)
(M + m) ẍ + b ẋ - m l φ̈ = F    (17)

Assume that the state vector is x = [x, ẋ, φ, φ̇]ᵀ and the input u is the force F. The state equations of the inverted pendulum are:

ẋ = A x + B u    (18)

where, with D = I(M + m) + M m l²:

A = [0, 1, 0, 0;
     0, -(I + m l²) b / D, m² g l² / D, 0;
     0, 0, 0, 1;
     0, -m l b / D, m g l (M + m) / D, 0]    (19)

B = [0, (I + m l²) / D, 0, m l / D]ᵀ    (20)
B. Construction of the LQR regulator:

After modeling the inverted pendulum system, we obtain the two principal matrices A and B. Next, we need to identify the matrices Q and R, which determine the response quality of the regulator. The diagonal of matrix Q defines the response time of the four states x, ẋ, φ, φ̇: the bigger a coefficient, the faster the corresponding state returns to 0. Matrix R represents the power consumed through the control process: raising the coefficient of R leads to a reduction of the power consumed. To control this system in continuous time, we can find the matrix K with the MATLAB command:

K = lqr(A, B, Q, R)

Since we implement our controller on a microprocessor, a discrete-time model is required. MATLAB helps us identify the matrices A and B in discrete time with a predefined sample time, and Kd is then recalculated:

Kd = lqrd(A, B, Q, R)

The control law, implemented in the microprocessor, is:

u = -K x

IV. CONTROLLING THE INVERTED PENDULUM

To bring the pendulum from the stable equilibrium point to the unstable one, we have two steps of control.

A. Swing-up

The "swing-up" step is used to bring the angle of the pendulum to the unstable equilibrium point, with u considered as the voltage applied to the DC motor.

Figure 7: Region supplying power to the DC motor

According to Figure 7, when the pendulum rotates clockwise (θ̇ > 0) and the angle stays inside the corresponding region, the cart is supplied with power to move to the right in order to increase the pendulum's mechanical energy. The same technique is applied, with the opposite cart direction, when the pendulum rotates counterclockwise (θ̇ < 0) and the angle stays inside the mirrored region. These two steps are repeated until the angle condition is satisfied.

B. Controlling with LQR

After reaching the neighborhood of the equilibrium point, the controller switches to the LQR regulator, whose output follows the rule u = -K x.
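The lqrd step can be sketched as follows: discretize the plant with the controller's sample time and solve the discrete-time Riccati equation. The model below is a placeholder, and the cost matrices are applied directly in discrete time, a simplification of what lqrd actually does.

```python
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.signal import cont2discrete

# Placeholder unstable continuous-time model (illustrative values only)
A = np.array([[0., 1.],
              [10., 0.]])
B = np.array([[0.],
              [1.]])
Q = np.diag([100., 1.])
R = np.array([[1.]])

# Discretize the plant with the sample time (zero-order hold)
Ts = 0.01
Ad, Bd, _, _, _ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), Ts)

# Discrete ARE, then Kd = (R + Bd' S Bd)^-1 Bd' S Ad
S = solve_discrete_are(Ad, Bd, Q, R)
Kd = np.linalg.solve(R + Bd.T @ S @ Bd, Bd.T @ S @ Ad)

# The discrete closed loop Ad - Bd Kd must have all eigenvalues inside
# the unit circle; the control law on the microprocessor is u[k] = -Kd x[k]
print(np.all(np.abs(np.linalg.eigvals(Ad - Bd @ Kd)) < 1.0))
```

This mirrors the lqr/lqrd pair used in the paper: the continuous design gives the target behavior, while the discrete gain respects the microprocessor's sample time.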
In this way the pendulum is maintained at the unstable balance point.

V. RESULTS AND CONCLUSIONS

A. Simulation results

With the chosen matrix Q and R = 1, we obtain the gain matrix K. The simulation plots show the cart position (x), the cart velocity (ẋ), the pendulum angle (θ), the pendulum angular velocity (θ̇) and the control value (u) over 10 seconds.

Figure 9: Result when having a delay of 35 ms in the feedback information

K is then obtained again according to the new matrix Q.
According to Fig. 8, all of the values converge to 0 within 3 seconds. The control signal oscillates within a range that is not very high for the motor.
Figure 10: Result when having a delay of 35ms in feedback information after changing matrix Q
With the new matrix K, the system's response looks better.
Figure 8: Simulation results in MATLAB
Fig. 9 shows the response of the control system. As can be seen in the graph, with a delay of 35 ms the LQR regulator cannot maintain the whole system at the equilibrium point; the system oscillates around it.
To improve the quality of control, we change matrix Q to:
In general, the coefficient corresponding to the pendulum angle is reduced significantly, while the one corresponding to the cart position is increased. This leads to a decrease of the control signal value. However, the system can now be held at the equilibrium point even though the feedback information is delayed by 35 ms. This experiment shows that it is possible to control this system with an ordinary camera.

B. Results using encoder to get the feedback information
Applying the matrix K to the microprocessor with the chosen sample time, the corresponding responses of the system are shown in Figure 12.
Figure 11: The pendulum is stable at the inverted position
Figure 14: System's responses; from top to bottom, from red to blue: angle, angular speed, cart's position, longitudinal speed, control signal
D. Conclusions:

The results shown in the parts above present the success in controlling an inverted pendulum by encoder and by camera. With the encoder, the system is more stable, since the sample time is smaller, and it is able to swing up. The control using the camera can only maintain the pendulum around the equilibrium point, due to the sample time and the delay coming from the camera.

Figure 12: System's responses
From the starting point to the unstable equilibrium point, it takes the system 6 seconds, which is much more than in the MATLAB simulation. Furthermore, in reality, the control signal varies within a small range after the system reaches the equilibrium point.

C. Results using camera to get feedback information

By using the camera and the Kalman filter, we obtain the result of controlling the pendulum at the inverted point. The control system consists of two parts: an application on the PC, which estimates the system's states, calculates the control signal and transfers that value to the microprocessor; and the embedded system on the microprocessor, which, after receiving the control signal, emits the pulses to drive the DC motor. The system's responses are shown in Figure 14.
To improve the quality of controlling the system with the camera, a better camera should be considered, and a better model is required to obtain more precise control parameters.
ACKNOWLEDGEMENT

We would like to thank Assoc. Prof. Dr. Huynh Thai Hoang for his guidance throughout the time realizing this project. We would like to express our appreciation to Ms. Ho Thanh Phuong and the Power Electronics Laboratory for lending us the valuable space to realize this project. We would also like to thank the brothers and sisters in PIF for helping us with the necessary equipment and encouraging us to complete this research. Special gratitude goes to Mr. Lai Thanh Phuoc, Mr. Nghiem Hong Hiep and Mr. Nguyen Huu Huan for their great contribution to this project.

REFERENCES

[1]
Huỳnh Thái Hoàng, lecture notes "Advanced Control Theory" (Lý thuyết điều khiển nâng cao) and "Introduction to Intelligent Control" (Nhập môn điều khiển thông minh).
[2]
Khalil Sultan, Inverted Pendulum Analysis, Design and Implementation.
[3]
Johnny Lam, Control of an Inverted Pendulum.
[4]
K. Udhayakumar (2007). Design of Robust Energy Control for Cart-Inverted Pendulum. Department of Electrical and Electronics Engineering, College of Engineering Guindy Campus, Anna University, Chennai 600 025, Tamil Nadu, India.
[5]
Steven A. P. Quintero, Controlling the Inverted Pendulum, Department of Electrical and Computer Engineering, University of California, Santa Barbara.
[6]
Texas Intruments, TM4C123GH6PM, Digital Signal Controller (DSCs) ,Data Manual.
[7]
Sebastian Thrun, Wolfram Burgard, Dieter Fox, PROBABILISTIC ROBOTICS
[8]
Manuel Stuflesser, Markus Brandner, Vision-Based Control of an Inverted Pendulum using a Cascaded Particle Filter.
Figure 13: Controlling program on PC
As can be seen in Fig. 14, the system oscillates around the equilibrium point with a small variance.
The 2014 FEEE Student Research Conference (FEEE-SRC 2014)
Design the Optimal Robust PID Controller for a Ball and Beam System

Quang-Chanh Nguyen
Department of Automatic Control
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]

Cong-Pham Do
Department of Automatic Control
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper presents an application of the Shuffled Frog Leaping Algorithm (SFLA) to optimize the parameters of a robust PID controller. The SFLA is a meta-heuristic searching method inspired by the memetic evolution of a group of frogs when seeking food. The main idea of the SFLA is a frog-leaping rule for local search and a memetic shuffling rule for global information exchange. In this study, the parameters of the PID controller are limited by the Routh stability criterion and the theorem of robust stability. The SFLA then works out a PID controller with the best-fitness parameters in the robustly stable region. The experimental results in this paper show a higher robustness of our controller model when compared with the normal PID controller. Moreover, the performance of this optimal robust PID controller also exceeds that of the well-known LQR controller.
II. BALL AND BEAM SYSTEM
The ball and beam system often includes the following basic components: a DC motor, an actuator, a long beam, a ball and sensors (a position sensor, an angle sensor). The DC motor will generate a torque to change the angle of the beam. The ball’s rolling motion on the beam and the position of the ball depend on the angle of the long beam. Because of its nonlinear nature and low cost, the ball and beam system has been widely used as the benchmark to test control algorithms [2], [3]. A. Mathematical model of the system
Keywords — SFLA; optimal robust PID controller; ball and beam system
I. INTRODUCTION
When designing control systems with traditional approaches such as optimal control, adaptive control or predictive control, it is usually necessary to have a nominal model describing the dynamic behavior of the plant to be controlled. However, nominal models are very difficult or even impossible to obtain in some cases, due to the complexity and nonlinearity of the controlled plants; the plants are usually affected by noise (from temperature, the electrical source, sensor signals, and so on), and the parameters of the plants can change while the plant is operating. All of the uncertainties above can degrade the performance of the plant or make it unstable. This is the driving force behind the invention of control design approaches that are not affected by the uncertainties. The optimal robust PID controller is the solution that this paper puts forward to solve this problem. The method is based on the classical PID controller, the theorem of robust stability [1] and the SFLA.
Fig.1: The ball and beam system
We have the mathematical model of the ball and beam system [3] as follows:

(J_B + m_s r²) θ̈ = (C_e K_r / R) v_motor - ((C_e K_r)² / R) θ̇ - 2 m_s r ṙ θ̇ - m_s g r cos θ

r̈ = (5/7) (r θ̇² - g sin θ)

where J_B is the beam's moment of inertia, θ is the angle of the beam from its horizontal balance position, θ̇ is the angular velocity of the beam, r is the position of the ball, ṙ is the velocity of the ball, C_e is a parametric constant of the motor, K_r is the transmitting ratio of the actuator, v_motor is the voltage supplied to the motor, and R is the resistance of the motor.
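These two equations can be sketched directly as state derivatives; the parameter values follow Table I, while the test state and input voltage are arbitrary illustrations.

```python
import math

def ball_beam_derivs(state, v_motor, ms=0.05, JB=1.6/12, Ce=0.0924,
                     Kr=29.6, R=1.107, g=9.81):
    """State derivatives of the nonlinear ball-and-beam model.
    state = (r, r_dot, theta, theta_dot)."""
    r, rd, th, thd = state
    # Ball equation: r'' = (5/7)(r*theta_dot^2 - g*sin(theta))
    r_dd = (5.0 / 7.0) * (r * thd ** 2 - g * math.sin(th))
    # Beam equation: motor torque minus back-EMF, Coriolis and gravity terms
    th_dd = ((Ce * Kr / R) * v_motor - ((Ce * Kr) ** 2 / R) * thd
             - 2.0 * ms * r * rd * thd - ms * g * r * math.cos(th)) \
            / (JB + ms * r ** 2)
    return rd, r_dd, thd, th_dd

# With the beam tilted up by 0.1 rad, everything at rest and no motor
# voltage, the ball accelerates down the slope and the beam tips further.
d = ball_beam_derivs((0.2, 0.0, 0.1, 0.0), v_motor=0.0)
print(d[1] < 0, d[3] < 0)
```

A simulator would integrate these derivatives (e.g. with a Runge-Kutta step) to reproduce the plant's open-loop behavior.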
The state-space model of the system, obtained by linearizing its mathematical model around the equilibrium position x_e = [x_1e 0 0 0]ᵀ (with state vector x = [r ṙ θ θ̇]ᵀ), can be found as:

ẋ = A x + B u,  y = C x

The matrices A, B and C are:

A = [0, 1, 0, 0;
     0, 0, -5g/7, 0;
     0, 0, 0, 1;
     -a_0, 0, 0, -a_1]

B = [0, 0, 0, b_0]ᵀ;  C = [1, 0, 0, 0]

where:

a_0 = m_s g / (J_B + m_s x_1e²);  a_1 = (C_e K_r)² / ((J_B + m_s x_1e²) R);  b_0 = C_e K_r / ((J_B + m_s x_1e²) R)
B. Modeling the practical ball and beam system:

The sensors used in the model include an ultrasonic sensor HC-SR04 [4] and a 400-pulse encoder.
III. SHUFFLED FROG-LEAPING ALGORITHM
The SFLA [5], [6], [7] is a memetic meta-heuristic, a population-based cooperative search metaphor inspired by natural memetics. It is designed to find the global optimal solution based on the evolution of each individual group, combined with the exchange of information between the groups, to find individuals with good fitness in the entire population. To describe the general idea of the SFLA, we assume that a frog population of P randomly generated frogs is created, each frog representing a solution. The frogs are sorted in descending order of fitness and then divided into m different groups, each group containing n frogs (P = m · n), with the following categorization rule: the first frog goes into the first group, the second frog into the second group, the m-th frog into the m-th group, the (m+1)-th frog back into the first group, and so on. Within the n individuals of each group, we determine the fittest frog X_b, the least fit frog X_w, and also the globally fittest frog X_g (the indices b, w, g standing for best, worst and global). An evolutionary process is applied to improve the fitness of the individuals with the worst fitness. Evolution is mathematically represented as the following expressions:
D_i = c · rand() · (X_b - X_w)
X_w_new = X_w_old + D_i

where D_i represents the "gap" that the individual X_w will jump in order to reach a "position" with better fitness, and rand() is a uniform random number in [0, 1]. If X_w_new's fitness is better than X_w_old's, we replace X_w_old with X_w_new; otherwise, the evolutionary process is repeated with respect to the globally fittest frog X_g (i.e. X_g replaces X_b). If no improvement becomes possible in this case, a new frog is randomly generated to replace the worst frog. The evolution process continues for a specific number of iterations. After completing the process of evolution in each group, the frogs are merged into one large population and sorted in descending order of fitness. The local evolution and global shuffling continue until the convergence criterion is satisfied. In this study, the SFLA stops when the relative change in the fitness of the global best frog within a number of consecutive shuffling iterations is less than a pre-specified tolerance, or when the number of iterations reaches a predefined limit.

Fig.2: The ball and beam modeling.

Fig.3: The size of three balls.

The parameters of the ball and beam system are listed in Table I:

TABLE I. PARAMETERS OF THE BALL AND BEAM SYSTEM

Parameter | Value
m_s (mass of the ball) | 0.05 kg; 0.2 kg; 0.25 kg
J_B (the beam's moment of inertia) | 1.6/12 kg·m²
C_e (the constant of the motor) | 0.0924
K_r (the transmitting ratio of the actuator) | 29.6
R (the resistance of the motor) | 1.107 ohm
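The SFLA procedure of Section III can be sketched as a minimal minimisation routine; the fitness function, bounds, population sizes and the acceleration constant c below are illustrative assumptions.

```python
import random

def sfla(fitness, dim=2, frogs=30, groups=5, iters=50, local_steps=5,
         lo=-5.0, hi=5.0, c=1.0):
    """Minimal Shuffled Frog Leaping Algorithm sketch (minimisation).
    Frogs are shuffled into memeplexes; in each memeplex the worst frog
    leaps toward the memeplex best, then toward the global best, and is
    re-randomised if neither leap improves it."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(frogs)]
    for _ in range(iters):
        pop.sort(key=fitness)                      # best frog first
        xg = pop[0]                                # global best frog
        memeplexes = [pop[i::groups] for i in range(groups)]
        for mem in memeplexes:
            for _ in range(local_steps):
                mem.sort(key=fitness)
                xb, xw = mem[0], mem[-1]

                def leap(target):
                    return [w + c * random.random() * (t - w)
                            for w, t in zip(xw, target)]

                new = leap(xb)                     # leap toward memeplex best
                if fitness(new) >= fitness(xw):
                    new = leap(xg)                 # leap toward global best
                if fitness(new) >= fitness(xw):    # still no improvement:
                    new = [random.uniform(lo, hi) for _ in range(dim)]
                mem[-1] = new                      # replace the worst frog
        pop = [f for mem in memeplexes for f in mem]   # shuffle back together
    return min(pop, key=fitness)

random.seed(1)
best = sfla(lambda x: sum(v * v for v in x))       # sphere test function
print(best)
```

In the paper's setting the fitness would instead evaluate the closed-loop performance of a (K_p, K_i, K_d) triple restricted to the robustly stable region.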
IV. DESIGN OF THE OPTIMAL ROBUST PID CONTROLLER
A. Stabilizing the system.
The initial open-loop transfer function of the ball and beam system was found as:

P(s) = r(s) / r_set(s) = -129.4 / (s⁴ + 50.54 s³ - 14.7)
This transfer function has one pole on the right half of the complex plane, this causes the system to become unstable. The poles of the system must be relocated so that all poles are on the left half of the complex plane, which stabilizes the system. In order to do that, the state feedback method is used for the first feedback loop.
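The claim of a single right-half-plane pole can be checked numerically from the open-loop denominator coefficients quoted in the text (s⁴ + 50.54 s³ - 14.7); the snippet below counts the roots with positive real part.

```python
import numpy as np

# Open-loop characteristic polynomial: s^4 + 50.54 s^3 + 0 s^2 + 0 s - 14.7
roots = np.roots([1.0, 50.54, 0.0, 0.0, -14.7])
unstable = int(np.sum(roots.real > 0))
print(unstable)
```

The negative constant term guarantees at least one positive real root, which is exactly the pole that the state feedback loop must relocate.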
C_e ∈ [0.09, 0.0948],  R ∈ [1, 1.22],  m_s ∈ [0.05, 0.35],  x_1e ∈ [0.0, 0.4]
Then, the weighting transfer function W_m is chosen as follows:

W_m(s) = K_Wm · (s² + 2 ζ2 ω2 s + ω2²)(s + ω2) / ((s² + 2 ζ1 ω1 s + ω1²)(s + ω3))

where ζ1 = ζ2 = 0.6, ω1 = 1.8, ω2 = 4.0, ω3 = 60, and the gain K_Wm is computed from these parameters.
Fig.4: Matlab simulation diagram
It can be seen from Figure 4 that the inner feedback loop is the state feedback loop, fed back from the four states r, ṙ, θ, θ̇. The state feedback gain was chosen with the following values:

[K_r, K_ṙ, K_θ, K_θ̇] = [5.3378, 4.3394, 13.7388, 0.4221]

The nominal form of the closed-loop transfer function:

P(s) = r(s) / r_set(s) = -129.4 / (s⁴ + 58.34 s³ + 253.9 s² + 561.3 s + 587.6)

The four new poles are: s₁ = -2.1325, s₂ = -53.8156, s₃,₄ = -1.1976 ± 1.9198i

B. Choosing the weighting transfer function Wm.

We consider the uncertainties due to errors in determining the parameters of the system, such as the DC motor constant C_e (related to the back electromotive force), the motor resistance R, and changes in the weight of the ball m_s and in the equilibrium point x_1e. In order to make sure that the state feedback system remains stable under the effects of these uncertainties, the inverse multiplicative uncertainty model [1] is used.
Figure 6: Bode diagram of |W_m(jω)| (red line) and |ΔP(jω)/P(jω)| (blue lines)
C. Determining the robustly stable region for the parameters of the PID controller.

The transfer function of the PID controller:

C(s) = K_p + K_i / s + K_d s

The characteristic equation of the closed-loop system, 1 + C(s) P(s) = 0, becomes:

s⁵ + a44 s⁴ + a33 s³ + a22 s² + a11 s + a00 = 0

where (the d_i being the coefficients of the nominal closed-loop denominator):

a44 = d3,  a33 = d2,  a22 = d1 - 7 b0 K_d,  a11 = d0 - 7 b0 K_p,  a00 = -7 b0 K_i

Determining the Routh-Hurwitz table [8]:

TABLE II. ROUTH-HURWITZ TABLE

s⁵ | 1   | a33 | a11
s⁴ | a44 | a22 | a00
s³ | S31 | S32 | 0
s² | S21 | S22 | 0
s¹ | S11 | 0   | 0
s⁰ | S01 | 0   | 0
Figure 5: The inverse multiplicative uncertainty model

In this model the perturbed plant is:

P̃ = P / (1 - W_m Δ),  with |Δ| ≤ 1

The weighting transfer function W_m must satisfy the condition below:

|W_m(jω)| ≥ |ΔP(jω) / P(jω)|, for all ω

The ranges of the uncertainties are assigned the values listed above.

The elements of the Routh-Hurwitz table are:

S31 = a33 - a22/a44,  S32 = a11 - a00/a44
S21 = a22 - S32 (a44 / S31),  S22 = a00
S11 = S32 - S22 (S31 / S21),  S01 = a00

The necessary conditions for the closed-loop system to be nominally stable are:

a44 = d3 = 58.34 > 0
a33 = d2 = 253.9 > 0
a22 = d1 - 7 b0 K_d = 561.3 - 129.4 K_d > 0  ⇒  K_d < 4.338
a11 = d0 - 7 b0 K_p = 587.6 - 129.4 K_p > 0  ⇒  K_p < 4.54
a00 = -7 b0 K_i = -129.4 K_i > 0  ⇒  K_i < 0
The normal PID controller (for the 0.05 kg ball) was designed based on "sisotool" (a Matlab tool) and was adjusted to suit the real model as follows: PID = [K_p, K_i, K_d] = [-2.5471, -1.8767, -0.167]. The first test compares the experimental result of the optimal robust PID controller with the LQR controller. The initial condition is x0 = [-0.5 0 0.275 0]ᵀ and the set value is x = [0 0 0 0]ᵀ.
The sufficient conditions for the closed-loop system to be nominally stable are:

S31 = a33 - a22/a44 = 253.9 - (561.3 - 129.4 K_d)/58.34 > 0  ⇒  K_d > -110.133
S21 > 0
S11 > 0
S01 = a00 = -129.4 K_i > 0  ⇒  K_i < 0

Then, the necessary and sufficient conditions for the closed-loop system to be nominally stable are:

-110.133 < K_d < 4.338,  K_p < 4.54,  K_i < 0,  S21 > 0,  S11 > 0
The condition ensuring that the closed-loop system is robustly stable under the effects of the uncertainties is:

|W_m(jω) / (1 + C(jω) P(jω))| < 1, for all ω

Figure 9: Experimental result for m_s = 0.05 kg
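The nominal-stability test above can also be performed directly on the closed-loop roots, which is equivalent to the Routh-Hurwitz conditions; the negative plant numerator used below follows the sign reconstruction in this section and is an assumption.

```python
import numpy as np

def nominally_stable(Kp, Ki, Kd, num=-129.4,
                     den=(1.0, 58.34, 253.9, 561.3, 587.6)):
    """Check nominal closed-loop stability of PID + plant by examining
    the roots of s*den(s) + num*(Kd s^2 + Kp s + Ki)."""
    char = np.polyadd(np.polymul(den, [1.0, 0.0]),
                      np.polymul([num], [Kd, Kp, Ki]))
    return bool(np.all(np.roots(char).real < 0))

# The normal PID gains reported in the paper lie in the stable region;
# a PID with positive integral gain violates the K_i < 0 condition.
print(nominally_stable(-2.5471, -1.8767, -0.167))
print(nominally_stable(0.0, 0.5, 0.0))
```

A search such as the SFLA would call a check like this to reject candidate gain triples outside the stable region before scoring their fitness.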
D. Working out the robustly stable optimized PID controller.

Running the SFLA in Matlab, after 50 evolution iterations the parameters of the robustly stable optimized PID controller are:

K_p = -14.8574,  K_i = -4.7462,  K_d = -7.0462

The corresponding fitness value is J = 172.5301.

V. EXPERIMENTAL RESULTS
To have an objective view of the features of the optimal robust PID controller, an LQR controller and a normal PID controller were designed for comparison. Based on the state-space model of the system and with the help of Matlab (command "lqr"), we found the LQR parameters (for the 0.05 kg ball) as follows:

LQR gain [K_r, K_ṙ, K_θ, K_θ̇] = [5.3378, 4.3394, 13.7388, 0.4221]
Figure 10: Experimental result for m s = 0.25kg
The upper graph is the experimental result of the LQR, the lower graph is the experimental result of the optimal robust PID, and the red line is the desired position. It can be recognized that the settling time of both controllers is less than 5 seconds when the mass of the ball is changed. However, the steady-state error of the LQR controller differs from zero when the mass of the ball increases, while the steady-state error of the optimal robust PID controller stays at zero.

The next test compares the experimental result of the optimal robust PID controller with the normal PID controller. The initial condition is x0 = [0 0 0 0]ᵀ and the set value is x = [-0.2 0 0 0]ᵀ. The upper graph is the experimental result of the normal PID, the lower graph is the experimental result of the optimal robust PID, and the red line is the desired position. It can be recognized that when the mass of the ball is 0.05 kg, the control quality is not significantly different between the two controllers. However, when the mass of the ball is 0.2 kg, the normal PID does not hold the ball at the desired position (the ball oscillates around it), while the optimal robust controller does hold the ball at the desired position.

VI. CONCLUSION
The paper has presented a first attempt at designing an optimal robust PID controller using the SFLA. Compared to the LQR controller, the experimental results show that the control performance of the optimal robust PID controller is better. Compared to the normal PID controller, the experimental results show that the robustness of the optimal robust PID controller is better. The limitation of this study is that the optimal robust PID controller only achieves robust stability, not yet robust performance. This problem will be investigated in future works.
Fig.7: Experimental result for m s = 0.05kg
Fig.8: Experimental result for m s = 0.2kg
ACKNOWLEDGMENTS

We would like to thank all the people who have helped towards the work in this paper. We are especially grateful to Mr. Duc-Hoang Nguyen for his supervision and continued support. Without his help, this paper would never have been possible. Many thanks are given to Mrs. Thanh-Phuong Ho for helping us write skillfully.

REFERENCES

[1]
Thai Hoang Huynh, “Robust Controller”. Faculty of Electrical and Electronics Engineering, University of Technology Ho Chi Minh City, January 2013.
[2]
J. Glower, and J. Munighan, “Fuzzy Saturating Control of a Ball & Beam”, The Midwest Symposium on Circuits & Systems, 1996
[3]
Marta Virseda, “Modeling and Control of the Ball and Beam Process using camera”,Department of Automatic Control Lund Institute of Technology, March 2004.
[4]
Ultrasonic ranging module HC-SR04 [Online]. Available: http://www.tme.vn/upload/pdf/HC-SR04.pdf.
[5]
Majid Kamkar Karimzadeh. “Improved Shuffled Frog Leaping Algorithm for the Combined Heat and Power Economic Dispatch”,HCTL Open IJTIR, Volume 2, pp 89-90, March 2013.
[6]
E. Elbeltagi et al. “A modified shuffled frog -leaping optimization algorithm applications to project management”,Structure and Infrastructure Engineering, Vol. 3, pp 54-55, March 2007.
[7]
Duc-Hoang Nguyen, Thai-Hoang Huynh (2008), “Tuning of a Fuzzy Logic Controller for Balancing a Ball and Beam System by a Shuffled Frog Leaping Algorithm”, 10th IEEE International Conference on Control, Automation, Robotics and Vision (IEEE-ICARCV 2008), Hanoi, Vietnam, 17-20, 2008.
[8]
Thi Phuong Ha Nguyen, Thai Hoang Huynh. “Survey stability of the system”, Theory of automatic control, Publisher National University. Ho Chi Minh City, 2011.
Autopilot Multicopter using Embedded Image Processing System: Design and Implementation

Gia-Bao Nguyen-Vu
Dang-Khoa Phan
Department of Automatic Control Faculty of Electrical and Electronics Engineering Ho Chi Minh City University of Technology
[email protected]
Department of Automatic Control Faculty of Electrical and Electronics Engineering Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper presents a method to design and control an autopilot multicopter using an embedded image processing system. The design is based on two closed-loop controllers. The first one (the autopilot controller) reads the multicopter's position and generates the control signal for the second (the stabilizing controller), which drives the dynamic model. The multicopter can identify its position in 2-D space thanks to an onboard camera and an image processing program on an embedded board that recognizes markers using a centroid determination algorithm. A sonar sensor is also used to get information on the altitude. The system is implemented on the Beagle Bone Black embedded platform running the Robot Operating System and OpenCV. A ground station is built on a computer for general supervision and control. The results and future development of this project are also discussed in this paper.

Keywords — autonomous aerial vehicles; avionics; helicopters; aerodynamics; multipropeller platform; control systems.
I. INTRODUCTION
Multicopters have been used more and more widely in military as well as civil and r esearching applications along with the growing of unmanned aerial vehicle (UAV). With the advantage of simple mechanic structure and versatile operation, it is effective to use these flying machines for indoor applications, missions in small range area or those that require slow and steady movement like aerial filming, inspection, searching and emergency rescue, etc. However, multicopters are sensitive to wind, changing pressure or non-identical operation of motor which lead to drifting and instability, therefore reduce reliability and precision. This is because even multicopters that use an inertial measurement unit (IMU) to maintain balance don’t have any mechanism to feedback the position of the model in the air. To overcome this problem, some solutions have been applied such as using GPS [1] or vision system which is presented in [2]. These methods have some limits in certain conditions. GPS is fine for wide range operation in clear area, but becomes much less accurate in small range and non-open environment. Vision system uses a series of stereo cameras placed in fixed positions to view identify the multicopter’s position. Such a system has been developed by the Flying Machine Arena laboratory of ETH Zurich University in Switzerland, which proves its great effectiveness in many complex control tasks and trajectory generation for indoor quadcopters such as “dancing” quadrotors [3], building tensile structures [4], balancing and throwing/catching pole [5]. However, although vision system gives perfect operation but it’s only possible in the fixed view range of camera, the experiment lab, for example. In this paper, This project is sponsored by the Student Science Research Fund of Ho Chi Minh city University of Technology.
a new method will be introduced to feed back the multicopter's position in 2-D space using a camera mounted on the model and an embedded image processing system. First, some basic descriptions of the dynamic model will be stated: the structure of the model, the aerodynamic analysis, and how to balance the model. After that, the method of applying image processing on the embedded computer board BeagleBone Black running the Robot Operating System and OpenCV will be presented, as well as the ground station built on a server computer for supervision and control. Finally, some results and a plan for future development will be discussed.

II. MULTICOPTER DYNAMICS
A. Aerodynamic analysis

A hexacopter model is chosen for its stability and strength in comparison to other structures. The common quadcopter design has long been popular because of its simple and low-cost hardware configuration with four motors, but this configuration has limited payload and no tolerance for motor failure. Configurations with eight or ten motors have better payload capacity and failure tolerance, but at the cost of a complicated and expensive hardware structure. For an autopilot configuration using an onboard camera, the hexacopter reconciles these requirements and is therefore chosen.
Fig. 1 Dynamic Analysis of Hexacopter [6]
The hexacopter configuration consists of six motors located at the vertices of an equilateral hexagon. With ωᵢ the rotating speed of motor i and k the lift constant, the thrust created by each motor is:

Fᵢ = k ωᵢ²   (1)

This force creates a moment Fᵢ·l, where l is the distance between the motor and the center of gravity of the hexacopter. The total thrust given by the six motors is:

T = k ∑ᵢ₌₁⁶ ωᵢ²   (2)
Moreover, the angular velocity and acceleration of each motor create a torque around its axis. With b the drag constant and I_M the inertia moment of motor i, we have:

τ_Mi = b ωᵢ² + I_M ω̇ᵢ   (3)

Fig. 3 The axis of a hexacopter
If the torques created by the six motors are equal, they cancel each other and the hexacopter keeps its heading. To make this possible, the motors are divided into two groups rotating in opposite directions and arranged alternately.
A stabilizing controller manages the three angles, with a separate PID controller for each one.
Fig. 2 Rotating direction of motors
From the geometrical structure of the hexacopter and the impact of the motors on the body frame, it is possible to obtain the roll, pitch and yaw moments as follows (a more detailed calculation may be seen in [6]):
τ_φ = l k ∑ᵢ₌₁⁶ sin(αᵢ) ωᵢ²,   τ_θ = l k ∑ᵢ₌₁⁶ cos(αᵢ) ωᵢ²,   τ_ψ = ∑ᵢ₌₁⁶ (−1)^(i+1) (b ωᵢ² + I_M ω̇ᵢ)   (4)

where αᵢ is the angle between the axis y_B and motor i.
By adjusting the rotating speeds of the motors, the three angles roll, pitch and yaw can be changed, leading to various responses of the model: moving forward, backward, left or right, staying balanced, etc.

B. Balancing model

Because a multicopter is very sensitive to motor moments and any environmental impact such as wind or air pressure, balancing the model purely by manually controlling the rotating speed of the motors (with an RC controller, for example) would be a challenge. An embedded stabilizing controller can ease the task. An Inertial Measurement Unit (IMU) is used to return the roll, pitch and yaw of the model, and the controller tries to regain stability by automatically regulating the angular velocities of the motors when an unbalanced state is detected. To get full control of the model, there are four parameters to consider: the three angles roll, pitch and yaw, and the thrust, also called throttle. The C programming code and details of these controllers are presented in [3].
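The per-axis stabilizing loop described above can be sketched as a simple discrete PID controller. This is an illustrative sketch, not the actual flight-controller code referenced in the text; the gain values and output limit are placeholder assumptions.

```python
class PID:
    """Simple discrete PID controller; one instance per axis (roll, pitch, yaw)."""
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit     # saturation for the correction output
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        # Clamp to the safe output range.
        return max(-self.out_limit, min(self.out_limit, out))

# One controller per angle; these gains are placeholders, not tuned values.
roll_pid = PID(kp=4.5, ki=0.02, kd=0.5, out_limit=400.0)
# Measured roll of 3.2 deg with a 0 deg setpoint yields a negative correction.
correction = roll_pid.update(setpoint=0.0, measurement=3.2, dt=0.01)
```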
Fig. 4 Stabilizing PID controller
Input control signals in the form of 50 Hz voltage pulses (duty time from 1 to 2 milliseconds) are converted to angular velocity values multiplied by 100 (deg×100/s). The IMU data feedback (the body-frame angles), also scaled into degrees, is compared to the input control signal (the earth-frame angle) to return the rate error. The stabilizing controller returns roll, pitch and yaw rate values to the stability patch, which transfers these values to PWM and then calculates the right output for each motor. To calculate each motor's output from the roll, pitch and yaw values, a set of proportions is needed, the so-called roll, pitch and yaw factors. These proportions differ for each motor depending on the frame configuration and the location of that motor. With αᵢ the angle between the axis y_B and motor i, the roll and pitch factors are:

roll factorᵢ = sin(αᵢ)   (5)

pitch factorᵢ = cos(αᵢ)   (6)
The yaw factor defines the rotating direction of the motor: −1 or 1 for clockwise (CW) and counter-clockwise (CCW) respectively. The roll-pitch-yaw output is calculated as follows:

rpy outputᵢ = roll × roll factorᵢ + pitch × pitch factorᵢ + yaw × yaw factorᵢ   (7)
The throttle has a separate PID controller, which is affected by the altitude controller as seen in the block diagram shown in Fig. 5.
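The factor-based mixing of equations (5)–(8) can be sketched as follows. The arm angles and the CW/CCW assignment assume a generic regular-hexagon frame, and the scale and limit values are placeholders, not the values used on the real model.

```python
import math

# Arm angles alpha_i (radians) measured from the y_B axis, assuming a
# regular hexagon layout; the exact offsets depend on the frame.
ALPHAS = [math.radians(a) for a in (0, 60, 120, 180, 240, 300)]
# Yaw factors: alternating CW (-1) / CCW (+1) motors.
YAW_FACTORS = [1, -1, 1, -1, 1, -1]

def mix(adjusted_throttle, roll, pitch, yaw, rpy_scale=1.0, out_max=1000.0):
    """Eqs (5)-(8): per-motor output from throttle and roll/pitch/yaw commands."""
    outputs = []
    for alpha, yf in zip(ALPHAS, YAW_FACTORS):
        roll_factor = math.sin(alpha)     # eq. (5)
        pitch_factor = math.cos(alpha)    # eq. (6)
        rpy = roll * roll_factor + pitch * pitch_factor + yaw * yf   # eq. (7)
        out = adjusted_throttle + rpy * rpy_scale                    # eq. (8)
        outputs.append(max(0.0, min(out_max, out)))   # keep in safe range
    return outputs

# In hover (no roll/pitch/yaw command) all six motors get the same output.
outs = mix(adjusted_throttle=500.0, roll=0.0, pitch=0.0, yaw=0.0)
```

Note that for a symmetric frame the roll factors sum to zero, so a pure roll command redistributes thrust between motors without changing the total.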
Fig. 5 Altitude and throttle controllers

Now each motor's output can be calculated:

motor outputᵢ = adjusted throttle + rpy outputᵢ × rpy scale, limited to the output max range   (8)

where adjusted throttle is the throttle value that has been adjusted by the stability patch to meet the requirement for optimal performance, rpy scale is a scaling ratio affecting the sensitivity of the roll, pitch and yaw angles, and output max range is a safe tuning value ensuring the total output stays in the safe range of operation.

III. AUTOPILOT CONTROL SYSTEM

An autopilot is a system used to guide the copter without assistance from human operators, consisting of both hardware and its supporting software. The first aircraft autopilot was developed by Sperry Corporation in 1912 and demonstrated in a hands-free flight two years later. Autopilot systems are now widely used in modern aircraft and ships. The objective of a copter autopilot system is to consistently guide the copter to follow reference paths, navigate through waypoints, and track and follow an object. A powerful copter autopilot system can guide the copter in all modes including manual control, ascent, descent, trajectory following [4], object following and landing. Note that the autopilot is part of the multicopter flight control system, as shown in Fig. 6. The autopilot needs to communicate with the GCS (ground control station) for control mode switching, receive broadcasts from GPS satellites for position updates, send data (state parameters, signal strength, battery voltage, etc.) back to the GCS, and send control signals to all BLDC motors on the multicopter.

Fig. 6 Model of systems

The multicopter autopilot system comprises the GPS receiver, the IMU (the micro inertial guidance system), the tracking camera, and the control system: the APM flight control board (state estimator and flight controller) and the BeagleBone Black image processing unit, as illustrated in Fig. 7. The autopilot system has two fundamental functions: manual-mode control with the RC transmitter, and auto-mode motor control based on the reference paths and the current states.
A. System Design
A multicopter autopilot system is a closed-loop control system comprising two parts: the state observer and the controller. The most common state observer is a micro inertial guidance system, i.e. an IMU including the MPU6000 gyro-accelerometer and the HMC5883L magnetic sensor. There is also an altitude determination device, the MS5611, based on atmospheric pressure. The sensor readings, combined with the GPS information and the image sensor, can be passed to an image processing system to generate estimates of the current states (the 2-D position of the multicopter) for later control use. Based on the control strategy, UAV autopilots can be categorized into PID-based, fuzzy-based, NN-based and other robust autopilots. We choose a PID controller for our model.
Fig. 7 Model of image processing unit.
The Ubuntu Linux operating system was embedded on the BeagleBone Black board, and the OpenCV library was used with ROS (Robot Operating System) [5][6] for processing images and controlling the APM board over a MAVLink connection. After processing, this unit provides the 2-D position of the multicopter relative to the ground. This information is used as feedback data in the position PID control loop. The position PID loop is the outer loop (higher control level) that commands inner PID loops such as the stabilizing PID and the altitude-hold PID (lower control levels), so the required feedback frequency can be reduced; 10 fps image processing is acceptable.
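The cascade described above — a slow outer position loop driven by the ~10 fps image feedback, commanding the faster inner attitude loops — can be sketched as follows. The gain and angle-limit values are illustrative assumptions, not tuned values.

```python
def position_to_attitude(x_err_m, y_err_m, kp_deg_per_m=4.0, max_angle_deg=10.0):
    """Outer position loop (runs at ~10 Hz with the image feedback): converts
    the 2-D position error in metres into small roll/pitch angle setpoints
    (degrees) for the faster inner stabilizing PID loops."""
    def clamp(v):
        return max(-max_angle_deg, min(max_angle_deg, v))
    pitch_sp = clamp(kp_deg_per_m * x_err_m)   # lean forward/back toward target
    roll_sp = clamp(kp_deg_per_m * y_err_m)    # lean left/right toward target
    return roll_sp, pitch_sp

# Half a metre ahead and 0.2 m to the left of the locked position.
roll_sp, pitch_sp = position_to_attitude(0.5, -0.2)
```

The angle limit keeps the outer loop from commanding aggressive attitudes that the inner loops (or the marker tracking itself) could not follow.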
Fig. 9 Control diagram.
IV. IMPLEMENTATION
A. MEMS inertial sensors

Fig. 8 Detecting movement method
Centroids are found using the cv2.moments() function, where the centroid is defined as [7]:

centroid_x = M10/M00 and centroid_y = M01/M00   (9)

The moments M_ij are computed as:

M_ij = ∑ₓ,ᵧ I(x, y) · xʲ · yⁱ   (10)

The central moments mu_ij are computed by:

mu_ij = ∑ₓ,ᵧ I(x, y) · (x − x̄)ʲ · (y − ȳ)ⁱ   (11)

where (x̄, ȳ) is the mass center (12):

x̄ = M10/M00,  ȳ = M01/M00   (13)

In code:

centroid_x = int(M['m10']/M['m00'])   (14)

centroid_y = int(M['m01']/M['m00'])   (15)
Inertial sensors are used to measure the 3-D position and attitude information in the multicopter frame. Current MEMS technology makes it possible to use tiny and light sensors on the control board. Available MEMS inertial sensors include:

1) Micro Inertial Guidance System (IGS): A typical IGS or IMU includes 3-axis gyro rate and acceleration sensors and a magnetic sensor, whose outputs can be filtered to generate an estimate of the attitude (φ, θ, ψ). The IGS is widely used in big airplanes. A sensor solution for a small aerial vehicle is a combined IMU, which provides a complete set of sensor readings. The MPU6000 from InvenSense Inc. and the HMC5883L form a micro IGS with an update rate of up to 100 Hz for the inertial sensors. The IMU system includes:
- 3-axis gyro: measures the angular rates p, q, r.
- 3-axis accelerometer: measures the accelerations ax, ay, az.
- 3-axis magnetometer: measures the magnetic field, which can be used for heading correction (ψ).

2) Altitude Sensor: Another solution for altitude sensing uses atmospheric pressure. The basic idea of a pressure altitude sensor is to measure the pressure difference between the ground and the air, since atmospheric pressure is higher at the ground than at altitude. The system uses the MS5611 IC from Measurement Specialties.
The normalized central moments nu_ij are computed as:

nu_ij = mu_ij / M00^((i+j)/2 + 1)   (16)
The center of the marker is detected by the camera with the BeagleBone embedded board, which returns the x, y movement of the center; the real movement distance can then be calculated from the altitude h:

Δx_real = Δx_pixel · (2 h tan(FOV/2) / image width)   (17)

The position PID loop uses this distance data to steer the multicopter back to the locked position.
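The conversion from pixel displacement to ground displacement using the altitude can be sketched with a pinhole-camera model. The horizontal field of view and image width used here are assumptions (70.42° is the nominal C920 horizontal FOV, not a calibrated value).

```python
import math

def pixel_to_metres(dx_px, dy_px, altitude_m, img_width=640, hfov_deg=70.42):
    """Convert marker-centre displacement in pixels to ground displacement in
    metres, using the altitude and a pinhole-camera model.  FOV and image
    width are assumed values; a real system would use calibrated intrinsics."""
    # Width of the ground footprint seen by the camera, divided by pixels.
    metres_per_px = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0) / img_width
    return dx_px * metres_per_px, dy_px * metres_per_px

# 64-pixel horizontal shift of the marker centre, seen from 2 m altitude.
dx_m, dy_m = pixel_to_metres(64, 0, altitude_m=2.0)
```

The scale grows linearly with altitude, which is why the altitude estimate (pressure or sonar) feeds into the position loop.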
Fig. 10 ADC value vs Pressure
Its pressure resolution is about 0.042 mbar, which converts to an altitude resolution of about 10 cm.

3) Vision sensor and image processing unit: The Flying Machine Arena laboratory at ETH uses a motion capture system combining eight cameras mounted on the ceiling above a 10x10x10 m space [8], so the multicopter can only work in this space. In our system, only one camera is used, mounted on the multicopter; it provides the ground image, so we can track the movement of the multicopter and use it to lock position or follow an object. This is the first time an onboard image processing unit is applied on a multicopter, and also the first time ROS (Robot Operating System) is embedded on a BeagleBone for image processing purposes.

a) Camera: The Logitech HD Pro Webcam C920 has excellent Full HD 1080p resolution at 30 frames per second (fps); its 20-step autofocus delivers razor-sharp images (from 10 cm and beyond) for every occasion.

b) BeagleBone Black: with an AM335x 1 GHz ARM® Cortex-A8 processor and 512 MB DDR3 RAM, compatible with Ubuntu.

B. Ground Station

Any automatic system needs human-handled control at the highest level in case of emergency or necessary interference. A computer running QGroundControl can be a ground station for supervision and control of the multicopter in either manual or autopilot mode. QGroundControl is an open source Qt project which supplies basic platforms and widgets dedicated to air vehicle ground station design. More information about QGroundControl can be found at [13].
Fig. 13 Loops of data and instruction update
V. RESULTS
The algorithm was applied to the multicopter shown in Figure 1. The following figures show plots of the angle and altitude responses from the real experiments.
Fig. 14 Altitude response.
The red line is the altitude setpoint and the blue line is the real altitude (measured by the pressure altitude sensor). The steady-state error is about 0.2–0.3 meters, quite good for a pressure altitude sensor; we will improve this error by replacing the pressure sensor with a sonar sensor.
Fig. 11 Communication network
Connection between the multicopter and the ground station is built on MAVLink (Micro Air Vehicle Link) protocol.
Fig. 15 Roll response.
Fig. 12 Structure of a MAVLink frame
For appropriate supervision and control of the whole system, data and instructions need to be updated at suitable periods. The motor control signal is checked every 10 milliseconds, while instructions for the operation mode and new autopilot missions are read every 20 milliseconds. Any data and information sent to the ground station is placed in a 10 Hz loop. The heartbeat package, containing handshake information and the multicopter identification, renews the connection state every second.
Fig. 16 Pitch response
In Figures 15 and 16, the red line is the setpoint and the blue line is the response. The pitch shows a slight delay but is still in good control with only one PID loop for each angle. All real angle data are measured by the IMU with a suitable filter.

Image processing result: The following figure shows the results of image processing based on the moment-contour algorithm, implemented on a BeagleBone Black kit with the OpenCV library and managed by ROS in Ubuntu ARM.
ACKNOWLEDGMENT

The authors would like to express great gratitude to Assoc. Prof. Dr. Huynh Thai Hoang, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology, for his advice and support throughout the progress of this project. Many thanks also go to the members of the Student Science Research Club (PIF Club) for their support and encouragement in completing this project.

REFERENCES

[1] Derek B. Kingston and Randal W. Beard, "Real-Time Attitude and Position Estimation for Small UAVs Using Low-Cost Sensors", Department of Electrical and Computer Engineering, Brigham Young University, Provo, Utah 84602.
[2] Ducard G. and D'Andrea R., "Autonomous quadrotor flight using a vision system and accommodating frames misalignment", IEEE Symposium on Industrial Embedded Systems, Lausanne, pp. 261-264, July 2009.
[3] Federico Augugliaro, Angela P. Schoellig, and Raffaello D'Andrea, "Dance of the Flying Machines", IEEE Robotics and Automation Magazine, 2013.
[4] Federico Augugliaro, Ammar Mirjan, Fabio Gramazio, Matthias Kohler, and Raffaello D'Andrea, "Building Tensile Structures with Flying Machines", IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013.
[5] Dario Brescianini, Markus Hehn, and Raffaello D'Andrea, "Quadrocopter Pole Acrobatics", IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013.
[6] V. Artale, C.L.R. Milazzo and A. Ricciardello, "Mathematical Modeling of Hexacopter", Applied Mathematical Sciences, Vol. 7, 2013.
[7] DIY Drones. Official ArduPlane repository, ArduPlane-2.75 package, October 28, 2013. https://code.google.com/p/ardupilotmega/downloads/list
[8] Dominik Honegger, Lorenz Meier, Petri Tanskanen and Marc Pollefeys, "An Open Source and Open Hardware Embedded Metric Optical Flow CMOS Camera for Indoor and Outdoor Applications," ETH Zürich, Switzerland.
[9] R. B. Rusu, "ROS - Robot Operating System", Willow Garage, Inc, 2010.
Fig. 17 Center of marker.
The real image captured by the camera is on the left side, and the processed result is on the right side of Fig. 17. The image processing speed easily reaches 15 frames per second (one frame takes only 66 ms to process on the BeagleBone embedded board).

VI. CONCLUSION & FUTURE WORK
In this paper, the autopilot system for a multicopter is discussed in both hardware and software detail. The parts of the whole autopilot system — observer, state estimator, flight controller, ground station and feedback method — are described specifically. The autopilot system is being applied to a hexacopter tracking a fixed marker on the ground with OpenCV and the BeagleBone Black. The final goal of this project is to build an autopilot system with a real-time image processing unit that can track and follow an object, applicable to surveillance, cinematography, geological observation and monitoring purposes. In the future, the whole system will be optimized: following a trajectory, improving the altitude and angle errors, completing a stabilization module for the camera, and completing and improving the autopilot system. From the basic problem of marker recognition, the algorithm can be developed into object tracking, or following an identified subject for inspection, observation or rescue missions.
[10] Kerr J., Nickels K., "Robot operating systems: Bridging the gap between human and robot," System Theory (SSST), 2012 44th Southeastern Symposium, pp. 99-104.
[11] Gary Bradski and Adrian Kaehler, "Learning OpenCV: Computer Vision with the OpenCV Library", O'Reilly.
[12] Sergei Lupashin, Angela Schoellig, Markus Hehn, and Raffaello D'Andrea, "The Flying Machine Arena as of 2010," IEEE International Conference on Robotics and Automation, Shanghai International Conference Center, May 9-13, 2011, Shanghai, China.
[13] QGroundControl. Ground Control Station for small air-land-water autonomous unmanned systems. http://qgroundcontrol.org/downloads and http://qgroundcontrol.org/mavlink/start.
The 2014 FEEE Student Research Conference (FEEE-SRC 2014)
Design Swing-Up and Balance Controllers for a Pendubot
Van-Khoa Le
Faculty of Electrical and Electronic Engineering, Ho Chi Minh City University of Technology
[email protected] adaptive control algorithm or hybrid algorithm to control the Pendubot. In the fuzzy control field, Xiao Qing Ma designed a fuzzy PD controller to swing up an under-actuated robot. However, some fuzzy controls are just not very effective because it depends on the number of rules. In this paper, the method of partial feedback linearization, which is discussed in [2] and [4], is applied. In section six and seventh,the simulation results in Matlab, the mechanic model of the real Pendubot and the experimental result are presented. Finally is the conclusion, recommendations for future work and reference.
— In this paper, a control strategy to swing up and Abstract
balance the Pendubot around the vertical position is presented. The system’s model, the design of the balancing and swing-up controllers as well as the state estimator will also be introduced in details. Experimental results are presented to demonstrate the success in implementation of the theoretical design in this paper. Keywords — nonholonomic systems, the Pendubot, underactuated mechani cal systems.
I.
I NTRODUCTION
The Pendubot is a two-link planar robot with a DC motor that supplies a torque to the first link, while the second link is able to swing freely around its pivot. Encoders at each joint provide measurements of the angular position for each l ink. The single control input leaves the system under-actuated and also imposes non-holonomic constraints on its dynamics, making the Pendubot a good test for studying the control of a broad class of nonlinear systems often found in science and engineering. It is a counter part of the Acrobot which also has two links mounted vertically, but unlike Acrobot which has the actuation at its elbow, the Pendubot is controlled at its shoulder joint. This makes the control of Pendubot is more simple when compare with the Acrobot, but all similar control issues can be applied and studied. For the purposes of this project, th ere are two missions that have to be accomplished. The first problem is to swing the Pendubot from the resting downward position (q1=π, q2=0) to its top position (q1=0, q2=0), and the second problem is to balance the system at this unstable position. In section two, the design of the mathematical model of the Pendubot base on Lagrangian dynamics is showed. After that the state equations of the Pendubot were found by linearizing the model around the upright position ( ) = (0, 0, 0, 0) In section three, the balancing control was applied for the linearized model by linearizing the system and designing a full state feedback controller (LQR) and by the neuron fuzzy controller. Section four discusses the control algorithms used to swing up the links to the unstable equilibrium position. In the classic control field, Spong [5] applied partial feedback linearization to swing up the two links from hanging straight down to standing straight up. KANEDA [6] used energy-based methods to complete the swing-up control of an inverted pendulum. In addition, some researchers proposed the robust
II. MATHEMATICAL MODEL OF PENDUBOT
Fig.1: Pendubot
Consider the simplified Pendubot system in Fig. 1. The generalized coordinates q1 and q2 describe the angle between the horizontal plane and link 1, and the angle of link 2 relative to link 1, respectively. Each link i has mass mᵢ, total length lᵢ, and distance from pivot to center of mass lcᵢ, while the gravitational acceleration is denoted by g. Using Lagrangian dynamics and neglecting friction, one can derive the following matrix form of the equations of motion for the Pendubot system:

M(q) q̈ + D(q, q̇) q̇ + G(q) = τ   (1)
In this equation, M(q) is the inertia matrix, D(q, q̇) contains the Coriolis and centripetal terms, G(q) contains the gravitational terms, and τ is the control input vector. In the standard form (see [2] and [5]):

M(q) = [ m1 lc1² + m2 (l1² + lc2² + 2 l1 lc2 cos q2) + I1 + I2    m2 (lc2² + l1 lc2 cos q2) + I2
         m2 (lc2² + l1 lc2 cos q2) + I2                           m2 lc2² + I2 ]   (2)

D(q, q̇) = [ −m2 l1 lc2 sin(q2) q̇2    −m2 l1 lc2 sin(q2) (q̇1 + q̇2)
             m2 l1 lc2 sin(q2) q̇1     0 ]   (3)

G(q) = [ (m1 lc1 + m2 l1) g cos q1 + m2 lc2 g cos(q1 + q2)
         m2 lc2 g cos(q1 + q2) ]   (4)

Parameterizing the physical quantities by defining the parameters:

p1 = m1 lc1² + m2 l1² + I1,  p2 = m2 lc2² + I2,  p3 = m2 l1 lc2,
g1 = (m1 lc1 + m2 l1) g,  g2 = m2 lc2 g   (5)

the equations reduce to:

M(q) = [ p1 + p2 + 2 p3 cos q2    p2 + p3 cos q2
         p2 + p3 cos q2           p2 ]   (6)

D(q, q̇) = [ −p3 sin(q2) q̇2    −p3 sin(q2) (q̇1 + q̇2)
             p3 sin(q2) q̇1     0 ]   (7)

G(q) = [ g1 cos q1 + g2 cos(q1 + q2)
         g2 cos(q1 + q2) ]   (8)

III. THE BALANCING CONTROLLER

A. LQR controller

Linearizing the Pendubot dynamics about the vertical equilibrium (q1, q2, q̇1, q̇2) = (0, 0, 0, 0), with the state vector

x = [q1  q2  q̇1  q̇2]ᵀ   (9)

and the input u = τ (10), we have the state equations

ẋ = A x + B u   (11)

where A and B (12) are obtained by evaluating the Jacobians of the dynamics at the equilibrium.
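The reduced matrices (6)–(8) can be evaluated numerically with the identified parameters from Table 1. This sketch assumes the standard Pendubot parameterization, with g already absorbed into G1 and G2.

```python
import math

# Parameter values from Table 1; the matrix structure follows the
# standard Pendubot form (see [2], [5]).
P1, P2, P3, G1, G2 = 0.0799, 0.0244, 0.0205, 0.0107, 0.027

def M(q2):
    """Inertia matrix, eq. (6)."""
    return [[P1 + P2 + 2 * P3 * math.cos(q2), P2 + P3 * math.cos(q2)],
            [P2 + P3 * math.cos(q2), P2]]

def D(q2, q1dot, q2dot):
    """Coriolis/centripetal matrix, eq. (7)."""
    s = P3 * math.sin(q2)
    return [[-s * q2dot, -s * (q1dot + q2dot)],
            [ s * q1dot, 0.0]]

def G(q1, q2):
    """Gravity vector, eq. (8); q1 measured from the horizontal plane."""
    return [G1 * math.cos(q1) + G2 * math.cos(q1 + q2),
            G2 * math.cos(q1 + q2)]
```

Evaluating these three functions at the current state gives everything needed for simulation or for the computed-torque terms used later in the swing-up controller.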
The linear quadratic regulator (LQR) is an optimal state-feedback controller that minimizes the quadratic cost criterion J:

J = ∫₀∞ (xᵀ Q x + uᵀ R u) dt   (13)
The weighting matrices Q and R are designed and modified to achieve an acceptable tradeoff between performance and control effort. In practice, Q and R can be chosen with rules of thumb, such as setting each diagonal entry of Q to 1/(maximum accepted value of the corresponding state) and R to 1/(maximum accepted value of the input).
Through several simulation trials, the best Q and R were found, with R = 1.
Using the parameters in Table 1 results in the controllable linear system.

Table 1: Identified system parameters
P1 = 0.0799   P2 = 0.0244   P3 = 0.0205   G1 = 0.0107   G2 = 0.027

Substituting these values gives the numeric matrices A and B.
By solving the Riccati equation, we obtain the gain vector K. This gain vector is used in state feedback control to balance the manipulator at the upright position; the input torque is calculated by the formula u(t) = −K·x(t).

Fig 3: Membership functions of the fuzzy controller inputs (q1, q2, q2dot)
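The Riccati solution and the gain K = R⁻¹BᵀP can be computed numerically; the sketch below uses the Hamiltonian eigenvector method. The A and B matrices shown are placeholder values for illustration, not the Pendubot's identified model.

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR via the Hamiltonian eigenvector method: take the
    stable invariant subspace [X1; X2] of H, recover the Riccati solution
    P = X2 X1^-1, and return the gain K with u = -K x."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # n eigenvectors with Re < 0
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return Rinv @ B.T @ P

# Placeholder unstable 4-state model (NOT the identified Pendubot A, B).
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [12.5, -3.0, 0.0, 0.0],
              [-8.0, 20.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [7.0], [-9.0]])
K = lqr(A, B, Q=np.diag([10.0, 10.0, 1.0, 1.0]), R=np.array([[1.0]]))
closed_poles = np.linalg.eig(A - B @ K)[0]   # closed-loop eigenvalues
```

With Q positive definite, R positive definite and (A, B) controllable, the stabilizing solution exists and the closed-loop poles all have negative real parts.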
Fig 4: Membership function of fuzzy controller output
Fig 2: LQR feedback control
B. Fuzzy controller

Three inputs are used for the fuzzy controller: q1, q2 and q2dot. The first link is directly actuated, so q1dot can be ignored; including it would add more rules and make the controller more complicated.
Rules:

Q1  Q2  Q2dot  U
ZE  ZE  ZE     ZE
ZE  ZE  PO     PM
ZE  ZE  NE     NM
ZE  PO  ZE     PM
ZE  PO  PO     PM
ZE  PO  NE     ZE
ZE  NE  ZE     NM
ZE  NE  PO     ZE
ZE  NE  NE     NM
PO  ZE  ZE     PS
PO  ZE  PO     PB
PO  ZE  NE     NS
PO  PO  ZE     PB
PO  PO  PO     PB
PO  PO  NE     NS
PO  NE  ZE     NB
PO  NE  PO     PM
PO  NE  NE     NB
NE  ZE  ZE     NS
NE  ZE  PO     PS
NE  ZE  NE     NB
NE  PO  ZE     PB
NE  PO  PO     PB
NE  PO  NE     NB
NE  NE  ZE     NB
NE  NE  PO     PS
NE  NE  NE     NB

v1 = q̈1d + Kd (q̇1d − q̇1) + Kp (q1d − q1)   (19)

where q1d, q̇1d and q̈1d are the desired angular position, velocity and acceleration of the first link. The values of the gains are set to Kd = 27.5 and Kp = 350.
Fig 5: Linear control feedback
IV. SWING-UP CONTROLLER
The feedback linearization technique is applied. As discussed before, the equations of motion for the two-link underactuated manipulator are given by:

m11 q̈1 + m12 q̈2 + h1 + φ1 = τ1   (14)

m21 q̈1 + m22 q̈2 + h2 + φ2 = 0   (15)

Solving equation (15) for the angular acceleration of link two gives:

q̈2 = −m22⁻¹ (m21 q̈1 + h2 + φ2)   (16)
Substituting (16) into (14) yields:

τ1 = m̄11 q̈1 + h̄1 + φ̄1   (17)

where m̄11 = m11 − m12 m22⁻¹ m21, h̄1 = h1 − m12 m22⁻¹ h2, and φ̄1 = φ1 − m12 m22⁻¹ φ2.

However, the first link can reach the desired position q1 = 0 while the second link cannot reach q2 = 0. We have to pump energy into link 1 for a short time before the feedback linearization control takes action. After several trials, an input torque of 1.5 N·m applied for 0.5 s proved effective to swing the second link to the desired position.

V. SWITCHING FUNCTION

A switching function, based on the angles q1 and q2, is used to change the control of the Pendubot from swing-up to balancing:

if |q1| < π/10 and |q2 + q1| < π/10 then U = u_balance, else U = u_swingup
VI. SIMULATION RESULTS
For simulation in Matlab Simulink, the system parameters in Table 1 are used.
The input torque is defined as follows:

τ1 = m̄11 v1 + h̄1 + φ̄1   (18)

where v1 is the linear controller to be designed. The dynamic equation of the first link then becomes q̈1 = v1. Let v1 be the PD controller defined in (19).
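Equations (18)–(19) and the switching rule of section V can be sketched together. The barred terms are assumed to be computed elsewhere from the current state and passed in; the desired velocity and acceleration of link 1 are taken as zero.

```python
import math

def swing_up_torque(q1, q1dot, mbar11, hbar1, phibar1,
                    q1_des=0.0, kd=27.5, kp=350.0):
    """Partial feedback linearization, eqs (18)-(19): the PD law v1 drives
    link 1 toward q1_des, and the computed torque cancels the remaining
    dynamics (the barred terms, evaluated from the current state)."""
    v1 = kd * (0.0 - q1dot) + kp * (q1_des - q1)   # eq. (19), q1d_dot = q1d_ddot = 0
    return mbar11 * v1 + hbar1 + phibar1            # eq. (18)

def controller(q1, q2, u_swing, u_balance):
    """Switching rule from section V: balance near upright, else swing up."""
    if abs(q1) < math.pi / 10 and abs(q2 + q1) < math.pi / 10:
        return u_balance
    return u_swing
```

Calling `swing_up_torque` each control step with freshly evaluated barred terms realizes the computed-torque swing-up, while `controller` hands authority to the LQR once both angle conditions are met.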
Fig.6: Pendubot model when input = 0
When the input torque equals zero, the Pendubot returns to the stable downward position (q1 = π, q2 = 0).
When the Pendubot is balanced by the fuzzy controller, the system is not as stable as with the LQR algorithm because the rules are not good enough. However, the control range of the fuzzy controller is bigger than that of the LQR.
Fig.7: Pendubot balance by LQR algorithm

Fig.8: Swing up process without an input torque
Balancing the Pendubot at the top position using the LQR controller: even when the second link is inclined by an angle of π/15, the Pendubot is still able to return to the top position (q1 = 0, q2 = 0).
Fig.8: Angular position q1 of fuzzy controller
Fig.9: Swing up process with an input torque
Fig.9: Angular position q2 of fuzzy controller
Swinging up the Pendubot without pumping energy, the first link gets to the desired position (q1 = 0, q2 = 0) but the second link cannot. After pumping energy into the first link, the Pendubot reaches the top position in a short time.
Fig.10: Angular velocity q1dot of fuzzy controller
Fig.11: Angular velocity q2dot of fuzzy controller
Fig.10: Swing up and balance of a Pendubot
Fig.12: Output of fuzzy controller
The completed swing-up and balance control of the Pendubot uses feedback linearization control to swing the Pendubot from the bottom position (q1 = π, q2 = 0) to the top equilibrium position (q1 = 0, q2 = 0), then balances it at that position with the LQR controller.

VII. EXPERIMENTAL RESULTS
Fig 11: The Pendubot drawing in Inventor and the mechanic model of Pendubot
VIII. CONCLUSION

This paper considers the control of an under-actuated mechanical system: swinging up and balancing a Pendubot at the top position. Feedback linearization is used to swing the Pendubot up to the desired position, and the LQR method is applied to stabilize it at the equilibrium point. Conditions imposed on the angular positions of the two links are used to switch between the two controllers. The achievements of this paper are: deriving the mathematical model of a Pendubot; applying the LQR algorithm and a fuzzy controller for balancing, and feedback linearization control for swing-up; and testing on the real mechanical Pendubot system. Feedback linearization and LQR methods are perfect for ideal cases. However, they are very sensitive to system noise and external disturbances; they could not stabilize the manipulator at the upright position in the presence of certain levels of noise and disturbance. Neuro-fuzzy balancing controllers are successful at balancing the manipulator at the upright position, but they depend on the quality of the rules.

REFERENCES
Fig 12: Angular position
Fig 13: Angular velocity
Fig 14: Output control
This section shows actual responses of the Pendubot system. The response of the real Pendubot is very close to the simulation. However, unlike the simulation, pumping energy is not necessary: because the inertia of the second link is large, the Pendubot can get to the top with just one swing, and it then stabilizes at that position very well.
[1] Xiao Qing Ma, Fuzzy Control for an Under-actuated Robotic, master thesis, Department of Mechanical Engineering, Concordia University, Quebec, Canada, 2001.
[2] Daniel Jerome Block, Mechanical Design and Control of the Pendubot, master thesis, General Engineering, Graduate College of the University of Illinois at Urbana-Champaign, 1996.
[3] Patrick Sheppard, Swing up Control of the Pendubot.
[4] Joudeh Yasin Abed, Comparative Study of Control Strategy for Under-actuated Manipulator, Master of Science thesis, American University of Sharjah, 2000.
[5] Mark W. Spong, The Swing Up Control Problem for the Acrobot, 1995.
[6] Kaneda, The Swing Up Control for the Pendubot Based on Energy Control Approach, Faculty of Computer Science and System Engineering, Okayama Prefectural University, 2002.
ORAL SESSION B
Design Video Doorbell Systems Van-Thang Vuong Department of Electronics Faculty of Electrical and Electronic Engineering Ho Chi Minh City University of Technology
[email protected] Abstract — In this paper a Video Door Bell is described, the system consists of a Leopard board and a PC or Tablet. The main purpose is to build a Video Doorbell system that can get the high definition (HD) video at the 1280x720 resolution with high fidelity audio simultaneously and transfer them over the network in real-time. The project was developed using a Software Development Kit provided by Texas Instruments and QT-creator to generate the Graphic User Interface (GUI) on PC. Besides, a GUI was also created for the Android device. Keywords — Leopard board; QT; TI SDK; gstreamer
V. INTRODUCTION

A video doorbell system is a real-time application. As society modernizes, such applications help provide a convenient, comfortable and secure daily life. Video doorbell systems are not yet popular in Vietnam: the product is only imported, and companies merely trade, build and install it. This project therefore aims to master the technical problem known as "live streaming", so that the system — both hardware and software — can later be modified, extended and improved. There are some demo projects, one of them presented in [6], but it has large delays when streaming video over the network, up to 30 seconds, and it streams only video, not audio. One reason is hardware-related: the use of a USB camera introduces large delays. The video data samples can be transmitted directly over the networked environment, for example through sockets, or through a higher-level protocol such as RTP. The second method has many advantages: it is efficient because it is designed for transmitting dynamic data, it has strong support for synchronizing data from different sources, and it additionally provides management services and quality control of the data transmission. Therefore, in this project, the RTP protocol is used as the solution.

VI. HARDWARE DESCRIPTION

A. DM368 Leopard Board
The Leopard Board DM368 is a high-performance, low-cost development board that utilizes an ARM-architecture processor based on the TMS320DM368 SoC and includes a VGA-resolution video capture system. The board also has an Ethernet port, USB 2.0, JTAG and serial ports for debugging, an SD memory slot, stereo audio I/O, an expansion connector, composite video TV output, and an LCD/DVI interface. The Leopard Board is an open-source hardware project, so the board can be customized to be compatible with a particular application.

Figure 1. The DM368 Leopard Board and Peripherals

B. Camera Module LI-5M03
Leopard Imaging (LI) provides a 5-megapixel camera board with compact-size modules. Its main features: a CMOS sensor MT9P03, support for 720p at 60 fps (frames per second), RGB output data format, and communication with the Leopard Board via the I2C standard. Figure 2 shows the actual board.
Figure 2. LI-5M03 Camera Board
Figure 3. System Block Diagram
VII. SYSTEM BLOCK DIAGRAM
A. Block diagram
Figure 3 describes the system connections on the two sides, the Home side and the Gate side. Most of the connected devices on the two sides are similar.
B. Project description
Gate side: a microphone is connected to the board through an audio amplifier circuit and receives speech from visitors. Similarly, a speaker is connected to the Voice Codec on the DM368 and plays back audio. A camera connected to the board through the Video Codec captures the picture and video input for the system. The "Door button" tells the house owner that a guest is outside. The Ethernet port is used for networking.
Home side: the microphone, speaker, and Ethernet port provide functionality similar to the Gate side. The screen displays video once the PC has received it from the Leopard DM368.
C. Algorithm Flowchart
The Graphical User Interface has two buttons named "StartTalk" and "StopTalk". This GUI is written with Qt Creator.
1. On the PC: the flowchart is described in Figure 4. When the system starts, the "Video Streaming Receive" application also runs concurrently; its task is to get video from the Leopard DM368. Next, a server with a given IP and port is opened, which then waits for requests from clients. If any client connects to the PC, the communication process starts.
Figure 4. Flowchart for algorithm on PC
If the user clicks the "StartTalk" button on the GUI, both the "Audio Capture" and "Audio Receive" applications run concurrently. The "Audio Capture" application captures speech from the microphone and sends it to the Leopard over the network; the "Audio Receive" application plays audio when it receives audio data from the Leopard. If the user clicks the "StopTalk" button, both applications mentioned above are killed, and the character "S" is sent to the Leopard so that it stops. At this point the communication process is terminated.
2. On the Leopard board DM368: the flowchart is described in Figure 5. First, the board runs the "Video Streaming App", which captures video from the camera, encodes it, puts the video data into RTP packets, and sends them to the PC over the network (details of this application are given in the next section). It then connects to the server using the matching IP and port. When the connection succeeds, the communication process starts. If the Leopard receives the character "S" (stop signal), both applications mentioned above are killed, the program stops, and the communication process is terminated.
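The control channel described above (the board connecting to a server opened on the PC, with the single character "S" used as a stop signal) can be sketched as follows. This is an illustrative Python sketch, not the project's C/Qt code, and the message framing is an assumption since the paper does not specify it:

```python
import socket
import threading

# Port 0 lets the OS pick a free port; the real system would use a fixed,
# agreed-upon port (the paper does not specify which).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

result = []

def pc_side():
    """PC: wait for the board to connect, then send the 'S' stop signal
    (as happens when the user clicks the 'StopTalk' button)."""
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"S")

def leopard_side():
    """Board: connect to the PC's server and terminate on receiving 'S'."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect(("127.0.0.1", port))
        if sock.recv(1) == b"S":
            result.append("stopped")

t = threading.Thread(target=pc_side)
t.start()
leopard_side()
t.join()
srv.close()
print(result[0])  # -> stopped
```

In the real system, killing the audio applications would happen before the socket is closed; here only the stop-signal exchange is modelled.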
The "Audio Capture" application captures speech from the microphone and sends it to the PC over the network; the "Audio Receive" application plays audio when it receives audio data from the PC.

The video path on the Leopard board is:

Camera capture (/dev/video0)
→ Processing data: set resolution (1280x720 or 1920x1080), maximum bit-rate (2,000,000 bps), profile (High), level (3.1), entropy mode (CAVLC), chroma, illumination, and other parameters
→ Video Encode: the encoded data is put into a buffer named hOutBuf
→ RTP H.264 packetizing: hOutBuf is placed into the RTP data field of an RTP H.264 packet ( | RTP Header | RTP Data | )
→ Send to the network using the RTP protocol
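The packetizing step above (placing hOutBuf into the RTP data field) can be sketched as follows. The 12-byte fixed header with dynamic payload type 96 is the standard RTP layout commonly used for H.264; the exact values used by the TI application are not given in the paper, so this is an assumption:

```python
import struct

def rtp_h264_packet(payload, seq, timestamp, ssrc, marker=False):
    """Prepend a 12-byte RTP fixed header to an H.264 payload.
    Layout: V=2,P=0,X=0,CC=0 | M,PT | sequence | timestamp | SSRC."""
    version = 2
    payload_type = 96                      # dynamic PT commonly used for H.264
    byte0 = version << 6                   # P, X, CC all zero
    byte1 = (int(marker) << 7) | payload_type
    header = struct.pack(">BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

hout_buf = b"\x00\x00\x00\x01\x65"         # stand-in for encoded H.264 data
pkt = rtp_h264_packet(hout_buf, seq=1, timestamp=3600, ssrc=0x1234)
print(len(pkt) - len(hout_buf))  # -> 12 (RTP header size)
print(pkt[0] >> 6)               # -> 2 (RTP version)
```

One packet per encoded buffer is shown for simplicity; real H.264 payloads larger than the network MTU are fragmented into several RTP packets.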
Figure 5. Flowchart for algorithm on Leopard

If the "Door button" is pressed, the board sends a "PingPong.mp3" file to the PC. Simultaneously, both the "Audio Capture" and "Audio Receive" applications are run.

VIII. TI SDK DESCRIPTION

The TI SDK is used to automate the compile process and the selection of driver and software modules. It provides an interface to select from a huge variety of settings and implements the compile process using autotools. The main function of the SDK is to reduce the time-to-market for TI clients; this is achieved by a modular design that makes the insertion of open-source and custom applications easier, and by integrating all the information in a single place. As shown in Figure 6, the TI Digital Multimedia Application Interface (TI DMAI) is used within the TI SDK to capture and process video.

Figure 6. Video processing
IX. GSTREAMER DESCRIPTION
GStreamer is an open-source multimedia framework consisting of several libraries used for audio playback, video playback, audio mixing, non-linear video editing, and more. GStreamer includes many kinds of filters and codecs, and new ones can be developed by writing plugins against a clean, generic interface. Major characteristics of GStreamer are its multiplatform support (for example Linux on x86, PPC, and ARM using GCC; Solaris on x86; Mac OS X; Microsoft Windows using MS Visual Studio; and others), a comprehensive core library, broad coverage of multimedia technologies, several container formats (asf, avi, mp4/mov/3gp, flv, mpeg ps/ts, mkv, mxf, ogg) and streaming formats (http, mms, rtsp), and extensive development tools, for example the gst-launch command-line tool and a graphical editor.
Figure 7. GStreamer Pipeline for Sender

Figure 8. GStreamer Pipeline for Receiver
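The sender and receiver audio pipelines of Figs. 7 and 8 might be approximated as gst-launch style descriptions. The element set here (alsasrc, mu-law encoding, PCMU RTP payloading over UDP) is an assumption reconstructed from the surrounding text, not read verbatim from the figures:

```python
# Hypothetical reconstruction of the audio pipelines (element set assumed).
RECEIVER_IP = "192.168.1.10"   # placeholder address

sender = (
    "alsasrc ! mulawenc ! rtppcmupay ! "
    f"udpsink host={RECEIVER_IP} port=5004"
)
receiver = (
    "udpsrc port=5004 caps=application/x-rtp,encoding-name=PCMU ! "
    "rtppcmudepay ! mulawdec ! audioconvert ! alsasink"
)

# In a real deployment these strings would be handed to gst-launch or
# gst_parse_launch(); here they are only constructed and inspected.
print("rtppcmupay" in sender)          # -> True
print(receiver.startswith("udpsrc"))   # -> True
```

All elements named above (alsasrc, mulawenc, rtppcmupay, udpsink, udpsrc, rtppcmudepay, mulawdec, audioconvert, alsasink) exist in standard GStreamer plugin sets.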
Fig. 7 and Fig. 8 describe the audio system in this project. Fig. 7 shows the GStreamer pipeline for the sender: it captures speech from the microphone using the "alsasrc" element, encodes the audio with the "mulaw" audio compression standard, and puts it into RTP data as PCMU payload, after which it is sent to the receiver. In Fig. 8, after receiving audio from the network, the receiver decodes it with the matching RTP audio compression standard and filters it; the "alsasink" element then pushes the audio data to the speaker.

X. QT-CREATOR DESCRIPTION

Qt is a multiplatform framework that can be used cross-platform, even on an embedded system, without making any changes to the code. It is a library with several building blocks that make the process of creating an application and a user interface easier. Trolltech was the original developer of Qt until Nokia bought it. Nokia also offers an open-source integrated development environment named Qt Creator, which was used to make the project described in this document.

Qt offers high runtime performance and a small footprint on embedded systems. It is natively developed in C++ but can be used from other programming languages. The framework is used in many open-source projects and is distributed under GPL, LGPL, and proprietary licenses.

The Qt version used to cross-compile the generated code for ARM is qt-embedded-4.6.2, and qt-4.6.2 was used to compile the code for the host; the Qt Creator development environment was used to make the GUI.

XI. IMPLEMENTATION RESULTS

A. Video quality

Figure 6. In the Lab116B1 room: 1280x720 resolution

Figure 7. In the Lab203B3 room

B. Audio quality
Audio transmission from PC to board and from board to PC: the sound quality is quite faithful, with small distortion.

C. Graphic user interface
On the PC: a friendly interface, easy to use.

Figure 8. Software on PC

On the tablet (running Android OS):

Figure 9. GUI on Tablet

XII. REFERENCES
[1] Leopardboard – DesignSomething home page. [Online]. Available: http://designsomething.org/leopardboard/default.aspx
[2] Texas Instruments, TMS320DM368 Software Developers Guide. [Online]. Available: www.ti.com
[3] Texas Instruments, dvsdk_dm368 Release Notes. [Online]. Available: www.ti.com
[4] Texas Instruments, DaVinci Multimedia Application Interface. [Online]. Available: www.ti.com
[5] W. Taymans, S. Baker, A. Wingo, R. S. Bultje, and S. Kost, GStreamer Application Development Manual. [Online]. Available: gstreamer.freedesktop.org
[6] D. Molloy, "Streaming Video Using RTP on the BeagleBone Black". [Online]. Available: http://derekmolloy.ie/streaming-video-using-rtp-on-the-beaglebone-black/
Application of Wireless Sensor Network and TCP Socket Server in Smart Home Thanh-Tan Pham
Nhut-Huy O
Hoang-Phi Le-Nguyen
Department of Electronics Faculty of Electrical and Electronics Engineering Ho Chi Minh City University of Technology
[email protected]
Department of Electronics Faculty of Electrical and Electronics Engineering Ho Chi Minh City University of Technology
[email protected]
Department of Telecommunication Faculty of Electrical and Electronics Engineering Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper introduces the wireless sensor network routing algorithm AODV, the TCP Socket server, and their application in a Smart Home. The main focus is on a general system which will be a foundation for developing Smart Homes in the future. There are three main parts of the system: the Home Zigbee Network, the TCP Server Center, and the User Application for Android smartphones and personal computers.

Keywords — smart homes, Zigbee, TCP/IP, sockets.

I. INTRODUCTION
Contemporarily, science and technology are applied widely and practically to improve human living standards. In numerous fields of technology, researchers are giving attention to helping people have a more convenient and more efficient life. Therefore, there is a huge demand for systems which can support people in controlling their houses effectively. Projects which focus on building houses with the application of high technology are named "Smart Home Projects". A general Smart Home includes four main parts: the home network, the application, the server center, and the user application. The home network includes sensors, electric devices, actuators, and cameras. The application layer integrates the home network and acts as its interface to the server, so that the server center only cares about the data provided by the applications. The server center is responsible for controlling the applications and communicating with the user application. The user application is responsible for receiving requests from users and updating the Smart Home's condition.
Fig. 1. Smart Home’s General structure
This paper will focus on building a basic Smart Home system. There are three mainly discussed parts: the Home Network, the Server Center, and the User Application. Therefore, this paper is organized in the following order. Section II, System Design, describes the solutions and functions of the system our team built. Section III, Home Network, discusses the algorithm for the Home Network, some bugs encountered during development, and their solutions. Section IV, TCP Socket – Server & User Application, concentrates on the framework used in the Server and the User Application; furthermore, there are some instructions for building the Server on a BeagleBone Black.
II. SYSTEM DESIGN
Fig. 2. Smart Home System: user phone/computer — TCP/IP — Home Server (BeagleBone board) — UART — Zigbee-MSP Coordinator — Zigbee-MSP nodes with household devices

Fig. 2 shows the design of the Smart Home System mentioned in the Introduction. The system has three main parts: the Zigbee Network, the Server Center, and the User Application. Zigbee is a high-level communication protocol applied in RF communication that needs low power; it is based on the IEEE 802.15 standard, has its own protocol, is easy to set up, and has an appropriate cost, so Zigbee was chosen for these advantages. The Home Network takes responsibility for capturing and updating data for the Server, such as light and temperature, and for receiving commands from the Server to control electric devices, such as a lamp or fan. In this system, the Zigbee Network is routed with the AODV algorithm. The Server Center receives and analyzes the data coming from the Zigbee Network; in addition, it is responsible for communicating with users connecting to the Server. The Server is programmed in Java, and the connection between the Server and the User Application uses the TCP Socket communication standard. Furthermore, the mini server running the Server software is a BeagleBone Black. The User Application is an application on an Android smartphone or a personal computer; it provides the friendliest environment for users to update the house's condition or control the house's devices.

III. HOME NETWORK
A. Basic Theorem:
AODV is the Ad hoc On-demand Distance Vector routing protocol. An ad hoc network, or Mobile Ad-hoc Network (MANET), is a system of wireless nodes which are free and flexible to form a temporary network. There are two major routing groups for ad hoc networks: proactive and reactive. Proactive routing protocols: the advantage of this group is that at any moment, at any node in the network, transceiver information is always prepared to respond, which reduces delay for applications that need high-speed communication. With these protocols, each node keeps a number of routing tables storing the routing paths to the rest of the nodes. However, the serious drawback of this method is the significant increase in update traffic when there are many nodes or an unbalanced distribution of nodes; furthermore, because each node stores other nodes' data, processing is possibly slowed. Reactive routing protocols: reactive protocols only build routes to nodes in the network when necessary. Each node controls its routing table, in which the routing paths to relevant nodes are stored. When a routing path is required from the application layer, the routing process is started if there is no entry for this path in the routing table. The characteristics of the reactive protocols are: path discovery, path maintenance, and path deletion. The path-finding process depends on request and reply cycles: a source node that needs to send data issues a path request and broadcasts it to the network; at least one node receives the request and replies toward the coordinator node. The maintenance process depends on the path's condition: if a path is not used for a period, or is disconnected, the routing step is done again, and in many cases the transmitting/receiving path is deleted if unused. AODV is a reactive protocol with these components: the RREQ package, the RREP package, and the RRER package.

B. AODV:
RREQ (Route Request)

| TYPE | ID | Destination IP | Destination Sequence | Original IP | Original Sequence |
Where:
TYPE = 0x01 (1 byte): identifies an RREQ package.
ID (1 byte): marks the order number of the RREQ, to avoid infinite re-broadcast.
Destination IP (2 bytes): the address of the required node.
Destination Sequence (2 bytes): the sequence value of the required node.
Original IP (2 bytes): the address of the requesting node.
Original Sequence (2 bytes): the sequence value of the requesting node.
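Using the field sizes listed above (1-byte TYPE and ID, 2-byte addresses and sequence numbers), an RREQ package fits in 10 bytes. The big-endian byte order in this sketch is an assumption, since the paper does not state one:

```python
import struct

RREQ_FMT = ">BBHHHH"   # TYPE, ID, Dest IP, Dest Seq, Orig IP, Orig Seq

def pack_rreq(rreq_id, dest_ip, dest_seq, orig_ip, orig_seq):
    """Build an RREQ package (TYPE = 0x01) with the paper's field layout."""
    return struct.pack(RREQ_FMT, 0x01, rreq_id, dest_ip, dest_seq,
                       orig_ip, orig_seq)

def unpack_rreq(data):
    """Decode an RREQ package, checking the TYPE field first."""
    typ, rreq_id, dest_ip, dest_seq, orig_ip, orig_seq = \
        struct.unpack(RREQ_FMT, data)
    assert typ == 0x01, "not an RREQ package"
    return rreq_id, dest_ip, dest_seq, orig_ip, orig_seq

pkt = pack_rreq(rreq_id=7, dest_ip=0x0005, dest_seq=3,
                orig_ip=0x0001, orig_seq=12)
print(len(pkt))            # -> 10
print(unpack_rreq(pkt))    # -> (7, 5, 3, 1, 12)
```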
RREP (Route Reply)

| TYPE | Destination IP | Destination Sequence | Original IP |

Where:
TYPE = 0x02 (1 byte): identifies an RREP package.
Destination IP (2 bytes): the address of the required node.
Destination Sequence (2 bytes): the sequence value of the required node.
Original IP (2 bytes): the address of the requesting node.

RRER (Route Error)

| TYPE | Original IP | Destination IP 1 | Status | Destination IP 2 | Status | ... | Destination IP n | Status |

Where:
TYPE = 0x03 (1 byte): identifies an RRER package.
Unreached Destination IP (2 bytes): the address of the unreachable node.
Original IP (2 bytes): the address of the requesting node.

Routing Table: each node keeps a routing table whose entries store Destination IP, Destination Sequence, Next Hop, and ID for every known destination (Destination IP 1 ... Destination IP n).

C. AODV Programming Flowchart:
RREQ handling: when a node receives an RREQ, it first updates the reverse route. If the Original IP is not in the routing table, a new entry is created (Destination IP ← Original IP, Node Sequence ← Original Sequence, Next Hop ← previous node); if it is, and the stored Node Sequence is lower than the Original Sequence, the entry's Next Hop is updated to the previous node. If the same RREQ (Original IP & ID) has already been received, it is discarded. Otherwise, if the node's own IP equals the Destination IP, the node is the destination: it increments its own sequence number (resetting its sequence and routing table when the sequence reaches 65535) and sends an RREP. If the node instead knows a route to the Destination IP, it also sends an RREP; otherwise it forwards the RREQ.

Fig. 3. RREQ package flowchart

RREP handling: when a node receives an RREP, it checks the Destination IP in its routing table, creating a new entry or updating the existing one (Node Sequence ← Destination Sequence, Next Hop ← previous hop) when the received sequence is fresher. If the node's own IP equals the Original IP, the route has been found and data can be sent to the Destination IP; otherwise the RREP is forwarded along the reverse route toward the originator.

Fig. 4. RREP package flowchart
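The RREQ handling of Fig. 3 (reverse-route creation, duplicate suppression, and the destination test) can be sketched in a few lines. The dictionary-based routing table and return values here are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical node state: routing table maps destination IP ->
# (sequence, next_hop); 'seen' records (orig_ip, rreq_id) pairs.
routing_table = {}
seen = set()
OWN_IP = 5

def handle_rreq(rreq_id, dest_ip, orig_ip, orig_seq, prev_node):
    """Return 'discard', 'reply' (send RREP), or 'forward'."""
    # Reverse route: create or refresh the entry back to the originator.
    entry = routing_table.get(orig_ip)
    if entry is None or entry[0] < orig_seq:
        routing_table[orig_ip] = (orig_seq, prev_node)
    # A duplicate RREQ (same originator and ID) is discarded.
    if (orig_ip, rreq_id) in seen:
        return "discard"
    seen.add((orig_ip, rreq_id))
    # This node is the destination, or already knows a route to it.
    if dest_ip == OWN_IP or dest_ip in routing_table:
        return "reply"
    return "forward"

print(handle_rreq(1, dest_ip=9, orig_ip=2, orig_seq=4, prev_node=3))  # -> forward
print(handle_rreq(1, dest_ip=9, orig_ip=2, orig_seq=4, prev_node=3))  # -> discard
print(handle_rreq(2, dest_ip=5, orig_ip=2, orig_seq=5, prev_node=3))  # -> reply
```

Sequence-number overflow handling (the 65535 reset in the flowchart) is omitted for brevity.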
IV. TCP SOCKET – SERVER & USER APPLICATION
A. Home Server Center Design:
The general process: first, the HomeAutoServer initiates two objects, HomeAutoSocketHandler and HomeAutoUartHandler. Next, each object initiates a HomeAutoDataStream or HomeAutoUartStream to control the input and output data between the Server and the User Application, and between the Server and the Zigbee Network. After that, through HomeAutoDataStreamListener and HomeAutoUartStreamListener, the data-controlling objects bring received data to the objects that control the connections.

Fig. 5. Server Center general process
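The handler/stream/listener structure above might be sketched as follows. The class names echo the paper's (HomeAutoSocketHandler, HomeAutoDataStream), but the bodies are illustrative assumptions, and the real server is written in Java rather than Python:

```python
class HomeAutoDataStream:
    """Controls I/O data and pushes received data to a registered listener
    (the role played by HomeAutoDataStreamListener in the paper)."""
    def __init__(self):
        self.listener = None

    def on_data(self, data):
        # Deliver incoming data to whoever registered as listener.
        if self.listener is not None:
            self.listener(data)

class HomeAutoSocketHandler:
    """Controls the connection; the server creates one per transport
    (one for sockets, one for UART in the paper's design)."""
    def __init__(self):
        self.received = []
        self.stream = HomeAutoDataStream()
        # Bring received data back to the connection-controlling object.
        self.stream.listener = self.received.append

handler = HomeAutoSocketHandler()
handler.stream.on_data("TEMP:27")   # e.g. a reading arriving from the network
print(handler.received)  # -> ['TEMP:27']
```

The same listener pattern serves both directions: the UART handler would register its own listener on a HomeAutoUartStream.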
B. User Application:
The User Application has the same framework as the Server Center. Thus, the most important thing in the User Application is a user-friendly interface.
The User Application flow is shown in Fig. 6: at START the application opens a socket to the Server and tries to log in; on success it enters Listen Mode. When a control action comes from the user (1), the application checks that the socket is open (reopening it if necessary) and writes the control command to the Server. When data is received from the Server (2), the data is processed. If the connection times out, the socket is closed.

Fig. 6. User Application general process

ACKNOWLEDGMENT
Great thanks to our advisors, Dr. Trương Quang Vinh and Mr. Tạ Trí Nghĩa, lecturers of the Department of Electronics Engineering, for their invaluable dedication and suggestions on our graduation thesis. We also want to send our thanks to the "Pay It Forward Research Club" for this precious Student Research Conference.

REFERENCES
[1] An Overview of Mobile Ad Hoc Networks: Applications and Challenges.
[2] ZigBee for Wireless Networking.
[3] Elliotte Rusty Harold, Java™ Network Programming, Third Edition. O'Reilly Media, 2005.
[4] Ad hoc On-Demand Distance Vector (AODV) Routing. Available: http://www.ietf.org/rfc/rfc3561.txt
[5] Java SE Tutorial. Available: http://docs.oracle.com/javase/tutorial/index.html
[6] BeagleBone Black System Reference Manual. Available: http://elinux.org/Beagleboard:BeagleBoneBlack#LATEST_PRODUCTION_FILES_.28A6.29
Display video on LED Matrix RGB 64x128 using kit DE0-Nano and BeagleBone Black Thanh-Phong Do Department of Electronics Faculty of Electrical and Electronic Engineering Ho Chi Minh City University of Technology
[email protected]
Abstract – This paper presents the design, implementation and experimental results of an LED matrix display on a powerful hardware combination of the DE0-Nano FPGA and the BeagleBone Black (BBB) platform. The input video is processed with the OpenCV library embedded on the BBB kit, and the output data is used to update the internal RAM of the DE0-Nano before displaying on the LED matrix module.

Keywords – Computer Vision, RGB LED, video display, video processing.
I. INTRODUCTION
Nowadays, LEDs are becoming more and more familiar in daily life. Outdoors, it is very easy to see LED matrix display panels used for decoration or advertisement. Low power consumption as well as numerous colorful effects promise continued development of this field in the future. Before this paper, there were several researches in this field, covering two types of LED modules. The first is the advertising display used to show characters on a screen; these modules use monochromatic LEDs, so they can only produce simple effects and cannot display video. The second is used to present video or rich visual effects on musical stages; these modules require a PC to process the video, so they are not a good choice for mobile and small applications. Consequently, this paper depicts a new method to display video on an LED matrix module which overcomes the drawbacks of the mentioned methods.
II. BACKGROUND THEOREM

A. RGB LED Matrix Module
Each RGB LED matrix module has two controlled blocks, as in Figure 1, with a resolution of 32x32 pixels. To obtain a resolution of 64x128, eight modules were used in this paper.

Figure 1. Block diagram of LED Matrix RGB module

Each controlled block of the module is responsible for 16 rows and 32 columns. It contains two blocks, Row Control and Column Control. Row Control selects the row to be displayed on the screen by choosing the MOSFET that supplies power to it; at any moment only one row is selected. At this time, data is shifted into the shift-register ICs of the Column Control block to display the RGB values of the selected row. After that, the next row is chosen, until all rows have been displayed. The refresh rate is defined as the frequency at which the whole screen is shown. Since only one row is displayed at a time, the whole screen must be swept continuously at high speed; at a sufficiently high refresh rate the human eye cannot perceive the sweeping process and sees a static screen. The higher the refresh rate, the smoother the video displayed on the screen.

Figure 2. Block diagram of shift register IC CYT62726

Figure 2 shows the block diagram of one shift-register IC contained in the Column Control block. On each CLK, data is shifted into the buffered register of the IC through the SDI input. When all the data has been shifted in, LatchB is asserted to push the buffered register's contents to the outputs of the IC. Each IC has a 16-bit buffered register inside, so surplus data is shifted out through the SDO output; this characteristic can be used to cascade modules for higher resolution. A 32x32 LED matrix RGB module contains six CYT62726 ICs; each pair of ICs is responsible for controlling one color of the 32 LEDs in a row.

B. Programmable Real-Time Unit (PRU)
The PRU is another core of the BBB, alongside its main ARM core. The reasons the PRU was chosen are its ability to be programmed at low level and its timer. The BBB needs Ubuntu to use the OpenCV library for processing the video, and Ubuntu only allows programming at a high level, so interface standards like UART or SPI cost too much time. That is not suitable for transmitting pixel data to the FPGA system, because this process must operate at high speed to sustain the frame rate. The PRU can only be programmed in assembly, and each assembly instruction executes at 200 MHz, so the frame-rate issue is easily overcome. Moreover, the PRU provides a timer which guarantees that the rate of picture changes on the screen follows the frame rate of the video.

C. PWM Displayed Method
Pulse Width Modulation (PWM) is a method in which various average voltages are obtained by adjusting the pulse width within one period to make different duty cycles. The higher the duty cycle, the higher the average voltage. Thus, instead of only two logic levels, many levels can be achieved in the range from 0 V to the supply voltage.

Figure 3. Pulse width modulation

LED displays use this method to control the brightness of the LEDs. In an RGB LED, each LED is a combination of three individual red, green, and blue LEDs that form one pixel; by controlling the brightness of the three LEDs, the desired color is reached. To implement the PWM display method, a period is created whose pulse width is adjustable. 16-bit RGB data is used for one pixel of the LED matrix module (5 bits for red, 6 bits for green and 5 bits for blue); thus the voltage is split into 63 levels and each row is swept 63 times. During the sweeping process, the pixel data value is compared with the sweep-time counter: if the pixel data is equal to or higher than the counter value, 0 is shifted into the shift-register IC, and vice versa. After one row has been displayed 63 times, the next row is selected and the process continues.

III. OVERALL SYSTEM
Figure 4. Overall system block diagram
Figure 4 shows the four components of the system: the PC, the BBB, the FPGA system, and the LED matrix. The PC is connected to the BBB through USB and to the FPGA system through the USB Blaster standard; it is in charge of programming and debugging these two systems. The BBB processes the input video to update the internal RAM of the FPGA system. The FPGA system loads pixel data from the internal RAM, as updated by the BBB, and displays it on the LED matrix module. The LED matrix module is the screen on which the video is displayed.
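The per-pixel PWM comparison of Section II.C (16-bit RGB data split 5-6-5, 63 sweep passes per row, with 0 shifted into the register while the pixel value is at or above the sweep counter) can be sketched as follows; the counter range is an assumption matching the 63 levels described:

```python
def rgb565_split(pixel):
    """Split a 16-bit pixel into 5-bit red, 6-bit green, 5-bit blue."""
    return (pixel >> 11) & 0x1F, (pixel >> 5) & 0x3F, pixel & 0x1F

def shift_bit(component, sweep_counter):
    """Per the paper: shift 0 (LED driven) while the pixel value is equal to
    or higher than the sweep counter, else shift 1."""
    return 0 if component >= sweep_counter else 1

r, g, b = rgb565_split(0xF800)   # pure red in 5-6-5 format
print((r, g, b))                 # -> (31, 0, 0)

# Over the 63 sweep passes, red (value 31) is driven during the first 31.
on_counts = sum(1 for sweep in range(1, 64) if shift_bit(r, sweep) == 0)
print(on_counts)  # -> 31
```

The duty cycle of each sub-LED is thus proportional to its component value, which is what produces the intermediate brightness levels.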
IV. DE0-NANO IMPLEMENTATION
A. The Overall Block Diagram Of FPGA System
Figure 5. Block diagram of overall FPGA system

Based on Figure 5, the FPGA system takes signals from the BBB as its input and outputs the control signals for the LED matrix module.
B. The Detailed Block Diagram Of FPGA System
Figure 6. Block diagram of FPGA system in detail

Figure 6 shows the block diagram of the FPGA system in detail. The system contains five major blocks: the BBB interface, the Control block, the LED matrix interface, and two Rambuffers. The two Rambuffers have opposite functions at any moment: if Rambuffer 1 is in read mode, Rambuffer 2 is in write mode, and vice versa. In read mode, the Rambuffer is controlled by FSMread, located in the LED matrix interface block, to load the RGB values of the pixels inside it, convert them to gamma, and display them on the screen. Gamma is the PWM value of the LED; since the PWM value and the pixel data are not linearly related, a conversion table in the Convert block handles this. Gamma values are compared with the sweep counter in the Compare block to create the output signals shifted into the shift-register ICs. The shifting, latching, and output-enable control processes are also under the control of FSMread. When the whole screen has been displayed, FSMread issues a readstrobe to the Read/Write Control block. In write mode, the Rambuffer is controlled by the WriteControl module in the BBB interface block, which interacts with the BBB to update the values of this Rambuffer. The process follows these steps: at the beginning of a write, MCUwrite is cleared to 0 to notify the FPGA system that the write process occurs. Then one pixel value is stored at each Rambuffer address on each MCUstrobe; MCUstrobe is also used to increment the address register in the WriteControl block so that data is not aliased. When the write process completes, MCUwrite is set to 1; this MCUwrite signal also acts as a writestrobe notifying the Read/Write Control block that the write has ended. The Read/Write Control block is in charge of interchanging the functions of the two Rambuffers when data has been updated completely. It operates as a state machine: at state 0, a negative edge of writestrobe (a write begins) switches it to state 1; at state 1 it waits for a positive edge of writestrobe (the write ends) and then switches to state 2; next, on a readstrobe notification from FSMread, it changes to state 3, the state that interchanges the Rambuffers' functions; finally it returns to state 0 after one system clock. The Address Control block distributes the addresses generated by WriteControl and FSMread to their proper Rambuffers.

V. BEAGLEBONE BLACK IMPLEMENTATION
As mentioned above, the BeagleBone Black implementation includes two stages: the first is the code for the main ARM core, which processes the input video; the second is programming the PRU, whose function is interfacing with the FPGA system to update each new frame's data.
A. Data Flow Of ARM Core
Figure 7 shows the data flow of the BeagleBone Black's ARM core. The PRU and ARM core interact through the PRU's memory. PRU0's memory, PRU1's memory, and the shared memory are 8 KB, 8 KB, and 12 KB respectively. For convenience of the storing process, the first 2048 pixels of data (4 KB) are stored in PRU0's memory, the next 2048 pixels (4 KB) in PRU1's memory, and the last 4096 pixels (8 KB) in the shared memory. At the beginning of the program, the essential variables are declared: variables used to control loops or store computed data (like frame rate and pixel data). The PRU is then activated and the frame rate is read from the input video to calculate the PRU's initial timer value, which is transmitted to PRU0's memory right after the address of the first 2048 pixels of data. The infinite loop exits when there is no frame left to capture; afterwards the PRU is deactivated and the program ends.
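The memory partitioning described above (2048 pixels in each PRU data RAM and 4096 in shared memory, with 16-bit pixels) can be checked with a small sketch; the region names are descriptive labels, not the AM335x memory map:

```python
BYTES_PER_PIXEL = 2          # 16-bit RGB pixel data
FRAME_PIXELS = 64 * 128      # 8192 pixels per frame

# Partition of one frame across the three PRU memory regions, as in the text.
regions = {
    "PRU0 data RAM": 2048,
    "PRU1 data RAM": 2048,
    "shared memory": 4096,
}

for name, pixels in regions.items():
    print(f"{name}: {pixels * BYTES_PER_PIXEL // 1024} KB")
# -> PRU0 data RAM: 4 KB, PRU1 data RAM: 4 KB, shared memory: 8 KB

# The whole 64x128 frame fits exactly across the three regions.
assert sum(regions.values()) == FRAME_PIXELS
```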
The main loop captures a frame from the input video and processes it: resizing the frame to the required resolution (64x128), storing the data to the PRU's memory, and waiting for the PRU's interrupt. The PRU's interrupt occurs when it has finished transmitting all the pixel data of the frame to the FPGA system; the ARM core then clears the interrupt and continues capturing the next frame.

Figure 7. Data flow of BeagleBone Black's ARM core

B. Data Flow Of PRU

Figure 8. Data flow of BeagleBone Black's PRU

Figure 8 shows the data flow of the PRU. First, the PRU timer is initialized with the value received from the ARM core. After that, MCUwrite, a BBB pin controlled directly by the PRU, is cleared to 0 to begin the updating process. The r6 register then controls the memory loading process: if r6 equals 0, 1, or 2, pixel data is loaded from PRU0's memory, PRU1's memory, or the shared memory respectively. One MCUstrobe is generated after each pixel is loaded from memory, writing that pixel to the FPGA system's internal RAM. When r6 equals 3, meaning all pixel data has been loaded, MCUwrite is set to 1 to end the updating process and an interrupt is sent to the ARM core. Finally, the PRU waits for the timer to overflow before updating the new frame; during this wait, the ARM core is capturing and processing the next frame.
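The PRU data flow in Fig. 8 (r6 selecting the memory region, one MCUstrobe per pixel, MCUwrite framing the update) can be modelled roughly as follows; this is an illustrative Python model of the control flow, not PRU assembly:

```python
# Simplified model of one frame update by the PRU (Fig. 8).
REGION_SIZES = [2048, 2048, 4096]   # PRU0 RAM, PRU1 RAM, shared memory (pixels)

def update_frame():
    strobes = 0
    mcu_write = 0                    # cleared to 0: update in progress
    for r6 in range(3):              # r6 = 0, 1, 2 selects the memory region
        for _pixel in range(REGION_SIZES[r6]):
            strobes += 1             # one MCUstrobe per pixel written to FPGA RAM
    mcu_write = 1                    # r6 == 3: update done, interrupt the ARM core
    return strobes, mcu_write

strobes, done = update_frame()
print(strobes)  # -> 8192 (one strobe per pixel of the 64x128 frame)
print(done)     # -> 1
```

The timer-overflow wait between frames is omitted; it only paces the loop to the video's frame rate.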
VI. EXPERIMENTAL RESULT
Figure 9. FSMread waveform
From this simulation, the refresh rate can be obtained by the following calculation. A row needs 128 shift clocks to be swept once and 6 system clocks for the latch and output-enable control signals; one row must be swept 63 times and 16 rows have to be swept to make one refresh cycle, so the overall number of system clocks needed is (128 x 2 + 6) x 63 x 16 = 266112 clocks. The system clock used was 50 MHz, so the refresh rate is 50 MHz / 266112 ≈ 188 Hz.
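The calculation above can be verified directly; the clock count and the 50 MHz system clock are the values given in the text:

```python
# Values from the text: one refresh needs 266112 system clocks at 50 MHz.
CLOCKS_PER_REFRESH = 266112
SYSTEM_CLOCK_HZ = 50_000_000

refresh_rate = SYSTEM_CLOCK_HZ / CLOCKS_PER_REFRESH
print(round(refresh_rate, 1))   # -> 187.9 (Hz), i.e. nearly 190 Hz
```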
This result is rather higher than the frequency at which the human eye can detect the sweeping process, so it is a very acceptable result.

VII. DISCUSSION

After considering the result carefully: besides many advantages, such as the quick refresh rate (nearly 190 Hz), error-free video display, and close adherence to the initial requirements of the product, some minor disadvantages remain. For instance, the resolution is small, so small details of the video cannot be displayed effectively. Based on this result, further research can be implemented to improve the limited resolution. Moreover, a specialized video transmission standard could be used to improve the interface between the BeagleBone Black and DE0-Nano kits.

ACKNOWLEDGEMENT
A special thanks to Mr. Bui Quoc Bao, my supervisor and lecturer for the graduation thesis at the university, for his helpful guidance and advice. Thanks to Mr. Do Thai San, my brother, who helped me optimize my researched product.

REFERENCES
[1] Silence-MusicofNature, "Playing with the Beaglebone" [Online]. Available: http://vidyutshastra.blogspot.com/2013/02/playing-with-beaglebone.html, Feb. 2, 2013.
[2] Derek Molloy, "boneDeviceTree" [Online]. Available: https://github.com/derekmolloy/boneDeviceTree, July 1, 2013.
[3] Shabaz, "BBB - Working with the PRU-ICSS/PRUSSv2" [Online]. Available: http://www.element14.com/community/community/knode/single_board_computers/next-gen_beaglebone/blog/2013/05/22/bbb-working-with-the-pru-icssprussv2, May 22, 2013.
[4] Owen, "Understanding BBB PRU shared memory access" [Online]. Available: http://www.embedded-things.com/bbb/understanding-bbb-pru-shared-memory-access/, Aug. 27, 2013.
[5] Community BBB support, "AM335x PRU-ICSS Reference Guide" [Online]. Available: https://github.com/beagleboard/am335x_pru_package/blob/master/am335xPruReferenceGuide.pdf, June 1, 2013.
[6] John Clark, "BeagleBone Black" [Online]. Available: http://www.armhf.com/index.php/boards/beaglebone-black/#precise, Apr. 26, 2013.
[7] John Clark, "Expanding Linux Partitions: Part 2 of 2" [Online]. Available: http://www.armhf.com/index.php/expanding-linux-partitions-part-2of-2/, May 11, 2013.
I/O Minimizing by Multiplexing Touch Feedback on Capacitive Sensor Tuan-Vu Ho Department of Electronics Faculty of Electrical and Electronics Engineering Ho Chi Minh City, Vietnam
[email protected]
Abstract — This paper presents the basic principle of capacitive sensing and an experimental sensor geometry for a capacitive keypad configuration. A new capacitive touch keypad configuration is also proposed, with the feedback LED multiplexed on the same pin as the capacitive sensor in order to reduce the number of microcontroller I/Os. The measurement results of this new configuration are compared with the traditional configuration to show the feasibility of the method.
Keywords — Capacitive sensor, capacitance measurement

I. INTRODUCTION

Capacitive touch sensing technology has become increasingly popular, with various applications in recent years. One of the lowest-cost and easiest ways to implement a capacitive touch device is the PCB-based capacitive sensing method integrated into some members of the MSP430 microcontroller family. The capacitive sensing principle is briefly described in Section 2 of this paper. Capacitive sensing is found in many Human-Machine Interface (HMI) applications. A feedback signal is an essential part of an HMI system, used to indicate the user's contact. Types of feedback include haptic, sound, and visual. Visual feedback by LEDs is the most common choice due to its simple implementation. The traditional approach is to connect the feedback LED to a separate I/O pin; consequently, at least twice as many I/Os are needed as for non-feedback capacitive sensors. Thus, in some multiplexed configurations, e.g. a matrix keypad or slider, the number of LEDs would exceed the available I/Os of the microcontroller. A better configuration multiplexes the LED feedback on the same pin as the capacitive sensor, meeting the requirement without moving to another microcontroller. Our work proposes a method for multiplexing the LED feedback on a capacitive touch sensor. An experimental device developed to illustrate this method is described in Section 4.

II. CONSTRUCTION AND OPERATING PRINCIPLE

A. PCB-based capacitive sensing principle
The PCB-based capacitive electrode simply consists of a copper pad that forms an open capacitor structure with surrounding elements, as shown in Figure 1. The base capacitance of such an electrode is in the range of ~10 pF for a finger-sized sensor. When a conductor, such as a finger, comes into the area above the open capacitor, the electric field is disturbed, causing the resulting capacitance to change. By continuously measuring the capacitance of this sensor, the microcontroller can determine not only an on/off button function but also the distance from the sensor to the nearby object [1].
Fig 1. Copper pad acting as an open capacitor with surrounding elements. The finger disturbs the electric field of the capacitor, causing its capacitance to change. *This figure is quoted from [1]
B. Capacitive sensing with the MSP430 Value Line microcontroller
The MSP430 Value Line series provides a unique PinOsc GPIO feature specifically designed for capacitive touch sensing, along with integrated 16-bit Timer0_A3 and Timer1_A3 modules for timing control of the measurement process and scan interval, permitting low-power, low-cost, flexible capacitive touch solutions. The internal structure of the PinOsc GPIO of the MSP430 Value Line series is shown in Figure 2, with the capacitive sensor acting as a variable capacitor at the input pin. The frequency of the output signal from the PinOsc module is related to the input sensor capacitance; in particular, the output frequency decreases as the input sensor capacitance increases. This frequency variation can be measured by a timer in capture mode to detect whether a finger has touched the sensor. The hardware configurations and measuring methods for the PinOsc GPIO are described in detail in [3].
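The relationship between sensor capacitance and captured count can be illustrated with a toy model. The constants below (gate time, oscillator constant, capacitance values) are purely illustrative, not MSP430 datasheet figures; the point is only that the captured count falls as the sensor capacitance rises.

```python
def measure_count(c_sensor_pf, gate_time_s=0.001, k=1.0e5):
    """Toy PinOsc-style model: oscillator frequency ~ 1/(k*C); a timer
    counts oscillator cycles during a fixed gate window."""
    f_osc = 1.0 / (k * c_sensor_pf * 1e-12)  # oscillation frequency in Hz
    return int(f_osc * gate_time_s)          # cycles captured in the gate window

base = measure_count(10.0)     # untouched electrode, ~10 pF base capacitance
touched = measure_count(13.0)  # a finger adds a few pF
# The count drops when touched; the drop (delta count) signals the press.
```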
Fig 2. Internal structure of the PinOsc module of the MSP430G2xxx. *This figure is quoted from [2]

C. Multiplexing capacitive touch keypad configuration

Figure 3 shows the schematic and the actual layout of the multiplexed capacitive touch keypad. Each button of the keypad is a combination of two capacitive sensors placed very close together. Therefore, a valid key is detected not from the change in capacitance of a single electrode but from that of two electrodes at the same time. When a key is multiplexed, the contact area for each electrode is reduced at least by half; consequently, sensitivity is reduced in the same proportion [4]. While one electrode is being measured, all others should be tied to the source or ground to reduce cross-talk between electrodes.

Fig 3. Schematic of the capacitive keypad without feedback LED.

In Figure 4, the recommended layout for multiplexing a key is shown. The preferred size is the size of a finger. This geometry is preferred because the area of the key is maximized; in other words, only a small amount of area is lost when merging the two electrodes. In addition, with this layout, if the finger touch shifts slightly horizontally or vertically within the key area, the two electrodes that create the logical combination are affected in the same proportion [4].

Fig 4. Geometry of a button of the capacitive keypad. This geometry maintains accuracy when the finger touch shifts slightly horizontally or vertically.

D. Multiplexing the feedback LEDs on the keypad

In Figure 5, a simple method is proposed to multiplex the feedback LEDs with the keypad electrodes. The feedback LED is tied between the two sensors of a button through a high-speed switching diode (e.g. a 1N4148) and a resistor.

Fig 5. Schematic of the capacitive keypad with feedback LED multiplexed with the button sensors.

At high frequency, the LED would act as a short circuit due to its junction capacitance. Therefore, a high-speed switching diode is connected in series with the LED to eliminate cross-talk between the two sensors and reduce the parasitic effect of the LED. Before each measurement, the LEDs must be reverse biased by connecting the opposite electrode to the source or ground. After the measurement process has completed, the feedback LED can be turned on if the corresponding button has been touched. The measurement sequence is described in more detail in Section 3. The layout strategy for this configuration is shown in Figure 6.
Fig 6. Layout strategy for the keypad with multiplexed feedback LED (2-layer routing).

III. MEASUREMENT STRATEGY

The measurement method for a capacitive keypad with multiplexed feedback LEDs is slightly more complicated than the traditional method.

A. Measurement sequence
The capacitive keypad measurement algorithm flowchart for detecting a valid touch is shown in Figure 7.

Fig 7. Measurement sequence flowchart

B. Processing algorithm
After the measurement sequence has been taken, the collected data are processed to determine which button has been touched and to adjust the base capacitance of each sensor. Figure 8 shows the processing algorithm flowchart for the capacitive keypad. The collected data are passed through an IIR low-pass filter before being compared with the base count. If the difference between the measured count and the base count is greater than the threshold, the sensor is considered touched. If the delta count is below the threshold, the variation is considered to be caused by the environment, and the base capacitance of the sensor should be adjusted. After all button states have been determined, the corresponding feedback LEDs can be turned on to indicate a valid touch to the user.

Fig 8. Processing algorithm flowchart

C. Base capacitance auto-tracking algorithm
The base capacitance is the capacitance of a given sensor when untouched by the user. Voltage stability, PCB mechanics, insulator properties, and ambient conditions such as temperature and proximity to other objects all play an important role in the base measurement of a PCB-based capacitive sensor. Without the ability to dynamically track the variation in base capacitance, instability can result in false press detection or stuck-key behavior.
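The filter / compare / re-track flow described in this section can be sketched per sensor as follows; the threshold, filter coefficient, and adaptation rate are illustrative values, not the ones used in the actual firmware.

```python
THRESHOLD = 100     # minimum count drop treated as a touch (example value)
FILTER_BETA = 0.25  # IIR low-pass coefficient (example value)
BASE_ALPHA = 0.01   # slow adaptation rate for base-count tracking (example)

class TouchChannel:
    """Per-sensor state: IIR-filter the raw count, compare against a
    slowly tracked base count, re-track the base when below threshold."""
    def __init__(self, base_count):
        self.base = float(base_count)
        self.filtered = float(base_count)

    def update(self, raw_count):
        # IIR low-pass filter on the raw measurement
        self.filtered += FILTER_BETA * (raw_count - self.filtered)
        delta = self.base - self.filtered   # the count drops when touched
        if delta > THRESHOLD:
            return True                     # valid touch -> light feedback LED
        # Below threshold: treat the drift as environmental and re-track base.
        self.base += BASE_ALPHA * (self.filtered - self.base)
        return False
```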
Fig 9. Base capacitance tracking algorithm

The algorithm shown in Figure 9 is a simple method to maintain a dynamic base capacitance for each sensor. The algorithm must track the variation of the base capacitance caused by the environment while rejecting the variation caused by the user's touch.

IV. EXPERIMENTAL SETUP

Fig 10. Hardware configuration used for testing, with a 3x3 keypad

Figure 10 shows a universal remote that contains a 3x3 capacitive keypad with the feedback LEDs multiplexed with the capacitive sensors, a 6-element wheel, an IR LED, and an IR receiver. The feedback LEDs are soldered upside down to emit light through the PCB. The remote is capable of capturing the IR signal from other remotes and storing the collected data in flash. Hence, this remote can be programmed to control various devices such as TVs, DVD players, satellite receivers, and projectors.

V. RESULTS AND DISCUSSION

Table I shows the measured counts of the capacitive keypad without and with the feedback LED. It is clearly seen that the effect of the feedback LED is quite small when the switching diode is connected in series with it.

TABLE 1. RAW MEASURED COUNTS OF A SENSOR IN DIFFERENT CONFIGURATIONS

Configuration                                   No touch (base count)   Touch   Delta
Keypad with no LED feedback                     1096                    212     884
Keypad with LED feedback, no switching diode    233                     146     87
Keypad with feedback LED and switching diode    1045                    208     837

VI. CONCLUSION AND FUTURE WORK

In this paper, a new method of multiplexing touch feedback on a capacitive I/O is proposed, and it has shown very promising results. The feedback LEDs have only a minor effect on the measured data, which can be ignored. This technique can enable accurate, fast capacitive sensing while reducing product cost. In the future, capacitive touch will become the preferred choice in many Human-Machine Interface applications. Therefore, it is necessary to develop designs that are not only robust and optimal but also low-cost and low-power, to meet forthcoming requirements.

ACKNOWLEDGEMENTS
This research is part of the Scientific Research Club project. The author gratefully acknowledges financial support from M.Eng. Ho Thanh Phuong and support from the Scientific Research Club.

REFERENCES
[1] Zack Albus, "PCB-Based Capacitive Touch Sensing With MSP430," Texas Instruments application report SLAA363A, Oct. 2007.
[2] "Capacitive Touch Sensing, MSP430™ Button Gate Time Optimization and Tuning Guide," Texas Instruments application report SLAA574, Jan. 2013.
[3] "MSP430G2xxx User's Guide," Texas Instruments, Jan. 2012.
[4] Oscar Camacho and Eduardo Viramontes, "Designing Touch Sensing Electrodes," application note AN3863, Jul. 2011.
Moving Object Detection in Traffic Scene Thanh-Hue Nguyen-Thi Department of Telecommunications Faculty of Electrical and Electronics Engineering HCM City University of Technology Ho Chi Minh City, Vietnam
[email protected]
Abstract — Detecting regions that correspond to moving objects such as people and vehicles in video is the first basic step of almost every surveillance system, since it provides a focus of attention and simplifies subsequent analysis steps. Due to dynamic changes in natural scenes, such as sudden illumination and weather changes and repetitive motions that cause clutter (tree leaves moving in the wind), reliable motion detection is a difficult problem. Frequently used techniques for moving object detection are temporal differencing and optical flow. The most attractive advantage of these algorithms is that they do not need to learn a background model from hundreds of images and can handle quick image variations without prior knowledge of object size and shape. They have a high capability of anti-interference while preserving high detection accuracy, and they demand less computation time than other methods, suiting real-time surveillance. The effectiveness of these algorithms for motion detection is demonstrated in a simulation environment, and the evaluation results are reported in this paper.

Keywords — motion detection, optical flow, temporal differencing

I. INTRODUCTION
In recent years, motion detection has attracted great interest from computer vision researchers due to its promising applications in many areas, such as video surveillance [7], traffic monitoring, and sign language recognition. Although the existing techniques have undeniable advantages, moving object detection in complex environments is still far from completely solved: the field is at an early developmental stage and needs improved robustness when applied in a complex environment. Several techniques for moving object detection have been proposed in [8]; among them, the three representative approaches are background subtraction, temporal differencing, and optical flow. The traditional background subtraction method subtracts the background model from the current image. It segments foreground objects accurately and detects them even when they are motionless. However, traditional background subtraction is susceptible to environmental changes, for example gradual or sudden illumination change, and its result is often contaminated by a large number of erroneous foreground pixels. Its major drawback is that it only works for a static background, so a background model update is required for dynamic background scenes [8]. Another approach is based on temporal differencing, which attempts to detect moving regions using the difference of consecutive frames (two or three) in a video sequence. This method is highly adaptive to dynamic environments, but generally does a poor job of extracting the complete shapes of certain types of moving objects. Optical flow is a velocity field associated with image changes. Most approaches to estimating optical flow rely on brightness changes between two scenes. It can detect motion successfully even in the presence of camera motion or background change, without knowing the background. In this paper, the temporal differencing and optical flow methods are considered due to their simplicity and efficiency. Temporal difference imaging helps detect slowly moving objects, yields better object boundaries, and speeds up the algorithm, while optical flow gives object regions. We assume that an object with salient motion moves in an approximately consistent direction over a period of time. The motion is calculated by differential techniques that compute velocity from spatiotemporal derivatives of image intensity, or of image versions filtered with low-pass or band-pass filters, using the Lucas-Kanade or Horn-Schunck optical flow algorithm.

II.
TEMPORAL DIFFERENCING DETECTION METHOD
A. Using information of three consecutive frames
For this algorithm, the current frame is simply subtracted from the previous frame, and if the difference in pixel values for a given pixel is greater than the threshold, the pixel is considered part of the foreground. Successive images I(x, y, t-1), I(x, y, t), and I(x, y, t+1) are subtracted, and the difference image Idifference(x, y, t-1) is thresholded to get the region of changes:

diff1 = I(x, y, t) - I(x, y, t-1)
diff2 = I(x, y, t+1) - I(x, y, t)
Idifference(x, y, t-1) = (diff1 > Td) or (diff2 > Td)

The threshold Td can be derived from image statistics.

B. Using information of two consecutive frames (I(x, y, t) and I(x, y, t+1)) and the frame difference in the past
In order to detect cases of slow motion or temporarily stopped objects, a weighted coefficient with a fixed weight for the new observation is used to compute the temporal difference image Idifference(x, y, t+1), as shown in the following equations [6]:

Iaccum(x, y, t+1) = (1 - Waccum) · Iaccum(x, y, t) + Waccum · |I(x, y, t+1) - I(x, y, t)|

Idifference(x, y, t+1) = 255 if Iaccum(x, y, t+1) > Td, and Idifference(x, y, t+1) = 0 otherwise,

where the threshold is set as Td = 3 · mean(Iaccum(x, y, t+1)), and Waccum is a real number between 0 and 1 which describes the temporal range of the difference images.
Iaccum is initialized to an empty image. The temporal difference is a simple method for detecting moving objects in a static environment, in which the adaptive threshold Td restrains noise very well. Frame differencing has very low computational cost, so it can be done in real time, and it adapts well to a dynamic background. A challenging task for frame differencing is determining the value of the threshold: different sequences require different thresholds to classify a pixel as foreground or background.

III. OPTICAL FLOW DETECTION METHOD

Optical flow is a concept closely tied to the motion of objects within a visual representation. The goal of optical flow estimation is to compute an approximation to the motion field from time-varying image intensity [6]. Estimating optical flow is useful in pattern recognition, computer vision, and other image processing applications. In this section, two optical flow methods are introduced: the Lucas-Kanade method and the Horn-Schunck method.

A. Lucas-Kanade Method
The Lucas-Kanade algorithm assumes that the intensity values of any given region do not change but merely shift from one position to another:

I(x, y, t) = I(x + δx, y + δy, t + δt)

Lucas and Kanade further assumed that the flow (Vx, Vy) is unchanged within a small window of size NxN. Numbering the pixels in the window as 1…p, a set of equations can be derived:

Ix1·Vx + Iy1·Vy + It1 = 0
Ix2·Vx + Iy2·Vy + It2 = 0
…
Ixp·Vx + Iyp·Vy + Itp = 0        (1)

Fig 1. Optical flow vector corresponding to all window pixels.

The system of equations (1) is over-determined and can be solved by the least mean square (LMS) method to estimate the optical flow vector: (Vx, Vy)ᵀ = (AᵀA)⁻¹Aᵀb, where the i-th row of A is (Ixi, Iyi) and bi = −Iti.
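As a concrete sketch, the per-window least-squares solution can be written with numpy as follows; the gradients here are simple finite differences (not the derivative kernels discussed for the Horn-Schunck method), so this is illustrative rather than a reference implementation.

```python
import numpy as np

def lucas_kanade_window(I1, I2):
    """Estimate one flow vector (Vx, Vy) for a single NxN window by solving
    the over-determined system Ixi*Vx + Iyi*Vy + Iti = 0 in the
    least-squares sense, as in equation (1)."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Iy, Ix = np.gradient(I1)            # spatial gradients (rows = y, cols = x)
    It = I2 - I1                        # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # p x 2 coefficient matrix
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                            # [Vx, Vy]

# A ramp image shifted one pixel to the right should give Vx ~ 1, Vy ~ 0.
I1 = np.tile(np.arange(8.0), (8, 1))
I2 = I1 - 1.0   # same pixel values as shifting the ramp right by one column
```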
The advantage of the method is its accuracy and robust detection in the presence of noise.

B. Horn-Schunck Method
This algorithm is based on a differential technique that combines a gradient constraint (brightness constancy),

I(x, y, t) = I(x + u, y + v, t + 1),

with a global smoothness term to obtain an estimated velocity field [1][2]. There are two main processes in the implementation of the HS algorithm. The first is the estimation of partial derivatives, and the second is the minimization of the sum of errors by an iterative process to produce the final motion vector.

Step 1. Estimation of partial derivatives

Estimation of classical partial derivatives
This section presents the estimation of the classical derivatives of image intensity (brightness) from the image sequence [3]. The brightness of each pixel is assumed constant along its motion trajectory, and the relationship between continuous images in the sequence is used to estimate a gradient constraint. Let I(x, y, t) denote the intensity (brightness) of point (x, y) in the image at time t. For each image pair, Ix, Iy, and It are computed at each pixel:

Ix = ¼ {I(x, y+1, t) − I(x, y, t) + I(x+1, y+1, t) − I(x+1, y, t) + I(x, y+1, t+1) − I(x, y, t+1) + I(x+1, y+1, t+1) − I(x+1, y, t+1)}

Iy = ¼ {I(x+1, y, t) − I(x, y, t) + I(x+1, y+1, t) − I(x, y+1, t) + I(x+1, y, t+1) − I(x, y, t+1) + I(x+1, y+1, t+1) − I(x, y+1, t+1)}

It = ¼ {I(x, y, t+1) − I(x, y, t) + I(x+1, y, t+1) − I(x+1, y, t) + I(x, y+1, t+1) − I(x, y+1, t) + I(x+1, y+1, t+1) − I(x+1, y+1, t)}
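These estimates translate directly into array slicing. The fragment below is a literal numpy transcription of the three formulas, taking axis 0 as the x subscript and axis 1 as the y subscript exactly as printed (an indexing convention assumed for illustration):

```python
import numpy as np

def hs_derivatives(I1, I2):
    """Average the four first differences over the 2x2x2 cube formed by two
    consecutive frames, following the formulas above. Note that, as printed,
    Ix differences the second (y) subscript and Iy the first (x) subscript."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Ix = 0.25 * ((I1[:-1, 1:] - I1[:-1, :-1]) + (I1[1:, 1:] - I1[1:, :-1])
                 + (I2[:-1, 1:] - I2[:-1, :-1]) + (I2[1:, 1:] - I2[1:, :-1]))
    Iy = 0.25 * ((I1[1:, :-1] - I1[:-1, :-1]) + (I1[1:, 1:] - I1[:-1, 1:])
                 + (I2[1:, :-1] - I2[:-1, :-1]) + (I2[1:, 1:] - I2[:-1, 1:]))
    It = 0.25 * ((I2[:-1, :-1] - I1[:-1, :-1]) + (I2[1:, :-1] - I1[1:, :-1])
                 + (I2[:-1, 1:] - I1[:-1, 1:]) + (I2[1:, 1:] - I1[1:, 1:]))
    return Ix, Iy, It
```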
Estimation of partial derivatives with the BFB kernel

Barron et al. (1994) presented a performance evaluation of many optical flow algorithms along with modifications of some of their variants [1]. For gradient estimation they use a kernel of mask coefficients (Fig. 2), the four-point central difference 1/12 · {−1, 8, 0, −8, 1}.

Fig 2. The kernel coefficients of BFB

Step 2. Minimization

In practice, the image intensity or brightness measurement may be corrupted by quantization or noise. The equation for the rate of change of image brightness [3] gives the error term

ε = u·Ix + v·Iy + It = 0,

where u and v are the horizontal and vertical motion vectors of the optical flow. The optical flow is found by iterating to minimize ε. The iterative equations are:

u⁽ᵏ⁺¹⁾ = ū⁽ᵏ⁾ − Ix·(Ix·ū⁽ᵏ⁾ + Iy·v̄⁽ᵏ⁾ + It) / (α² + Ix² + Iy²)
v⁽ᵏ⁺¹⁾ = v̄⁽ᵏ⁾ − Iy·(Ix·ū⁽ᵏ⁾ + Iy·v̄⁽ᵏ⁾ + It) / (α² + Ix² + Iy²)

where ū and v̄ denote the horizontal and vertical neighborhood averages of u and v. These are initially set to zero and are then computed as the weighted average of the values at neighboring points, using the kernel in Fig. 3. The smoothness weight (α) plays an important role where the brightness gradient is small, and a suitable value must be determined.

Fig 3. Weighted average kernel at neighboring points.

Owing to the characteristics of the HS algorithm, applying it with the BFB kernel keeps the algorithm simple while giving reasonable performance and better quality. However, it has two major drawbacks [3]: the value of the smoothing weight (α) cannot be defined exactly, since the suitable value varies between image sequences; and the suitable number of iterations also cannot be defined in advance for the best outcome, which impacts the processing time needed to obtain the best motion vector.

IV.
EXPERIMENTAL RESULTS
In this section, the algorithms are tested in a Matlab program with AVI-format input video files. The detected moving regions are marked with red pixels. Although these two algorithms are quite simple, they lead to satisfactory results. The highly accurate moving object boundaries obtained from this approach can be combined with other moving-region extraction results in order to obtain good segmentations.
Fig 4. Difference of frames 20/21/22, Td = 30

Fig 5. Difference of frames 20/21, Waccum = 0.5, Td = 15

Figures 6 and 7 present the results obtained by the two optical flow algorithms (Lucas-Kanade and Horn-Schunck). One can notice that the detection accuracy of the Horn-Schunck algorithm is higher than that of the Lucas-Kanade algorithm. However, for small objects, the Lucas-Kanade algorithm seems to be more effective.

Fig 6. Lucas-Kanade's algorithm with 2 grayscale image frames 130/131

Fig 7. Horn-Schunck's algorithm with 2 grayscale image frames 100/101, α = 25, 95 iterations

V. CONCLUSIONS AND FUTURE WORK

In this paper, we have introduced algorithms for moving object detection and some experimental results achieved with them. The results agree with the underlying theory, and the algorithms detect the foreground effectively. We observe that the temporal differencing method gives object boundaries, while the optical flow method gives object regions.

In future work, we aim to improve the moving object detection algorithm by researching new approaches that integrate the advantages of the temporal differencing and optical flow methods. Furthermore, morphological processing methods can be applied to obtain better results.

ACKNOWLEDGMENTS
I am grateful to Dr. Truong Cong Dung Nghi for the useful advice that helped me complete this work.

REFERENCES
[1]
J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Performance of optical flow techniques," International Journal of Computer Vision, 12(1):43-77, 1994.
[2] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, 17(1-3):185-203, 1981.
[3] D. Kesrarat and V. Patanavijit, "Tutorial of motion estimation based on Horn-Schunck optical flow algorithm in MATLAB," 15(1):8-16, Jul. 2011.
[4] D. J. Fleet and Y. Weiss, "Optical flow estimation," in Mathematical Models for Computer Vision: The Handbook, N. Paragios, Y. Chen, and O. Faugeras (eds.), Springer, 2005.
[5] Gottipati Srinivas Babu, "Moving object detection using Matlab," International Journal of Engineering Research & Technology, vol. 1, issue 6, Aug. 2012.
[6] N. Lu et al., "An improved motion detection method for real-time surveillance," IAENG International Journal of Computer Science, Feb. 2008.
[7] Y. Tian and A. Hampapur, "Robust salient motion detection with complex background for real-time video surveillance," IEEE Computer Society Workshop on Motion and Video Computing, Breckenridge, Colorado, Jan. 5-6, 2005.
[8] Y. Dedeoğlu, "Moving object detection, tracking and classification for smart video surveillance," M.S. thesis, Aug. 2004.
[9] Wikipedia, "Lucas-Kanade method," Feb. 20, 2007. [Online]. Available: http://en.wikipedia.org/wiki/Lucas_Kanade_method
[10] Y. Shan and R. S. Wang, "Improved algorithms for motion detection and tracking," Optical Engineering, vol. 45, no. 6, June 2006.
[11] The MathWorks, Inc., "Image Processing Toolbox User's Guide," version 8.3. [Online]. http://www.mathworks.com
[12] K. E. Appiah, "Smart Detector: An intelligent hardware based video surveillance," M.S. thesis, Stockholm, Jun. 2004.
[13] S. Seitz et al., "Motion and optical flow," 2013.
[14] Gonzalo Vaca-Castano, "Matlab Tutorial. Optical Flow," 2013.
Object Surface Reconstruction Thanh-Hai Tran-Truong Department of Automatic Control Ho Chi Minh City University of Technology Ho Chi Minh City, Vietnam
[email protected]

Abstract — Three-dimensional perception has recently gained more and more importance in robotics as well as other fields, thanks to great improvements in its effectiveness. In this paper we present a simple but efficient system to calculate 3D data from 2D images in order to construct the object's surface as a point set. This point set is created by a method that combines several laser scans captured from a single camera. Our work employs Moving Least Squares (MLS), derived from the Point Cloud Library (PCL), as the underlying representation of an object's surface. The Greedy Projection and Poisson algorithms are also used to create a triangulation in which the triangle edges respect the geometrical relations of the surface rather than the sampling density of the range scans. This application can be integrated into many fields, such as hybrid rendering, physical parameter estimation, laser scan registration, and object recognition.

Keywords — surface structure; surface texture; reconstruction algorithms
I.
INTRODUCTION
Surface reconstruction is one of the most pivotal trends in computer vision and computer graphics. Our main task is to reconstruct a highly accurate model of the object from an unorganized set of points that best fits the model. As one of the most widely accepted graphic representations, with many supporting libraries, polygonal meshes are used to represent the object's three-dimensional model after the scanning and reconstruction process. To assemble the laser scans into a 3D model, the calculation of the point set has to deal with the unknown geometrical parameters of the model and with the interference that affects our measurements. The unavoidable presence of noise and outliers is the main challenge; once these factors are well handled, the reconstruction process can produce great results. Many processes in computer graphics, such as object identification, surface reconstruction, plane separation, and 3D model comparison, require information about the direction of reference points in order to orient the surface in three-dimensional space. The sensor scans the surface of 3D objects and extracts discrete points that contain an amount of noise. During the sampling process, information about the direction and curvature of the surface can be lost to some extent. The process of estimating the orthogonal vector (normal estimation) restores these values for each sampled point using its neighbors. There are two methods to estimate the orthogonal vector: the averaging method and the optimization-based method. We also use an implicit technique to represent the 3D surface, which helps reduce noise in the later process of calculating and estimating polygons. The implicit technique constructs the surface from the mesh's nodes. A discrete surface is defined by a finite number of parameters; we consider here only polygon meshes. Polygon meshes are composed of geometrical and topological connections: the geometry of a connection includes vertex coordinates and normal vectors, while its topology can be represented in various ways. The Poisson and Greedy Projection algorithms are employed to approximate the mesh from points with normal information, without the need for spatial partitioning heuristics and blending, by using a hierarchy of locally supported basis functions. This reduces the problem to a sparse linear system, which can be solved efficiently. After the triangular mesh is formed, the surface can be colored using the data of the object's images from the camera. This coloring process is often inaccurate for points on edges because of imperfect calibration and light interference around the object. There are many kinds of sensors that can produce three-dimensional data, such as the Kinect, LIDAR, and stereo cameras, which suggests increasing the accuracy of the point cloud by fusing data from different sensors. In our work, a single camera and a laser scanner were used for the scanning and capturing process to decrease environmental noise. The camera captures a set of images, which are converted into two-dimensional coordinates. The Point Cloud Library [1] (PCL) is a standalone, large-scale, open project for 2D/3D image and point cloud processing. It implements a set of algorithms designed to help manipulate three-dimensional data, in particular point clouds. The backbone for fast k-nearest-neighbor search operations is provided by FLANN [2] (Fast Library for Approximate Nearest Neighbors). All the modules and algorithms in PCL pass data around using Boost shared pointers, thus avoiding the need to re-copy data already present in the system. A laser scanner is one of the simplest ways to reconstruct an object's three-dimensional surface with high accuracy and efficiency.

II.
THE MATHEMATICS OF TRIANGULATION
This part describes the underlying mathematics used to add a new dimension to the 2D images using the model's calibration parameters. The angle between the camera and the laser beam is the main parameter of the geometric triangulation model [3]. The pinhole model is a simple and popular geometric model for a camera: every 3D point determines a unique line passing through the center of projection. The intersection of a plane of light with the scanned object contains many curved segments. We assume that the locations of the camera and laser are available from a calibration process. Under this assumption, the equations of the projected planes and rays, as well as the equations of the camera rays corresponding to the reference points, are defined by hardware parameters which can be measured. From these measurements, the location of the reference points can be recovered by intersecting the planes or rays of light with the camera rays corresponding to the reference points.
[Equation (1): the triangulation formulas mapping a laser pixel (i, j) to the 3D point P3D(x, y, z), in terms of Cx, Ho and the distances L1, L2, L3.]

Equation (1) is used to convert the 2D image to a 3D point cloud. From the two-dimensional coordinates (i, j) of a laser point, (1) calculates the 3D coordinates P3D(x, y, z) from the identified parameters. Fig. 2 shows the triangulation geometry of equation (1). Cx and Ho are the coordinates of the reference point on the Ox and Oz axes. We can also combine the pinhole camera model to identify the projected points of the object on the image plane. Based on this model, (1) uses the projections of the object's points on the image plane, together with the other parameters such as the distances of the camera, the laser, and the reference point, to calculate the coordinates of P3D. The object is rotated by 1.8 degrees after the coordinates of the points on the laser scan are determined.
Fig 2. Second orthogonal projection plane Oxz: geometrical triangulation parameters viewed from the side. The parameter L3 is the distance between the camera and the origin on the Oz axis.
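The exact form of equation (1) did not survive extraction, but the core of camera-laser triangulation is a ray-plane intersection, which can be sketched under an assumed pinhole geometry. The calibration values fx, fy, cx, cy and the plane description below are hypothetical stand-ins for the paper's measured parameters, not its exact parameterization.

```python
import numpy as np

def pixel_ray(u, v, fx, fy, cx, cy):
    # Back-project pixel (u, v) into a viewing ray through the camera centre
    # (pinhole model: focal lengths fx, fy and principal point cx, cy in pixels).
    return np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

def intersect_laser_plane(ray, plane_point, plane_normal):
    # The laser sheet is a plane known from calibration. With the camera centre
    # at the origin, the ray is P(t) = t * ray; solving n.(t*ray - p0) = 0
    # gives the scale t, hence the 3D point on the object surface.
    t = np.dot(plane_normal, plane_point) / np.dot(plane_normal, ray)
    return t * ray
```

For example, a pixel 50 columns right of the principal point with fx = 500 yields the ray (0.1, 0, 1); intersecting it with the plane x = 0.1 recovers the 3D point (0.1, 0, 1).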
Fig 3. Conversion of a sequence of 2D images to a 3D point cloud. The image processing algorithm uses the OpenCV APIs to extract the set of coordinates P2D(i, j) of the points on the laser scan. After this conversion, the point cloud becomes the input of the pre-processing stage.
Fig 1. First orthogonal projection plane Oxy. The parameters L1 and L2 are the distances from the camera and the laser device to the origin on the Oxy plane.
Fig 4.Actual scanning chamber with the sensors positioned inside.
III. POINT CLOUD PRE-PROCESSING
After conversion, the raw point cloud contains several thousand noisy points, which complicates reconstruction. The pre-processing stage [3] therefore removes noise, down-samples, and reorganizes the point cloud to reduce processing time and increase the efficiency of the reconstruction algorithm.
Each filter fits a simulated plane containing the k neighboring points selected by Principal Component Analysis (PCA) [6]. An advantage of this approach is that it works on general, unorganized point clouds, so moving least squares (MLS) smoothing can be applied as a post-processing step on the global point cloud (the concatenation of registered point clouds) in order to smooth out any remaining noise or registration imperfections. Moreover, MLS provides a complete set of tools for smoothing and for filling the holes caused by failed scans.
Fig 5. Point cloud pre-processing. The Kd-tree technique was introduced by Jon Bentley in 1975 [4]. It is a method for dividing and reorganizing point cloud data in k-dimensional space; using it, the time for searching and processing the point cloud is significantly reduced. For each point in the point cloud, a k-neighborhood is selected by searching for the reference point's k nearest neighbors within a sphere of a defined radius. A Kd-tree is similar to a decision tree, except that each branch splits the parent at the median value along the dimension with the highest variance. Each node in the tree is defined by a plane through one of the dimensions that partitions the set of points into left/right (or up/down) sets, each containing half of the points of the parent node. These children are again partitioned into equal halves using planes through a different dimension, and the partitioning stops after log(n) levels, with each point in its own leaf cell. Kd-trees are known to work well in low dimensions but degrade as the dimensionality grows beyond three. The algorithm below describes the steps for searching a point cloud arranged in a Kd-tree.

Algorithm 1: Nearest Neighbor Search
1. Locate where the query point would be inserted into the tree: starting at the root node, move down the tree recursively, going left or right depending on whether the point is less than or greater than the current node in the split dimension.
2. On reaching a leaf node, save that node as the current best; as the tree is traversed, record the distance between the query point and the current node.
3. Walk back up the tree, evaluating each branch that could still contain points within the current minimum distance.
4. The search completes when the algorithm reaches the root node.
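The build-and-search procedure above can be sketched in a few dozen lines. This is an illustrative toy implementation (variance-based split, median node) rather than PCL's or FLANN's production code.

```python
import numpy as np

def build_kdtree(points):
    # Split on the dimension with the highest variance, at the median point,
    # as described in the text; recursion stops at empty leaves.
    pts = np.asarray(points, dtype=float)
    if len(pts) == 0:
        return None
    axis = int(np.argmax(pts.var(axis=0)))
    order = np.argsort(pts[:, axis])
    mid = len(pts) // 2
    return {
        "point": pts[order[mid]],
        "axis": axis,
        "left": build_kdtree(pts[order[:mid]]),
        "right": build_kdtree(pts[order[mid + 1:]]),
    }

def nearest(node, query, best=None):
    # Steps 1-4 of Algorithm 1: descend toward the query, record the best
    # distance seen, then backtrack into branches that could still win.
    if node is None:
        return best
    d = np.linalg.norm(node["point"] - query)
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[0]:  # the far branch may still hold a closer point
        best = nearest(far, query, best)
    return best
```

In practice PCL delegates this search to FLANN, which also offers approximate variants that scale better in higher dimensions.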
Fig 6. The result of applying the MLS filter to the point cloud. The optimization-based method estimates the normal vector by minimizing (2) using the PlanePCA [7] technique. In (2), J is the objective function, pi is the reference point, Qi its neighborhood, and ni the normal vector. PlanePCA minimizes the deviation of the points around an estimated axis and selects the axis with the smallest deviation. Computationally, the principal components are found by calculating the eigenvectors and eigenvalues of the covariance matrix; this is equivalent to finding the axis system in which the covariance matrix is diagonal. The eigenvector with the largest eigenvalue is the direction of greatest variation; the one with the second-largest eigenvalue is the (orthogonal) direction of the next-highest variation, and so on. Each eigenvalue [8] corresponds to an eigenvector; the surface normal is the eigenvector associated with the smallest eigenvalue, a column vector storing the x, y, z components of the normal direction.
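The PlanePCA estimate can be sketched directly from the covariance eigendecomposition: the best-fit plane's normal is the direction of least variance, i.e. the eigenvector with the smallest eigenvalue.

```python
import numpy as np

def estimate_normal(neighbors):
    # PlanePCA-style normal estimation: build the covariance matrix of the
    # centred neighborhood and take the eigenvector of the smallest eigenvalue.
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # least-variance direction = normal
```

For points lying exactly on the z = 0 plane, the smallest eigenvalue is zero and the returned normal is (0, 0, ±1); the sign is arbitrary and is usually flipped toward a viewpoint in practice.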
min over ni of J(pi, Qi, ni)        (2)
The Moving Least Squares (MLS) [5] filter removes noise and smooths the point cloud by estimating and comparing normal vectors. The filter visits each point of the three-dimensional data and fits a simulated plane through its neighborhood.
IV. SURFACE RECONSTRUCTION PROCESS
The Greedy Projection algorithm [9] is based on surface-growing rules: it maintains a list of points from which the mesh can be grown and extended until all possible points are connected. The algorithm's input is the point cloud together with its sets of normal vectors.

Algorithm 2: Greedy Projection Algorithm
1. Nearest neighbor search: for each point p in the point cloud, a k-neighborhood is selected by searching for p's k nearest neighbors within a sphere of radius r.
2. Neighborhood projection using tangent planes: the neighborhood is projected onto a plane that is approximately tangential to the surface formed by the neighborhood, and ordered around p.
3. Pruning: points are pruned by visibility and distance criteria, then connected to p and to consecutive points by edges, forming triangles that satisfy a maximum angle criterion and an optional minimum angle criterion.
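Step 2 above, projecting the neighborhood onto the local tangent plane and ordering it around p, can be sketched as follows. This is an illustrative fragment, not PCL's GreedyProjectionTriangulation; it assumes the first neighbor does not coincide with p.

```python
import numpy as np

def project_to_tangent_plane(p, neighbors, normal):
    # Project each neighbor onto the plane through p with the given surface
    # normal, then order the projections by angle around p (step 2 of Alg. 2).
    p = np.asarray(p, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    proj = [q - np.dot(q - p, n) * n for q in
            (np.asarray(q, dtype=float) for q in neighbors)]
    # Build an in-plane frame (u, v) to measure angles around p.
    u = proj[0] - p
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    angles = [np.arctan2(np.dot(q - p, v), np.dot(q - p, u)) for q in proj]
    return [proj[i] for i in np.argsort(angles)]
```

The pruning step (3) then walks this angular ordering, discarding occluded or distant points before forming triangle edges.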
Fig 8. Object's surface using the Poisson algorithm.

V. CONCLUSIONS
In this paper we have presented a productive method for object surface reconstruction using a single camera and a laser scanner. We successfully applied the mathematical triangulation model to convert 2D data to 3D and reconstruct the object's surface. As low-cost cameras and lasers enter the market, we expect students and hobbyists to begin incorporating them into their own 3D scanning systems; such laser-camera systems have already received a great deal of attention in recent academic publications. Finally, an idea guiding our future work: increasing the laser's power and the camera's speed [11] to improve the accuracy of the point cloud.
Fig 7. Object's surface using the Greedy Projection algorithm. The Marching Cubes algorithm [10] is popular in the field of surface reconstruction. It assumes that each point is a vertex of a cube and creates polygons connecting the points within each cube. Each edge and vertex of the cube is weighted to create a cube table. After determining which edges or polygon vertices can intersect, the algorithm connects the cubes and forms the surface of the primitive objects. The Poisson algorithm creates a stable surface for the object by using implicit functions to approximate the surface, splitting it into smaller surfaces with the Marching Cubes algorithm.
Fig 9. Removing noise using MLS algorithm.
ACKNOWLEDGMENT
We would like to send many thanks to the Club for Science Research of the Faculty of Electrical and Electronics Engineering for their tremendous contribution and support, both moral and financial, towards the completion of this paper.

Fig 10. Above: the result of the Greedy Projection algorithm. Below: the result of the Poisson algorithm. The Poisson algorithm gives a better result than Greedy Projection but takes much longer: approximately 25 minutes for 80,000 points.

REFERENCES
[1] A. E. Ichim, "PCL Toyota Code Sprint 2.0", 2013.
[2] J. Wang, "Surface Reconstruction from Imperfect Point Models", 2007.
[3] A. E. Ichim, "RGB-D Handheld Mapping and Modeling", 2013.
[4] N. B. F. Fernandez, "3D Registration for Verification of Humanoid Justin's Upper Body Kinematics", 2012.
[5] Y. Li, "Low Cost 3D Scanner: Background Segmentation and Visual Hull Obtaining", 2012.
[6] F. S. I. Paniagua, "Object Recognition using Kinect", 2011.
[7] D. Lanman, G. Taubin, "Build Your Own 3D Scanner: 3D Photography for Beginners", 2009, pp. 65-68.
[8] A. Wetzler, "Low Cost 3D Laser Scanning Unit with Application to Face Recognition", 2005.
[9] J. Hyvarinen, "Surface Reconstruction of Point Clouds Captured with Microsoft Kinect", 2012.
[10] M. Bertshe, "Recurring Adaptive Segmentation and Object Recognition", 2012.
[11] F. Engelmann, "FabScan - Affordable 3D Laser Scanning of Physical Objects", 2011.
ORAL SESSION C
Modelling and PID Control of a BLDC Motor

Quang-Vu Nong
Anh-Quan Nguyen
Department of Power Delivery Faculty of Electrical and Electronics Engineering Ho Chi Minh city University of Technology
[email protected]
Department of Power Delivery Faculty of Electrical and Electronics Engineering Ho Chi Minh city University of Technology
[email protected]
Abstract — This paper presents a model of a three-phase star-connected brushless direct current (BLDC) motor and studies the construction and operation of the BLDC motor under different control strategies. The dynamic performance and characteristics of the BLDC motor, such as speed, torque, current, and input voltage, were observed and analyzed using a model developed in MATLAB. The simulations show that the control strategies can be developed to achieve higher efficiency in BLDC operation.

Keywords — PID controller, BLDC motor drives, modelling.
I. INTRODUCTION
The Brushless Direct Current (BLDC) motor is rapidly growing in popularity. BLDC motors are used in a wide range of applications due to their advantageous features: good speed-torque characteristics, high dynamic response, high efficiency, long operating life, noiseless operation, wide speed ranges, and so on; more detailed features can be found in [1]. In addition, the torque-to-size ratio is higher, making BLDC motors useful in applications where space and weight are critical factors. To replace the function of commutators and brushes, the BLDC motor requires an inverter and a position sensor that detects the rotor position for proper commutation of the current. The rotation of a BLDC motor is based on feedback from Hall sensors; a BLDC motor usually uses three Hall sensors to determine the commutation sequence. In a BLDC motor, heat losses can easily be transferred through the frame or through the cooling systems commonly used in large machines. Therefore, BLDC motors have many advantages over DC motors and induction motors. The most common controllers are PI (proportional-integral) controllers because they are simple and easy to understand. PI controllers are conventionally used for speed control, while P controllers are used for current control, in order to achieve a high-performance drive. Fuzzy logic can also be considered for controlling the BLDC motor: it has been reported that fuzzy logic controllers are more robust to changes of the plant parameters than classical PI controllers, and that they have better noise rejection capabilities. In this paper, PID control is used for the speed control of a BLDC motor. The paper is organized as follows: Section 2 explains the construction and operating principle of the BLDC motor; Section 3 elaborates the mathematical model; Section 4 presents the
simulation model. The simulation results are presented in detail in Section 5, and Section 6 concludes the paper.

II. CONSTRUCTION AND OPERATING PRINCIPLE
The BLDC motor is a synchronous motor constructed with permanent magnets on the rotor and winding coils in the stator. The magnetic fields generated by the stator and by the rotor have the same frequency, so BLDC motors do not experience the "slip" that exists in induction motors.
Figure 1. Slotted and Slotless motor.
A. Stator
Figure-2. The Stator of a BLDC motor
Similar to induction motors, the BLDC motor stator is a stack of laminated steel that holds the windings. The steel laminations in the stator can be slotted or slotless, as shown in Figure-1. The stator windings can be arranged in two fashions, star or delta; the major difference between the two patterns is that the star pattern gives high torque at low speed while the delta pattern gives low torque at low speed. This is because in the delta configuration half of the voltage is applied across the winding that is not driven, increasing losses. Each winding is distributed over the stator periphery to create an even number of poles, as shown in Figure-2 [1].

B. Hall sensor
The commutation of a BLDC motor is controlled electronically. To rotate the motor, the stator windings should be energized in sequence, so it is important to know the rotor position in order to infer which winding needs to be energized. The rotor position is sensed using Hall sensors embedded in the stator; most BLDC motors have three Hall sensors embedded on the non-driving end of the motor, as shown in Figure-3. Whenever the rotor magnets pass near a Hall sensor, it gives a high or low electric signal, and from the combination of these signals we can determine the commutation sequence.
Pulse Width Modulation (PWM) is used to turn the switches ON and OFF. To change the speed, the signal should be pulse-width modulated at a frequency much higher than the motor frequency. When the duty cycle of the PWM is reduced within the sequences, the voltage supplied to the stator is reduced, which brings down the speed. If the DC supply voltage is higher than the motor's rated voltage, the motor can be controlled by limiting the PWM duty cycle, so the controller can output whatever average voltage is suitable. Table I shows how the voltage and current through the winding coils are controlled.

Figure-3. Rotor and Hall sensor

TABLE I. COMMUTATION SEQUENCE

Seq.  Interval (deg)  H1 H2 H3  Switches ON  A    B    C
0     0 - 60          1  0  0   Q1 - Q4      +    -    off
1     60 - 120        1  1  0   Q1 - Q6      +    off  -
2     120 - 180       0  1  0   Q3 - Q6      off  +    -
3     180 - 240       0  1  1   Q3 - Q2      -    +    off
4     240 - 300       0  0  1   Q5 - Q2      -    off  +
5     300 - 360       1  0  1   Q5 - Q4      off  -    +
III. MATHEMATICAL MODELLING OF BLDC MOTOR
The BLDC motor can be simulated similarly to a three-phase synchronous machine, but the permanent magnets mounted on the rotor introduce somewhat different dynamic characteristics. As with ordinary three-phase motors, the BLDC motor is powered by a three-phase voltage source, as shown in Figure-5. The supply can be sinusoidal, square-wave, or another waveform, as long as the peak voltage does not exceed the maximum voltage limit of the motor [2].
C. Operating theory
In each revolution, one of the windings is energized positive, a second is energized negative, and the third is non-energized. Torque is produced by the interaction between the magnetic field of the stator winding coils and that of the permanent magnets. The peak torque occurs when the two fields are perpendicular to each other and falls off as the fields move together. As the magnetic fields shift position, the rotor catches up with the stator field, keeping the motor rotating.

D. Commutation sequence
For every 60 electrical degrees of rotation, the Hall sensors change state, so it takes six steps to complete one electrical cycle [8]. Synchronously, the phase currents are switched every 60 electrical degrees. However, one mechanical revolution of the rotor may not correspond to one electrical cycle: the number of electrical cycles needed to complete a mechanical rotation depends on the number of rotor pole pairs, according to the formula:
electrical cycles per mechanical revolution = number of rotor pole pairs = P/2, where P is the number of rotor poles.
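The relation can be captured in a one-line helper; the function name is ours, not the paper's.

```python
def mechanical_degrees(electrical_degrees, pole_pairs):
    # One mechanical revolution spans `pole_pairs` electrical cycles,
    # so mechanical angle = electrical angle / pole pairs.
    return electrical_degrees / pole_pairs
```

For example, a 4-pole motor (2 pole pairs) needs 720 electrical degrees, i.e. two electrical cycles, per mechanical revolution.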
A 3-phase inverter is used to control the BLDC motor. Its six switches are controlled by the Hall sensor signals.
Figure-5. Motor circuit model
The voltage equations of the BLDC motor can be written as:

va = R.ia + L.dia/dt + M.dib/dt + M.dic/dt + ea
vb = R.ib + M.dia/dt + L.dib/dt + M.dic/dt + eb
vc = R.ic + M.dia/dt + M.dib/dt + L.dic/dt + ec

Converting the previous equations into matrix form, the voltage equation becomes:

[va]   [R 0 0] [ia]   [L M M]      [ia]   [ea]
[vb] = [0 R 0] [ib] + [M L M] d/dt [ib] + [eb]
[vc]   [0 0 R] [ic]   [M M L]      [ic]   [ec]
To simplify the equations, the following assumptions are made:
- Magnetic circuit saturation is neglected.
- The stator resistance and the self and mutual inductances of all phases are equal and constant.
- Eddy current losses are neglected.
- All semiconductor switches are ideal.
Figure-6. Typical waveforms of back-EMF and the corresponding stator currents.[8]
Since ia + ib + ic = 0, the voltage equation becomes:

[va]   [R 0 0] [ia]   [L-M  0   0 ]      [ia]   [ea]
[vb] = [0 R 0] [ib] + [ 0  L-M  0 ] d/dt [ib] + [eb]
[vc]   [0 0 R] [ic]   [ 0   0  L-M]      [ic]   [ec]

Rearranged into state-space form:

dia/dt = (va - R.ia - ea) / (L - M)
dib/dt = (vb - R.ib - eb) / (L - M)
dic/dt = (vc - R.ic - ec) / (L - M)
IV. SIMULATION MODEL
The complete block diagram of the speed control of the three-phase BLDC motor is shown in Figure-7 below. Two control loops are used: the inner loop synchronizes the inverter gate signals with the electromotive forces, while the outer loop controls the motor's speed by varying the DC bus voltage.

The electromagnetic torque is given as:

Te = (ea.ia + eb.ib + ec.ic) / wm

The equation of motion is given as:

J.dwm/dt = Te - Tl - B.wm

Figure-7. The complete block diagram of speed control
where:
L - armature self-inductance [H]
R - armature resistance [Ohm]
va, vb, vc - terminal phase voltages [V]
ia, ib, ic - motor input currents [A]
ea, eb, ec - motor back-EMFs [V]
Te - total output torque [Nm]
Tl - load torque [Nm]
J - inertia of the rotor and coupled shaft [kg.m^2]
B - friction constant [N.m.s/rad]

The typical waveforms are shown in Figure-6.

Figure-8. Simulation of the decoder block
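The electrical and mechanical equations above can be integrated numerically; a minimal Euler sketch follows. The motor constants and the idealized trapezoidal back-EMF shape are illustrative assumptions, not the paper's MATLAB parameters.

```python
import math

# Hypothetical motor constants (not taken from the paper):
R, LM = 0.5, 1.0e-3   # phase resistance [ohm], L - M [H]
KE = 0.05             # back-EMF constant [V.s/rad]
J, B = 1.0e-4, 1.0e-4 # rotor inertia [kg.m^2], friction [N.m.s/rad]

def F(theta):
    # Normalised trapezoidal back-EMF shape, period 2*pi, range [-1, 1]:
    # flat for 120 electrical degrees, linear ramps for 60 degrees.
    t = theta % (2 * math.pi)
    if t < 2 * math.pi / 3:
        return 1.0
    if t < math.pi:
        return 1.0 - 6 / math.pi * (t - 2 * math.pi / 3)
    if t < 5 * math.pi / 3:
        return -1.0
    return -1.0 + 6 / math.pi * (t - 5 * math.pi / 3)

def step(state, v_abc, t_load, dt=1e-5):
    # One explicit Euler step of the state-space model derived above.
    ia, ib, ic, w, th = state
    sh = 2 * math.pi / 3
    fa, fb, fc = F(th), F(th - sh), F(th + sh)
    ea, eb, ec = KE * w * fa, KE * w * fb, KE * w * fc
    ia += dt * (v_abc[0] - R * ia - ea) / LM  # di/dt = (v - R.i - e)/(L - M)
    ib += dt * (v_abc[1] - R * ib - eb) / LM
    ic += dt * (v_abc[2] - R * ic - ec) / LM
    te = KE * (fa * ia + fb * ib + fc * ic)   # electromagnetic torque
    w += dt * (te - B * w - t_load) / J       # equation of motion
    th += dt * w
    return (ia, ib, ic, w, th)
```

Driving phase a positive and phase b negative from standstill produces a positive torque and the speed ramps up, mirroring the start-up transients plotted in Section V.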
The driving circuitry consists of a three-phase power converter whose six power transistors energize two BLDC motor phases concurrently. The rotor position, which determines the switching sequence of the MOSFET transistors, is detected by three Hall sensors mounted on the stator. The Decoder block generates the back-EMF signal vector from the Hall sensor information and the reference current.
A. Decoder block
The basic idea for running the motor in the opposite direction is to apply the opposite current. Based on this, Table II gives the back-EMF for clockwise motion; the gate logic transforming the electromagnetic forces into the six gate signals is given in Table III. The simulation is shown in Figure-8.

TABLE II. DECODER BLOCK - CLOCKWISE ROTATION

Hall A  Hall B  Hall C   EMF A  EMF B  EMF C
0       0       0         0      0      0
0       0       1         0     -1      1
0       1       0        -1      1      0
0       1       1        -1      0      1
1       0       0         1      0     -1
1       0       1         1     -1      0
1       1       0         0      1     -1
1       1       1         0      0      0
B. Gate block
The Gate block transforms the electromagnetic forces from the Decoder into the on-off signals that drive the six power transistors of the converter, as given in Table III. The simulation is shown in Figure-9.

TABLE III. TRUTH TABLE OF THE GATE BLOCK

EMF A  EMF B  EMF C   Q1  Q2  Q3  Q4  Q5  Q6
 0      0      0       0   0   0   0   0   0
 0     -1      1       0   0   0   1   1   0
-1      1      0       0   1   1   0   0   0
-1      0      1       0   1   0   0   1   0
 1      0     -1       1   0   0   0   0   1
 1     -1      0       1   0   0   1   0   0
 0      1     -1       0   0   1   0   0   1
 0      0      0       0   0   0   0   0   0
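Tables II and III are pure lookup logic, so they translate directly into code. The sketch below encodes the decoder as a dictionary and derives the gate signals from the rule visible in Table III: a positive phase EMF closes that phase's upper switch (Q1/Q3/Q5), a negative one closes the lower switch (Q2/Q4/Q6).

```python
# Back-EMF decoder for clockwise rotation (Table II).
# Keys: (hall_a, hall_b, hall_c); values: signed EMF vector (a, b, c).
DECODER_CW = {
    (0, 0, 0): (0, 0, 0),
    (0, 0, 1): (0, -1, 1),
    (0, 1, 0): (-1, 1, 0),
    (0, 1, 1): (-1, 0, 1),
    (1, 0, 0): (1, 0, -1),
    (1, 0, 1): (1, -1, 0),
    (1, 1, 0): (0, 1, -1),
    (1, 1, 1): (0, 0, 0),
}

def gates(emf):
    # Gate block (Table III): returns (Q1, Q2, Q3, Q4, Q5, Q6).
    ea, eb, ec = emf
    return (int(ea == 1), int(ea == -1),
            int(eb == 1), int(eb == -1),
            int(ec == 1), int(ec == -1))
```

Reversing the rotation amounts to negating every EMF entry, which swaps each high-side gate with its low-side partner.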
Figure-10. PID controller
A PID controller is a simple three-term controller; the letters P, I, and D stand for Proportional, Integral, and Derivative. The transfer function of its most basic form is:

G(s) = KP + KI/s + KD.s

where KP is the proportional gain, KI the integral gain, and KD the derivative gain. The control signal u from the controller to the plant is equal to the proportional gain times the error, plus the integral gain times the integral of the error, plus the derivative gain times the derivative of the error:

u(t) = KP.e(t) + KI.integral(e(t) dt) + KD.de(t)/dt
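In discrete time the PID law above becomes a running sum for the integral and a backward difference for the derivative, as in this minimal sketch (class and parameter names are ours):

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*(e - e_prev)/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt          # accumulate the integral term
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * derivative
```

In the Simulink model the controller output sets the DC bus voltage; a practical drive would also clamp the output and the integral term (anti-windup), which is omitted here for brevity.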
V. SIMULATION RESULTS
Fig. 11 shows the performance of the conventional PID controller of the BLDC motor at a reference speed of 1500 rpm under no-load conditions: (a) speed, (b) torque, (c) stator back EMF, and (d) stator current. Fig. 12 shows the same quantities at a reference speed of 1500 rpm with a 2 N.m load.

Figure-9. Simulation of the Gate block

C. Speed controller block
Consider the characteristic parameters, proportional (P), integral (I), and derivative (D) control, as applied to the diagram in Figure-10.

Figure-11a. Speed under no-load conditions.
Figure-11b. Torque under no-load conditions.
Figure-12a. Speed at 1500 rpm with a 2 N.m load.
Figure-11c. Back EMF under no-load conditions.

Figure-12b. Torque at 1500 rpm with a 2 N.m load.
Figure-11d. Stator current under no-load conditions.
Figure-12c. Stator back EMF at 1500 rpm with a 2 N.m load.
ACKNOWLEDGMENT
This research was carried out at the Power Electronics Lab. We would like to express our very great appreciation to the lecturers of the Power Delivery Department for their valuable and constructive suggestions during the planning and development of this research work.
REFERENCES
[1] Padmaraja Yedamale, "Brushless DC (BLDC) Motor Fundamentals", Microchip Application Note AN885, 07/2003.
[2] C. Umayal, B. Janani and S. Rama Reddy, "Digital Implementation of PFC Half Bridge Converter Fed PMBLDC Motor Using Microcontroller", Vol. 7, No. 2, February 2012, ISSN 1819-6608.
[3] Tan Chee Siong, Baharuddin Ismail, Siti Fatimah Siraj, Mohd Fayzul Mohammed, "Fuzzy Logic Controller for BLDC Permanent Magnet Motor Drives", IJECS-IJENS, Vol. 11, No. 02, 04/2011.
[4] A. Purna Chandra Rao, Y. P. Obulesh and Ch. Sai Babu, "Mathematical Modeling of BLDC Motor with Closed Loop Speed Control Using PID Controller Under Various Loading Conditions", ISSN 1819-6608, Vol. 7, No. 10, October 2012.
[5] G. Prasad, N. Sree Ramya, P. V. N. Prasad, G. Tulasi Ram Das, "Modelling and Simulation Analysis of the Brushless DC Motor by Using MATLAB", IJITEE, ISSN 2278-3075, Volume-1, Issue-5, October 2012.
[6] Bilal Akin and Manish Bhardwaj, "Trapezoidal Control of BLDC Motors Using Hall Effect Sensors", Texas Instruments, Application Report SPRABQ6, July 2013.
[7] Vinod Kr Singh Patel, A. K. Pandey, "Modeling and Simulation of Brushless DC Motor Using PWM Control Technique", IJERA, ISSN 2248-9622, Vol. 3, Issue 3, May-June 2013.
[8] Stefán Baldursson, "BLDC Motor Modelling and Control - A Matlab/Simulink Implementation".
Figure-12d. Stator current at 1500 rpm with a 2 N.m load.
VI. CONCLUSION
In this paper, a Simulink model of a BLDC motor was developed, with the aim of building a model that is simple, accurate, and easy to modify. The torque and speed were tested under varying conditions, showing that this model is suitable for studying BLDC motor operation. The BLDC motor is a good choice for many applications due to its high efficiency, high power density, and wide speed range compared to other motor types. In future work, we aim to reduce the percentage overshoot (POT) that affects the torque, and to study a fuzzy logic controller to reject noise better than the PID controller.
Research on Application of Single Wire Earth Return Distribution Systems in Vietnam Duc-Toan Nguyen
Huu-Thanh Nguyen
Department of Power Delivery Faculty of Electrical Electronics Engineering Ho Chi Minh city University of Technology Ho Chi Minh, Vietnam
[email protected]
Department of Power Delivery Faculty of Electrical Electronics Engineering Ho Chi Minh city University of Technology Ho Chi Minh, Vietnam
[email protected]
Abstract — The Single Wire Earth Return (SWER) system is a low-cost power distribution method used internationally in rural or sparsely populated areas, and a key technology for the extension of grid systems. In Australia, many SWER systems cover vast areas: a single SWER system may typically supply 100 kW to several dozen customers and may extend for more than 300 km. The cost benefits of SWER systems have also been utilized by other countries; for example, New Zealand, South Africa, and Brazil have applied this technology to extend rural electricity supply. Currently there is no practical application of or proposal for SWER in Vietnam. While the national grid cannot meet the general demand of the whole population, there are still many areas that cannot provide basic electric lighting to households. This model promises many benefits, so further research and implementation of SWER should be conducted in the future.

Keywords — SWER (Single Wire Earth Return), voltage step setting, shunt reactor modelling.
I. INTRODUCTION
A large number of Vietnamese are deprived of the many advantages of electric energy. According to a World Bank (WB) investigation, more than one million households lack an electricity supply; the mountainous provinces in the North can supply only 20-30% of the citizens' demand in each province. Many solutions have been proposed, but achievement is still quite limited, because not many areas have the favorable natural conditions needed to build plants converting natural energy into electricity. The main barrier to rural electrification is the extremely high cost of grid connections, particularly using conventional standards, with grid expansion costs of 15-30,000 USD per household, leading to a total cost of about 20 billion USD. The World Bank has therefore encouraged the expansion of simple systems for rural electrification to reduce the cost of grid extension. In 1920, Lloyd Mandeno introduced Single Wire Earth Return (SWER) distribution systems in New Zealand; later, in 1947, he published a paper proposing SWER as an economic alternative to the standard three-phase distribution systems for rural areas. SWER distribution lines are used extensively in remote parts of Queensland and other states of Australia as an economic means of delivering electrical energy to small customer loads scattered sparsely over vast areas. These SWER systems are normally supplied from very long three-phase distribution feeders. Consumers are connected by a single-phase transformer with two single-phase outputs in a 220V-0-220V center-tapped arrangement. In earlier Central Queensland systems a consumer transformer was typically 10 kVA, but this has now increased to 25 kVA for a standard connection. SWER is best suited to rural electrification where the load is over 10 km from the existing grid with a maximum load of about 380 kVA, or where the population density is low. Figure 1 shows a typical single-phase installation. In a SWER power distribution network, electricity is distributed using only one conductor, with the return path through the earth. The earth itself forms the current return path of the single-phase system, leading to significant cost savings on conductors, poles, and pole-top hardware compared to conventional systems. However, challenges exist in SWER with regard to earthing and safety, as well as the dependence on earth conductivity to supply consumer loads.
Fig. 1 A single wire earth return consumer connection transformer
II. ABILITY TO APPLY THE SWER MODEL IN VIETNAM
Given the situation mentioned above, a new model should undoubtedly be applied in regions with low population density, to ensure not only the quality of the electricity but also the economic circumstances of the people. SWER is a suitable model to meet these requirements, for the following reasons.
A. The advantages

The advantages of this model are:
- The SWER installation costs are typically about one third and one half of those for conventional three-phase and single-phase systems, respectively. The use of light-weight, high-tensile conductors, together with the reduced weight of stringing a single conductor, allows longer pole spans. SWER systems therefore often require approximately 50% fewer poles (for normal aluminium conductors), no cross-arms, narrower easements and lighter poles, resulting in a marked reduction in costs.
- Electricity quality is ensured and safe for the people.
- The power level is suitable for supplying households of modest means, and is not too high for the economy of the residential areas.

B. The disadvantages

The disadvantages of this model are:
- The grid line requires higher technological standards than the old one for maintenance.
- It is hard to detect problems related to the grounded return path.
- The SWER model returns current through the earth, and this current may be dangerous to residents if the system is not grounded with a proper and safe technique.
- Many terrains in Vietnam are not suitable for the SWER model, such as the Western region of Vietnam (Tra Vinh province), where people live on sand dunes far away from the mainland.
- The SWER model is best applied only up to a capacity of about 380 kVA; it would be a long-term problem if regions with low population density later host investment projects, plants or factories.

In general, the application of this model in Vietnam is entirely feasible.

III. MODELING OF NETWORK WITH SWER

The line impedances per unit length are:

Z_aa = r_a + j·4π·10⁻⁴·f·ln(2·h_a/GMR)   [Ω/km]

Z_gg = π²·10⁻⁴·f − j·0.0386·8π·10⁻⁴·f + j·4π·10⁻⁴·f·ln(…)   [Ω/km]

Z_ag = −0.5·j·4π·10⁻⁴·f·ln(…)   [Ω/km]

Where:
Z_aa is the self-impedance of the phase conductor, in Ω/km;
r_a is the resistance of the phase conductor, in Ω/km;
f is the system frequency, in Hz;
h_a is the height of phase conductor a above ground, in m;
GMR is the geometric mean radius of the phase conductor, in m;
Z_gg is the self-impedance of the earth return, in Ω/km;
Z_ag is the mutual impedance between phase conductor a and the earth, in Ω/km;
ρ is the resistivity of the soil, in Ω·m.

Fig. 2 Phase conductors and a dummy network

Considering Figure 2, the voltage drop per unit length for both the overhead line and the ground return can be formulated as follows:

V_a − V_a′ = Z_aa·I_a + Z_ag·I_g
V_g − V_g′ = Z_ag·I_a + Z_gg·I_g

Since I_g = −I_a, we have:

V_a − V_a′ = (Z_aa − Z_ag)·I_a
V_g − V_g′ = (Z_ag − Z_gg)·I_a

ΔV_a = (Z_aa − 2·Z_ag + Z_gg)·I_a = Z_eq(a)·I_a

Fig. 3 Substitution diagram

IV. APPLYING SWER MODEL IN REALITY
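Before walking through the numbers, the Section III line model can be sketched in code. This is an illustration only: the frequency, resistance, height and GMR values below are assumed, not taken from the paper.

```python
import math

def z_self(r_a, f, h_a, gmr):
    """Self-impedance of the phase conductor in Ohm/km:
    Z_aa = r_a + j*4*pi*1e-4*f*ln(2*h_a/GMR), h_a and GMR in the same units."""
    return r_a + 1j * 4 * math.pi * 1e-4 * f * math.log(2 * h_a / gmr)

def z_equivalent(z_aa, z_ag, z_gg):
    """Series loop impedance of the SWER line: Z_eq = Z_aa - 2*Z_ag + Z_gg."""
    return z_aa - 2 * z_ag + z_gg

# Illustrative values (assumed, not from the paper):
f = 50.0       # Hz
r_a = 1.6      # Ohm/km, conductor resistance
h_a = 10.0     # m, conductor height above ground
gmr = 0.004    # m, geometric mean radius

z_aa = z_self(r_a, f, h_a, gmr)   # about 1.6 + j0.535 Ohm/km
```

The same `z_equivalent` combination is what the paper applies in Section IV when it sums the phase and earth-return impedances.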
We choose an aluminium conductor. With a calculated load current of 8.66 A and a factor of 2, the design current is 17.2 A; practically, a conductor with an allowable current of 16.77 A is chosen.

The model wire data for a 10 km length are:

Z_a = 16.9 + j5.35 Ω (phase conductor)
Z_g = 0.49 + j3.64 Ω (earth return)
Z_ag = j5.1·10⁻³ Ω (mutual)
Z_aa = Z_a + Z_g − 2·Z_ag = 17.39 + j8.98 Ω
B = 2π·f·C = 2.9·10⁻…

In per-unit terms (dvtd), the series admittance of each line section is 35.4 − j118.4, so the off-diagonal elements of the bus admittance matrix are −35.4 + j118.4; diagonal elements include 68.89 − j318.73.

For the no-load condition, all loads are zero.

[Table: resulting voltages at nodes 1 to 7 of the network; a representative value is 0.927 per unit]
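The impedance build-up can be checked numerically. This sketch assumes Z_ag = j5.1·10⁻³ Ω, which reproduces the quoted total:

```python
# Check that the 10 km line data combine into the quoted loop impedance:
# Z_aa = Z_a + Z_g - 2*Z_ag  (Z_a, Z_g from the text; Z_ag is an assumed value)
Z_a  = 16.9 + 5.35j   # phase-conductor impedance, Ohm
Z_g  = 0.49 + 3.64j   # earth-return impedance, Ohm
Z_ag = 5.1e-3j        # mutual impedance (assumed), Ohm

Z_aa = Z_a + Z_g - 2 * Z_ag
print(Z_aa)   # Z_aa is about 17.39 + j8.98 Ohm after rounding
```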
Transformer: 12.7/0.22 kV, 100 kVA, with U_n = 5% and ΔP_cu = 2.3 kW, giving

R = ΔP_cu·U²/S² = 37 Ω
X = (U_n/100)·U²/S = 80.6 Ω

Base quantities: S_base = 100 kVA, U_base = 12.7 kV, so

Z_base = U_base²/S_base = 1613 Ω
Y_base = 1/Z_base = 0.00062 S
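The nameplate-to-impedance conversion above can be sketched as follows (standard relations; X is taken equal to the short-circuit impedance, with R neglected, which matches the quoted 80.6 Ω):

```python
# Transformer series impedance from nameplate data (values from the text).
U  = 12.7e3    # rated primary voltage, V
S  = 100e3     # rated power, VA
dP = 2.3e3     # copper losses, W
un = 0.05      # short-circuit voltage, per unit (5%)

R = dP * U**2 / S**2    # copper-loss resistance, about 37 Ohm
X = un * U**2 / S       # short-circuit reactance (R neglected), about 80.6 Ohm

# Per-unit base quantities of the study.
Z_base = U**2 / S       # about 1613 Ohm
Y_base = 1 / Z_base     # about 0.00062 S
```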
The bus admittance matrix Y_bus is the 7×7 matrix of elements Y₁₁ … Y₇₇, assembled from the line-section and transformer admittances above.

[Table: numerical values of the Y_bus elements]

For the full-load condition, the loads are 0.8 + j0.6 per unit.

[Table: resulting voltages at nodes 1 to 7 of the network]

For the normal-load condition, some loads are zero and the others are 0.8 + j0.6 per unit.

[Table: resulting voltages at nodes 1 to 7 of the network]
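A bus-admittance load flow of this kind can be sketched with a simple Gauss-Seidel iteration. The radial feeder below is illustrative; the admittance and load values are placeholders in the spirit of the study, not the paper's exact data.

```python
import numpy as np

# Y-bus of a 7-node radial feeder with identical sections.
n = 7
y = 35.4 - 118.4j                      # per-unit series admittance of one section
Y = np.zeros((n, n), dtype=complex)
for k in range(n - 1):                 # section between node k and k+1
    Y[k, k] += y
    Y[k + 1, k + 1] += y
    Y[k, k + 1] -= y
    Y[k + 1, k] -= y

# Full-load condition of the text: 0.8 + j0.6 per unit drawn at each load node.
S_load = np.array([0] + [0.8 + 0.6j] * (n - 1))

# Gauss-Seidel sweeps; node 1 (index 0) is the slack bus held at 1.0 p.u.
V = np.ones(n, dtype=complex)
for _ in range(1000):
    for i in range(1, n):
        I_i = np.conj(-S_load[i] / V[i])            # load = negative injection
        V[i] = (I_i - (Y[i] @ V - Y[i, i] * V[i])) / Y[i, i]

print(np.round(np.abs(V), 4))   # magnitudes drop toward the feeder end
```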
V. VOLTAGE IMPROVEMENTS FOR SWER SYSTEMS BY SETTING STEP OF VOLTAGE AND SHUNT REACTOR MODELLING

A. Setting step of voltage

Fig. 4 Substitution diagram of transformer

In reality, consider for example a SWER line feeding a village or collection of villages with up to 400 households, more than 10 km or so from the existing grid. Assuming each household has a maximum demand of about 500 W, the total capacity is about 200 kVA, and the voltage drop in the transformers and the line is very high. "Setting step of voltage" can be used to improve the voltage at the loads.

In that case, if the loads are 2 + j1.5 per unit, the voltages at nodes 1 to 7 of the network at full load, without "setting step of voltage", are:

[Table: voltages at nodes 1 to 7, full load, without "setting step of voltage"]

With "setting step of voltage":

[Table: voltages at nodes 1 to 7, full load, with "setting step of voltage"]

The voltage at nodes 2 to 7 was improved.

[Table: voltages at nodes 1 to 7, full-load mode, with a voltage step of 3%]

[Table: voltages at nodes 1 to 7, no-load mode, with a voltage step of 10%]

B. Shunt reactor modelling

The fixed shunt reactors have been initially modelled as a lumped inductance and a resistance giving a Q factor of 50 at 50 Hz. The reactors are connected to the SWER feeder through a single-phase circuit-breaker component, as shown in Fig. 5. The circuit breakers are all controlled from Matlab so that the reactors can be switched on or off. Whilst this is not how they are operated in the real system, it enables the study of voltage regulation with and without the reactors in service.

Fig. 5 Reactor model and control panel

In no-load mode without shunt reactor modelling, the voltage at nodes 2 to 7 exceeds the allowable level of 5%, so shunt reactors are needed to decrease the voltage:

[Table: voltages at nodes 1 to 7, no-load mode, without shunt reactors]

With shunt reactor modelling, the voltage at nodes 2 to 7 is reduced:

[Table: voltages at nodes 1 to 7, no-load mode, with shunt reactors]

ACKNOWLEDGEMENTS

We would like to express our grateful acknowledgement of the guidance of Dr. Nguyễn Văn Liêm in this project.
REFERENCES

[1] Allen R. Inversin, "Reducing the cost of grid extension for rural electrification", NRECA International, Ltd., World Bank Energy Sector Management Assistance Program (ESMAP), February 2000. Available: http://rru.worldbank.org/Documents/PapersLinks/1072.pdf
[2] High Voltage Earth Return for Rural Areas, Fourth Edition, Electricity Authority of New South Wales, June 1978.
[3] N. Chapman, "Australia's rural customers benefit from single wire earth return systems", Trans. Dist., pp. 56-61, Apr. 2001.
[4] J. N. Souza, L. L. Diniz, O. R. Saavedra and J. E. Pessanha, "Generalized modeling of three-phase overhead distribution networks for steady state analysis", IEEE Latin America Transactions, vol. 9, no. 3, June 2011.
Power Quality Analysis for Distribution Systems in Ho Chi Minh City Minh-Khanh Lam, Dinh-Truc Pham, Huu-Phuc Nguyen Faculty of Electrical and Electronics Engineering Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper focuses on the power quality analysis for the distribution systems of Districts 10 and 11 in Ho Chi Minh City. The paper uses the PSS/ADEPT software to analyze the networks and then calculates the voltage sag. A comparison between the short-circuit results obtained with PSS/ADEPT and the practical results is also carried out in the paper, and the differences are then discussed.
I. INTRODUCTION

Both electric companies and electric end users are becoming increasingly concerned about the quality of electric power. The term "power quality" has become one of the most prolific buzzwords in the power industry since the late 1980s. It is an umbrella concept for a multitude of individual types of power system disturbances. The issues that fall under this umbrella are not necessarily new; what is new is that engineers are now attempting to deal with these issues using a system approach rather than handling them as individual problems. Power quality covers a large range of phenomena that take place in electrical systems. Many factors impact the quality of electricity: supply continuity, voltage stability, frequency stability, harmonics, and noise.

II. PSS/ADEPT SOFTWARE
A. Purposes
PSS/ADEPT (the Power System Simulator/Advanced Distribution Engineering Productivity Tool), produced by Shaw Power Technologies, is designed for engineers and technical staff in the electricity sector. PSS/ADEPT is a very powerful and useful tool for designing and analyzing power distribution networks.

B. Special features
PSS/ADEPT is the successor of the older PSS/U software. Unlike PSS/U, which runs in a DOS environment, PSS/ADEPT was developed for Windows operating systems, which gives it several advantages. With its intuitive graphical interface, PSS/ADEPT allows users to design, edit and analyze the grid diagram and grid pattern directly on the screen. Taking advantage of the memory management features of Windows, it can compute networks with a practically unlimited number of nodes; the limit depends not on the software but on RAM and CPU. It can exchange data with other software running on Windows, such as Excel, Access, etc. The utility and support modules of PSS/ADEPT are very useful for managing the distribution grid.

C. PSS/ADEPT modules
Load flow (power-flow problem): analysis and calculation of the voltage, current, real power and reactive power, as well as the phase angle, of each branch and each particular load.
Short circuit (fault module): short-circuit calculations at all nodes of the network, including single-phase, two-phase and three-phase faults.
MSA (motor starting analysis): computes the line voltage and reports the voltage sag at all nodes and branches of a grid when an electric motor is started in that grid.
TOPO (tie open point optimization): finds the open points that give minimum power loss on the three-phase grid.
CAPO (optimal capacitor placement): finds the optimal locations for static and switched capacitors to maximize economic efficiency.
Protection and coordination: analyzes the protection and coordination devices on the network when a fault occurs. This module comes with a very large database of devices with the required protection parameters, and also allows users to edit and add protection devices.
Harmonic: analysis of the parameters and influence of harmonic components on the grid.
DRA (distribution reliability analysis): calculates grid reliability indices such as SAIFI, SAIDI, CAIFI, CAIDI, etc.
III. POWER QUALITY ISSUES IN DISTRIBUTION GRID
A. Voltage sags
Voltage sag is a common power quality problem. It is usually the result of faults in the power system and of the switching actions taken to isolate the faulted sections. It is characterized by RMS voltage variations outside the normal operating range of voltages. Two related values describe the voltage sag phenomenon: the depth of the sag (as a percentage of the nominal level) and its duration. A special case of voltage sag occurs when the voltage value is too low (less than 10% of the prescribed norm); this is called a blackout.

Fig.1. Voltage sags [1]

Causes:
Voltage sags are generally caused by faults (short circuits) on the utility system. A fault makes the current increase, causing a voltage drop along the impedance of the system; the sag level depends on the distance from the survey point to the breakdown point. Types of cause:
A fault on the transmission or distribution lines causes voltage deterioration at all customer positions. The affected time depends on the operating time of the protection equipment (circuit breakers, fuses). The protection equipment isolates the faulted area from the network, interrupting (briefly or for a long time) the electricity supply of the customers in that region.
The switching of large loads in the system, such as asynchronous motors, synchronous motors, and boilers or furnaces using an electric arc, etc.

Effects:
The loads most sensitive to voltage sags are production lines, lighting and safety systems (such as in hospitals, airports and buildings), computer systems (data processing, banking and telecommunications) and the protection devices of the grid. In particular:
Asynchronous motors: when a voltage sag occurs, the torque (proportional to the square of the voltage) drops dramatically, depending on the amplitude and duration of the sag, and the motor may stop spinning. When the sag ends and the voltage returns to normal, the motor must be accelerated again. It then absorbs a current comparable to its starting current, which prolongs the voltage sag. If multiple motors accelerate at the same time, the problem becomes very serious; it can make all the motors inoperable. The high currents and the voltage sag can damage the motors.
Motor speed controller systems: a voltage sag prevents the speed controller from providing the proper voltage to the motor. Control circuits supplied from the grid become inoperable, and the current becomes excessively high when the voltage returns to normal. A single-phase voltage sag unbalances the power supply to the motor.
Computer systems: if the voltage amplitude decreases by more than 10%, data stored on magnetic disks or in memory can be damaged if the computer does not automatically back up data when a voltage sag is detected. The computer cannot work and will automatically shut down.
Lighting systems: sags reduce the life of lamps and lighting devices, and the light intensity provided becomes insufficient.

Solutions:
Reduce the incidence of voltage sags (or blackouts): the power company needs to improve the reliability of the technical infrastructure (routine maintenance and repair, use of underground cable systems); rearranging the system and shortening the length of feeders are also useful methods to reduce voltage sags.
Methods to reduce voltage sags: increase the use of loops by adding substations and connecting equipment; increase the efficiency of the protection equipment (selectivity, automatic reclosing, remote control) to minimize the number of unnecessary operations; increase the short-circuit power of the system; reduce the impact of large electrical machines with frequent switching by using reactive-power capacitors or soft-start equipment that does not increase the current.
Compensating power for industrial and service loads: compensate the load area directly when voltage sags are possible, using energy storage devices located between the station and the load. The reserve devices should be able to supply power for longer than the duration of the incident.
Compensating power at the source: some less critical loads are able to endure voltage sags and blackouts, so attention can be prioritized for the more critical loads. Otherwise, a UPS can be used to stabilize the voltage and keep supplying the equipment for some time after a blackout.

B. Voltage swells
A voltage swell is an increase in the RMS voltage level to 110%-180% of nominal, at the power frequency, for a duration of half a cycle to one minute. It is classified as a short-duration voltage variation phenomenon. A voltage swell is basically the opposite of a voltage sag or dip.
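The magnitude bands quoted above can be turned into a small classifier. This is a sketch using the common IEEE 1159-style magnitude thresholds of 0.1, 0.9 and 1.1 per unit; the full duration categories are given in Table 2.

```python
def classify_rms(v_pu):
    """Rough event classification by RMS magnitude in per unit of nominal,
    following the bands quoted in the text (IEEE 1159 style)."""
    if v_pu < 0.10:
        return "interruption (blackout)"
    if v_pu < 0.90:
        return "sag"
    if v_pu > 1.10:
        return "swell"
    return "normal"

print(classify_rms(0.75))  # sag
print(classify_rms(1.25))  # swell
```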
TABLE 1: VOLTAGE SWELL TYPES [11]
Causes:
Voltage swells are usually associated with system fault conditions, just like voltage sags, but they are much less common. This is particularly true for ungrounded or floating-delta systems, where the sudden change in ground reference results in a voltage rise on the ungrounded phases. In the case of a voltage swell due to a single line-to-ground (SLG) fault on the system, the result is a temporary voltage rise on the un-faulted phases, which lasts for the duration of the fault.
Fig.2. Instantaneous voltage swell due to an SLG fault [11].

Voltage swells can also be caused by the de-energization of a very large load: the abrupt interruption of current can generate a large voltage. Moreover, the energization of a large capacitor bank can also cause a voltage swell, though it more often causes an oscillatory transient.
Effects: although the effects of a sag are more noticeable, the effects of a voltage swell are often more destructive. A swell may cause breakdown of components in the power supplies of equipment, though the effect may be gradual and cumulative. It can cause control problems and hardware failure in the equipment, due to overheating that can eventually lead to shutdown. Electronics and other sensitive equipment are also prone to damage from voltage swells.
Solutions: disconnect all capacitors during light load; reduce excessive discharge of the capacitors by using static capacitors; use a circuit breaker against leakage; use lightning arresters.

Voltage fluctuations

Voltage fluctuations can be described as repetitive or random variations of the voltage envelope due to sudden changes in the real and reactive power drawn by a load. The characteristics of voltage fluctuations depend on the load type and size and on the power system capacity.
Causes: voltage fluctuations are caused when loads draw currents with significant sudden or periodic variations. The fluctuating current drawn from the supply causes additional voltage drops in the power system, leading to fluctuations in the supply voltage. Loads that exhibit continuous rapid variations are thus the most likely cause of voltage fluctuations.
Effects: the foremost effect of voltage fluctuations is lamp flicker, which is harmful to human vision. Voltage fluctuations are a series of voltage sags and swells, so they also have the same effects.
Solutions: increase the fault level at the point of connection; strengthening the system or reconnecting the offending load at a higher voltage level can achieve this. Decrease the reactive power flow through the network due to the load; this may be achieved with a Static VAr Compensator (SVC) and will also help reduce voltage sags. Strengthen the network's reactive power compensation; a larger number of smaller capacitor banks distributed throughout the system allows finer tuning of the reactive power requirements.

C. Voltage unbalance

Voltage unbalance is the phenomenon in which the RMS values of, or the phase angles between, the three phases are not equal, so that negative-sequence and zero-sequence voltage components appear on the grid. The level of unbalance can be calculated as the ratio of the negative-sequence component (U2) (or the zero-sequence component (U0)) to the positive-sequence component (U1).
Causes: the negative- and zero-sequence voltage components derive from the negative- and zero-sequence currents drawn by unbalanced loads on the grid (single-phase or two-phase loads on a three-phase grid). An unbalanced short-circuit fault (other than a three-phase fault) also causes voltage unbalance until the protection equipment completely clears the fault.
Effects: the foremost effect of voltage unbalance is on three-phase asynchronous motors. The negative-sequence reactance of the motor is comparable to its starting reactance, so the unbalanced current is multiplied and the motor heats up. This reduces the life of the motor.
Solutions: balance the three-phase loads; reduce the total impedance upstream of the unbalanced equipment by increasing the transformer rating and the cross-section of the transmission line; install protection devices capable of detecting unbalance; use LC loads with caution.
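The unbalance ratio |U2|/|U1| follows from the Fortescue transform. A sketch with made-up phasor values:

```python
import cmath, math

a = cmath.exp(2j * math.pi / 3)   # 120-degree rotation operator

def sequence_components(Ua, Ub, Uc):
    """Fortescue transform: returns (zero, positive, negative) sequence phasors."""
    U0 = (Ua + Ub + Uc) / 3
    U1 = (Ua + a * Ub + a * a * Uc) / 3
    U2 = (Ua + a * a * Ub + a * Uc) / 3
    return U0, U1, U2

# Slightly unbalanced set (illustrative: phase b low by 5%).
Ua = cmath.rect(1.00, 0.0)
Ub = cmath.rect(0.95, -2 * math.pi / 3)
Uc = cmath.rect(1.00, 2 * math.pi / 3)

U0, U1, U2 = sequence_components(Ua, Ub, Uc)
unbalance = abs(U2) / abs(U1)     # negative-sequence unbalance factor, ~1.7%
```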
D. IEEE 1159 standard of power quality

TABLE 2: IEEE STANDARD OF POWER QUALITY [1]

IV. APPLYING PSS/ADEPT IN ANALYSIS OF DISTRIBUTION SYSTEMS OF DISTRICTS 10 AND 11

A. Feeders descriptions

TABLE 3: PARAMETERS OF 3 CHOSEN FEEDERS

B. Voltage sag analysis

Fig.3. Ba Thang Hai feeder voltage sag chart.

Fig.4. Su Van Hanh feeder voltage sag chart.

Fig.5. Tri Phuong feeder voltage sag chart (phases Va, Vb, Vc; voltage from 8.450 to 8.700 on the vertical axis, node numbers 1 to 51 on the horizontal axis).
The results from PSS/ADEPT show that:
The aggregate voltage sag of the Ba Thang Hai feeder is ΔVa = 0.716%, ΔVb = ΔVc = 0.715%, corresponding to a voltage sag of 62 V on each phase; the maximum voltage deviation at every node on each phase fluctuates in the range of 13-38 V.
The total voltage sag of the Su Van Hanh feeder is ΔVa = ΔVb = ΔVc = 0.536%, corresponding to a voltage sag of 47 V on each phase; the voltage deviation at every node on each phase is equal to 0.
The total voltage sag of the Tri Phuong feeder is ΔVa = ΔVb = ΔVc = 1.53%, corresponding to a voltage sag of 133 V on each phase; the maximum voltage deviation at every node on each phase is equal to 0.
From the above results, we find that the voltage sags and voltage deviations on each feeder reported by PSS/ADEPT are very small or equal to 0. These results satisfy the voltage sag requirement of the electrical industry standard, ΔV = 5%, and are in accordance with the actual characteristics of the Phu Tho power company, for the following reasons:
The distance between nodes is short.
The feeder lines have a large cross-section, standardized at 240 mm2, with a resistance of 0.136 Ω/km.
The transformers on the medium-voltage lines are three-phase machines, so the load between the phases of each line is balanced.
Capacitors are installed on each feeder to improve the voltage.
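The percentage figures are consistent with the quoted volt values when referred to a phase voltage of about 8.7 kV. That base is an assumption here, taken from the vertical axis of Fig. 5; it is not stated explicitly in the paper.

```python
# Convert the reported sag percentages to volts on an assumed
# phase-voltage base of 8.7 kV (from the chart axis, not stated in the paper).
V_base = 8700.0  # V

for feeder, sag_pct in [("Ba Thang Hai", 0.716),
                        ("Su Van Hanh", 0.536),
                        ("Tri Phuong", 1.53)]:
    print(feeder, round(sag_pct / 100 * V_base), "V")
# Gives 62 V, 47 V and 133 V, matching the values quoted in the text.
```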
C. Short circuit analysis

TABLE 4: COMPARISON OF PSS/ADEPT RESULTS WITH THE PRACTICAL RESULTS

To compare the short-circuit results of PSS/ADEPT with reality, short-circuit data at two nodes on each feeder are chosen:
Ba Thang Hai feeder: at node C83-MC881, PSS/ADEPT reports a three-phase-to-ground value of 14.61 kA and a phase-to-ground value of 20 kA, against measured values of 14.5 kA and 19.87 kA; thus Δ3ph-g = 0.76% and Δph-g = 0.65%. At node BTH2.15.1, PSS/ADEPT reports a phase-to-ground value of 6.11 kA, against a measured 6.2 kA; thus Δph-g = -1.45%.
Su Van Hanh feeder: at node C82-MC876, PSS/ADEPT reports a three-phase-to-ground value of 13.2 kA and a phase-to-ground value of 18.1 kA, against measured values of 13.17 kA and 18.07 kA; thus Δ3ph-g = 0.23% and Δph-g = 0.17%. At node SVH2.5.2, PSS/ADEPT reports a phase-to-ground value of 7.23 kA, against a measured 7.03 kA; thus Δph-g = 2.84%.
Tri Phuong feeder: at node C81-MC871, PSS/ADEPT reports a three-phase-to-ground value of 14.62 kA and a phase-to-ground value of 20.02 kA, against measured values of 14.5 kA and 19.87 kA; thus Δ3ph-g = 0.83% and Δph-g = 0.75%. At node TP34, PSS/ADEPT reports a phase-to-ground value of 4.42 kA, against a measured 4.53 kA; thus Δph-g = -2.43%.
From the comparison above, we can see that the errors between the PSS/ADEPT results and reality are very small. Therefore, the results of PSS/ADEPT are appropriate and accurate.

V. CONCLUSION

The paper has presented the analysis of voltage sags in several distribution networks in Ho Chi Minh City. The results are acceptable. The calculation and comparison of short-circuit currents in the networks have also been performed, and the simulated and practically measured results have been shown to be consistent.

REFERENCES

[1] IEEE Standard 1159-1995, IEEE Recommended Practice for Monitoring Electric Power Quality.
[2] http://en.wikipedia.org.
[3] http://www.omniverter.com.
[4] http://www.scribd.com/doc/101480652/PSS-ADEP.
[5] Roger C. Dugan, Surya Santoso, Mark F. McGranaghan and H. Wayne Beaty, "Electrical Power Systems Quality", 2nd Edition. McGraw-Hill. ISBN 007138622X.
[6] Nguyễn Hữu Phúc, Đặng Anh Tuấn, "Giáo Trình Tập Huấn Sử Dụng Phần Mềm Phân Tích Và Tính Toán Lưới Điện PSS ADEPT" (Training Course on Using the PSS/ADEPT Grid Analysis and Calculation Software). Faculty of Electrical and Electronics Engineering, HCMUT - Power Company 2.
[7] Hồ Văn Hiến, "Thiết Kế Hệ Thống" (System Design). Vietnam National University Publishing House.
[8] Lê Kim Hùng, Đoàn Ngọc Minh Tú, "Ngắn Mạch Trong Hệ Thống Điện" (Short Circuits in Power Systems).
[9] Nguyễn Hoàng Việt, "Các Bài Toán Tính Ngắn Mạch và Bảo Vệ Rơle" (Short-Circuit Calculation and Relay Protection Problems). Vietnam National University Publishing House, HCMC.
[10] PSS ADEPT™ User Manual.
[11] Syed A. Nasar, "Theory and Problems of Electric Power Systems". McGraw-Hill. ISBN 0070459177.
[12] J. Duncan Glover, Mulukutla S. Sarma, Thomas J. Overbye, "Power System Analysis and Design", 5th Edition. Thomson Engineering. ISBN 1111425779.
[13] Ewald Fuchs, Mohammad A. S. Masoum, "Power Quality in Power Systems and Electrical Machines". Elsevier Academic Press. ISBN 9780123695369.
[14] Alexander Kusko, Marc Thompson, "Power Quality in Electrical Systems". McGraw-Hill. ISBN 9780071470759.
[15] J. B. Dixit, Amit Yadav, "Electrical Power Quality". University Science Press.
An Approach Designing SCADA Developer with Kernel Structure and XML Technology on iOS Pham Hoang Hai Quan, Nguyen Van Phu, Le Hong Hai, Truong Dinh Chau Department of Automatic Control Faculty of Electrical – Electronics Engineering Ho Chi Minh City University of Technology, Vietnam
[email protected],
[email protected],
[email protected],
[email protected]
Abstract — Supervisory Control and Data Acquisition (SCADA), well suited to distributed control systems, is increasingly used in a wide range of industries. One of the main problems when applying SCADA to real systems is the closed and complicated structure of SCADA software. Another problem is that SCADA software applications are mostly developed for Windows. This paper proposes an open engine architecture for designing SCADA applications on iOS to reduce the implementation effort. The proposed architecture offers an open and flexible solution that includes many advanced functionalities for designing and operating these applications on iOS. In particular, the proposed architecture is designed for iOS devices (iPad, iPhone) that connect to industrial devices through Wi-Fi, so system management becomes much easier and more convenient. Design patterns, a kernel structure and XML (eXtensible Markup Language) technology are used to achieve the openness of the architecture. Keywords — SCADA; hierarchical architecture; design pattern; XML.
I. INTRODUCTION
SCADA is known as the process of collecting data from sensors and instruments located at remote sites in order to monitor them and store the data on computers, and of applying commands to control a plant or equipment in industry. These systems encompass the transfer of information between one or more SCADA central host computers, a number of Remote Terminal Units (RTUs) and/or Programmable Logic Controllers (PLCs), and the operator terminals. The SCADA host is usually an industrial PC running sophisticated software. The key functions of this software are:
Information data acquisition from controllers located in the low level.
Saving the obtained data in storage
Processing of obtained information
Graphical interpretation
Receiving commands from the operator and transferring them to controllers. Event registration regarding the control process and personnel actions
Prevention or notification about events and alarms
Reporting
Data exchange with enterprise automated control
Direct automatic control of processes
With the increasing demand for mass production, productivity, quality, and safety, current industrial systems have become much larger and far more complex than those of earlier years. They need advanced SCADA software with the following features: long-term reliability, flexible operation, and reasonable cost, in order to monitor and control all of their distributed devices over widespread areas [1]. However, the existing SCADA software packages on the automation market are incredibly expensive. Most SCADA software vendors charge users by how many tags or clients they use: the larger the system is, or grows to be in the future, the more money users are forced to spend on software alone [2]. In addition, the structures of these programs are usually so complicated that it takes a huge amount of time to design, modify, and implement the system. Besides, SCADA software is mostly developed for Windows and runs on computers that connect to the systems or devices through wires, which is very cumbersome. It is therefore understandable why many small and medium applications were not equipped with SCADA; as a result, those systems are never all they could and should be. To address these limitations, this research aims to develop a new design approach to SCADA software on iOS. The first advantage of this solution is that system management becomes much easier and more convenient, because this SCADA software connects to the systems or devices through Wi-Fi. Besides, this solution is based on a hierarchical architecture and XML serialization technology, so the second, and most important, advantage of this design is that each SCADA project is saved into a single XML configuration file. This feature helps system developers modify and store the project much more easily. In particular, users can edit the configuration file using any text editor instead of specific SCADA software.
As a result, the tedious effort required in the design phases of a SCADA project is reduced significantly. Moreover, the simplicity, generality, and usability of XML increase the adaptability of SCADA projects, as well as their upgradeability to tackle future requirements. Furthermore, this SCADA software also supports an advanced function, Expression, with which users can handle many complicated mathematical expressions that are not available in the graphical design tool.

II. DESIGN PRINCIPLE
This design uses class inheritance and interfaces to handle the interconnection between its components. The overall pattern of the proposed design is shown in Fig. 1. The design can be divided into three main objects:
Kernel includes the tasks and technologies that provide the SCADA features: data acquisition, control, and database storage. It will be described more specifically in the following parts of this paper. The kernel's components are derived and implemented by the design tool and the runtime engine.
Design tool is a graphical user interface application that allows system developers to add, edit, and remove components of SCADA projects. Project creation and configuration, HMI design, data communication configuration, script source file creation and editing, etc. are all conducted in the design tool. The designed SCADA system is saved into an XML configuration file (plus script source files, if any) that will be loaded and executed by the runtime engine.
Runtime engine is the execution environment of SCADA projects. After loading the project file, the runtime engine creates and links the designed components of the SCADA system. From this point, operators can monitor and control processes at remote sites. During the runtime phase, report or trend files may be created, and pre-defined information may be stored for later use.
Fig. 2. Concept of kernel’s component.
III. KERNEL ALGORITHM
A. Concept of Kernel’s Component Fig. 2 shows a basic structure of each kernel’s component. Generally, each component inherits its base class and implements some interfaces. The access to a component is handled via its interfaces. Each component also has its own proprietary data members such as variables of other types, methods, and events to describe a specified object of SCADA systems. This structure enables us to create new objects that implicit reuse the behavior defined in other objects, which results in greater maintainability through the elimination of duplicate code. Meanwhile, interface inheritance is not vulnerable to the tight coupling that compromises the flexibility and extensibility of the structure. This is particularly convenient when developing additional components for the established system.
(Fig. 1 shows the developer working with the design tool for creating and modifying the project, HMI design, data communication configuration and expression editing, and the operator working with the runtime engine for real-time HMI, controlling, data logging, reporting and expression execution. Both are built on the kernel (devices, tasks, displays, expressions, serializer) and share a project folder holding the XML configuration file, report files and data logging files; the SCADA software connects to a database and to devices such as PLCs and RTUs.)
Fig. 1. Overall structure of the proposed design.
Fig. 3. Hierarchical structure of the kernel.
B. Hierarchical Architecture
Since SCADA systems have a hierarchical structure, the proposed solution uses a hierarchical architecture to organize the components of SCADA systems. This architecture improves not only the efficiency and manageability but also the upgradeability of the design. Moreover, it eases the serialization of the SCADA system into an XML configuration file.
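A hierarchical component tree with path-based object addressing might be sketched like this (the names are hypothetical, not the paper's API):

```python
class Node:
    """A SCADA object that may contain descendants, addressed by a path."""
    def __init__(self, name):
        self.name = name
        self.children = {}

    def add(self, child):
        self.children[child.name] = child
        return child

    def find(self, path):
        """Resolve a hierarchical path such as 'Task1/Tag3' relative to self."""
        node = self
        for part in path.split("/"):
            node = node.children[part]
        return node

system = Node("System")
task = system.add(Node("Task1"))
task.add(Node("Tag3"))
print(system.find("Task1/Tag3").name)  # -> Tag3
```

The unique path per object is also what makes serializing the whole tree into one XML file straightforward.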
Fig. 3 illustrates the hierarchical tree of the SCADA system. The proposed topology consists of multiple layers that organize and allocate the components of a SCADA system. The highest-level object, which abstracts the whole SCADA system, includes every component of the system, such as devices, tasks, displays, alarms, etc., and these objects may consist of many descendants. The communication between users and these objects, as well as their interactions, is implemented through the unique hierarchical path of each object.

Task contains the data elements of the SCADA system, called Tags. A Tag represents a single value monitored or controlled by the system. Tags can be either I/O Tags or memory Tags. An I/O Tag (external Tag) directly connects to an address or register of an I/O device to read and write data, whereas a memory Tag (internal Tag) is an internal variable of the SCADA program used to provide additional information. A memory Tag usually holds the result of logic and math operations applied to other Tags. Tags are normally stored as value-timestamp pairs: the value data and the timestamp at which it was recorded or calculated; a series of value-timestamp pairs gives the history of that Tag. The data communication process allows the SCADA system to get data from devices as well as to set the values of variables in those devices, and Tasks have a user-defined refresh rate to manage this process. The relationship between Tasks, Tags, and devices can be found in Fig. 3.
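The value-timestamp storage of a Tag can be sketched as follows (hypothetical names, not the paper's API):

```python
import time

class Tag:
    """A single monitored/controlled value with a value-timestamp history."""
    def __init__(self, name):
        self.name = name
        self.history = []           # list of (value, timestamp) pairs

    def update(self, value, timestamp=None):
        self.history.append((value, timestamp or time.time()))

    @property
    def value(self):
        return self.history[-1][0] if self.history else None

t = Tag("BoilerTemp")
t.update(71.5)
t.update(72.0)
print(t.value)         # -> 72.0
print(len(t.history))  # -> 2
```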
C. Device Connections
E. Display
The data communication network is the backbone of any SCADA system because it provides a pathway to transfer data between data acquisition devices, control units and the computers in the SCADA central host. There are several approaches for communication with devices but with this design we use Modbus TCP/IP protocol.
Display manages all data to be monitored by the operator, and all control actions requested by the operator [3]. Each Display is a window or popup window that appears on iOS device screen. Display contains a set of Display Tags that allow users to interact with the SCADA system. Display Tag associates with a specific Tag to monitor and control the value of this Tag . This connection is described in Fig. 3.
he proposed design abstracts device types to corresponding device classes. These classes implement an interface that has essential functionalities to transfer data between iOS device and I/O devices: set_address, connect, disconnect, set_value, get_value, etc. Nevertheless, the implementation of these functions might be different and depend on the type of device. The principle of device connection is shown in Fig. 4.
The Display Tag objects of the proposed design primarily based on Runtime Form Controls. Each type of control has its own set of properties, methods, and events that make it suitable for a particular purpose. For instance, label, textbox, progress bar, real-time trend, etc. are used to display Tag value. Other controls such as button, switch, slider, etc. are usually used to set Tag value. In addition, special Display Tags can switch between Displays, or even specify how an expression is executed. The advantage of using Runtime Form Controls is that users can move (drag and drop), resize, and modify many properties of these components conveniently by using the fingers. They also enhance the graphical interaction between the operator and the SCADA system at runtime. IV.
USING XML I N SCADA
XML stand for eXtensible Markup Language and is a markup language that defines a set of rules for describing data in a format that is both human-readable and machine-readable. XML is slowly finding its way into industrial automation and is replacing many of the proprietary vendor protocols [4]. This research used XML to store information about a SCADA system in a single XML configuration file. This approach can be easily implemented by using serialization methods.
Fig. 4. Approaches for device connection.
D. Task Each Task component of the proposed design handles a real-time communication process with I/O devices. This
Serialization is the process of converting an object into a stream of bytes in order to store the object or transmit it to memory, a database, or a file. Its main purpose is to save the state of an object in order to be able to recreate it when needed. The object is serialized to a stream, which carries not just the data, but information about the object's type. From that stream, it can be stored in a database, a file, or memory.
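The serialization of the hierarchy into nested XML elements can be sketched as follows, here in Python with the standard library's ElementTree rather than the Objective-C implementation; the element names and properties are hypothetical.

```python
import xml.etree.ElementTree as ET

# Serialize a SCADA-like hierarchy (dicts of properties and children)
# into nested XML elements, then deserialize it back.
def serialize(name, obj):
    elem = ET.Element(name)
    for key, value in obj.get("properties", {}).items():
        elem.set(key, str(value))
    for child_name, child in obj.get("children", {}).items():
        elem.append(serialize(child_name, child))
    return elem

def deserialize(elem):
    return {
        "properties": dict(elem.attrib),
        "children": {c.tag: deserialize(c) for c in elem},
    }

# Hypothetical system description.
system = {
    "properties": {"version": "1.0"},
    "children": {
        "device_1": {
            "properties": {"protocol": "ModbusTCP"},
            "children": {
                "Task_1": {
                    "properties": {"refresh_ms": "500"},
                    "children": {"tag_1": {"properties": {"address": "40001"}}},
                }
            },
        }
    },
}

root = serialize("SCADA", system)
xml_text = ET.tostring(root, encoding="unicode")   # the configuration file body
restored = deserialize(ET.fromstring(xml_text))    # the reverse (deserialization)
```

The same round trip, applied to the real object tree, is what saving and reopening a SCADA project amounts to.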
The structure of this configuration file can be briefly described as follows (the nesting follows the hierarchy of Fig. 3; element contents are omitted):

<SCADA>
    <device_1>
        ...
        <Task_1>
            ...
            <tag_1> ... </tag_1>
        </Task_1>
    </device_1>
    <Display_1>
        ...
    </Display_1>
    ...
</SCADA>
When a saved configuration file of a SCADA project is opened, a reverse process called deserialization is carried out to reconstruct the SCADA system from its configuration file.

V. EXPRESSION
Expression is an advanced function that allows developers to perform complex mathematical operations between tags, such as addition, subtraction, multiplication, and division (+, -, *, /), or logical AND, OR, XOR, NOT. This is very useful when users want the program to evaluate expressions between tags that are impractical to calculate by hand. The function is integrated into many Runtime objects such as buttons, text fields, images, and labels, which increases the flexibility and customization of SCADA systems.
Fig. 5. Serialize SCADA system.
The process of serializing a SCADA project to an XML file is illustrated in Fig. 5. This process traverses the SCADA system and converts the properties of each component into a system of nested XML elements. After all components of the SCADA system are serialized, the root element is added to an XML document, which can be saved as an XML file.
Fig. 6 shows how to edit an expression of a button in the design phase. After editing the expression, users can compile the SCADA software to find errors. The errors found, with their descriptions, are displayed in the alert view; users can rely on these messages to identify the problems and correct the expression. Fig. 7 illustrates how a script is executed when an operator runs a SCADA project. In this SCADA software, we defined two types of expression to enhance the performance of the SCADA system:
No Return Expression: a special expression for buttons. The result of this expression is written directly to the devices. Example: tag2 = tag2 * 3 + tag1;

Return Expression: an expression for the remaining objects. The result of this expression is used by another function of the SCADA software such as Animation or On-Off. Example: tag1 OR tag2.
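The two expression types can be illustrated with a small, safe evaluator. This is a hypothetical Python sketch (the actual software evaluates expressions in Objective-C); the tag table is invented, and bitwise operators stand in for the logical AND/OR/XOR of the text.

```python
import ast
import operator

# Hypothetical tag table mapping tag names to current values.
tags = {"tag1": 4, "tag2": 10}

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.BitOr: operator.or_, ast.BitAnd: operator.and_,
       ast.BitXor: operator.xor}

def eval_expr(node):
    # Safely evaluate +, -, *, / and AND/OR/XOR over tag values
    # by walking the parsed AST instead of calling eval().
    if isinstance(node, ast.Expression):
        return eval_expr(node.body)
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](eval_expr(node.left), eval_expr(node.right))
    if isinstance(node, ast.Name):
        return tags[node.id]
    if isinstance(node, ast.Constant):
        return node.value
    raise ValueError("unsupported expression")

def run_no_return(assignment):
    # "No Return Expression": the result is written back to a tag/device.
    target, expr = assignment.split("=", 1)
    tags[target.strip()] = eval_expr(ast.parse(expr.strip(), mode="eval"))

def run_return(expr):
    # "Return Expression": the result is handed to another function
    # (Animation, On-Off, ...).
    return eval_expr(ast.parse(expr.strip(), mode="eval"))

run_no_return("tag2 = tag2 * 3 + tag1")   # tag2 becomes 10*3 + 4 = 34
result = run_return("tag1 | tag2")        # bitwise OR of tag1 and tag2
```

Walking the AST keeps the evaluator restricted to exactly the operators listed in the text, which is why this is preferable to a raw `eval`.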
Fig. 6. Editing an expression of a button in the design phase.

Fig. 7. Expression execution at runtime.

VI. RESULTS

A new SCADA software package was built to test the functionality of the proposed architecture. The developed software was programmed in Objective-C on the iOS platform. It provides a user-friendly graphics editor with many useful design tools and allows users to build intuitive SCADA systems from predefined objects such as buttons, images, progress bars, etc. Fig. 8 shows the Runtime form and a few other forms of the developed software.

The developed software was tested with some real SCADA systems to evaluate its performance and reliability. The tested systems consisted of many kinds of devices such as PLCs, RTUs, sensors, and inverters, handling different processes such as a CIP system, elevator control (process control), and a Distributed Control System (DCS). There were some connection and compatibility problems between the devices and the software at the beginning of the testing period. However, after the necessary modifications in the device modeling were carried out, the software could connect correctly with the I/O devices in the SCADA laboratory of the Ho Chi Minh City University of Technology. More specifically, it could connect to devices using Modbus (the most common industrial standard protocol): Modicon Premium PLC, Twido PLC, Advantys OTB (Schneider Electric), S7-200 (Siemens), etc.

Fig. 8. SCADA software based on the proposed design.

VII. DISCUSSION

This design was developed on iOS devices, which provide strong per-device data security: other devices cannot access its data, so all operator data is completely isolated from the outside.
Further research could be conducted to improve the design by including other advanced functions such as a redundancy server, privileged access, and web reports. Applying this approach to design new web-based SCADA software is also under consideration.

VIII. CONCLUSION

This research presents a simple way to design an open-architecture solution for SCADA software. It can communicate properly with many kinds of I/O devices from different vendors and manage SCADA systems effectively. A SCADA project can be updated and modified easily because it is saved into a single configuration file. The expression function of this design provides users with flexibility for performing mathematical expressions in the SCADA systems. The design pattern makes it easy to add new components as well as upgrade old ones.
IX. ACKNOWLEDGEMENT

We would like to thank D. Hoang, H. Van, T. Hiep, and everyone in the lab for providing device models and for helping us test the applicability of this project.

REFERENCES

[1]
P. D. Anh and T. D. Chau, “Component-based Design for SCADA Architecture,” International Journal of Control, Automation, and Systems, vol. 8, no. 5, pp. 1141-1147, Springer, 2010.
[2] Inductive Automation, 3 Reasons SCADA Software is Going Nowhere, 2011.
[3] D. Bailey and E. Wright, Practical SCADA for Industry, pp. 12-17, Newnes, 2003.
[4] R. Fan, L. Cheded, and O. Toker, “Internet-based SCADA: a new approach using Java and XML,” Computing & Control Engineering Journal, vol. 16, no. 5, pp. 22-26, 2005.
[5] T. D. Chau, An object-oriented design approach for SCADA kernel, 2013, http://www.scadasummit.com/Event.aspx?id=832628#dr_truong_dinh_chau.
[6] D. Mark, J. Nutting and J. LaMarche, Beginning iOS 5 Development, Apress, 2011.
[7] D. Mark, J. Nutting, J. LaMarche and F. Olsson, Beginning iOS 6 Development, Apress, 2013.
Design a Self-Tuning-Regulator for DC Motor's Velocity and Position Control Tan-Khoa Nguyen Department of Automatic Control Faculty of Electrical and Electronic Engineering Ho Chi Minh City University of Technology
[email protected] Abstract — This
paper presents the design, implementation, optimization and experimental results of a Self-TuningRegulator (STR) with Recursive Least Square Estimation Algorithm and Model Reference Adaptive Control. The controller was embedded on the STM32F4 ARM -CortexM4 MCU and proved to be a better controller for arbitrary DC motor’s velocity and position in efficiency, accuracy and steadiness when compared with the non-adaptive popular counterpart PID controller. Moreover, the advanced ARM MCU also brought an advantage in reducing the hardware expense while still guaranteeing the system’s performance.
Keywords — DC motor, controller, adaptive, STR, MRAC, PID

I. INTRODUCTION

Direct current (DC) motors have been widely used in many industrial applications due to their precise, wide, simple, and continuous control characteristics. Traditionally, PID controllers are often used to control DC motors because of their simplicity and efficiency in unchanging circumstances. Nevertheless, a PID controller requires exact mathematical modeling, and the performance of the system is questionable if there is any parameter variation. Therefore, there is a need for adaptive controllers capable of adjusting rapidly to changes in process dynamics, disturbances, and noise from the environment.

In recent years, applications of the STR have become very promising, especially in systems whose parameters fluctuate or are initially uncertain, owing to its learning ability, fast adaptation, real-time process identification, and the development of high-performance, cost-effective MCUs. Before this paper, there were several studies on STR DC motor speed controllers, as in [4], [5]. However, those experimental systems required exclusive and bulky DSP controller kits, which made this method quite hard to approach for standard applications.

Consequently, this paper depicts how to implement the STR algorithm on the high-performance ARM® Cortex™-M4 STM32F4 with algorithm flowcharts, estimation algorithms, and modifications, and provides actual results in comparison with a PID controller.

II. THE STR ALGORITHM

Fig.1: The PID Controller algorithm

The PID controller output is calculated based on the errors between the set point and the motor response. This control is advantageous for its simplicity and efficiency in fixed circumstances. Its main disadvantage is that the PID controller requires an exact mathematical model; furthermore, the performance of a PID controller is questionable if there is any variation of the parameters in the mathematical model.

An alternative to PID is the STR controller. Though more complicated, the STR controller can fix the disadvantages of PID because the controller output is calculated by the model reference adaptive law, so that the actual response of the motor follows the response of a reference model, which can be created perfectly in Matlab.

Fig.2: The STR algorithm
Below are the advantages of an STR controller:
- Very promising in fluctuating or initially uncertain systems.
- The same response in any circumstances.
- Learning ability and fast adaptation.

The disadvantage of the STR controller is its complexity; a high-performance MCU like the ARM Cortex-M4 is therefore required.

TABLE 1: THE STR MODULES DESCRIPTION
Module              | Input                                                             | Output
The Estimator       | u(k), y(k)                                                        | a1, a2, a3, b1, b2, b3
The Controller      | a1m, a2m, a3m, b1m, b2m, b3m, a1, a2, a3, b1, b2, b3, y(k), uc(k) | u(k)
The Reference Model | Percentage of overshoot, settling time                            | a1m, a2m, a3m, b1m, b2m, b3m

Table 1 depicts the inputs and outputs of the three main blocks in the STR algorithm.

TABLE 3: THE ESTIMATOR PARAMETER DESCRIPTION

Parameter | Description
θ(k)      | Contains the actual motor transfer function parameters
θ(0)      | Initial value of the matrix θ(k)
θ̂(k)      | Estimated motor transfer function parameters
φ(k)      | Contains the set points and velocity/position values
ε(k)      | Estimation error generated by the estimator
K(k)      | The Kalman gain matrix
P(k)      | The covariance matrix
P(0)      | Initial value of the matrix P(k)
λ         | Forgetting factor

Table 3 explains the meaning of the parameters in the least squares estimation algorithm.
The recursive least squares update at each sampling instant k is:

ε(k) = y(k) − φᵀ(k)·θ̂(k−1)
K(k) = P(k−1)·φ(k) / (λ + φᵀ(k)·P(k−1)·φ(k))
θ̂(k) = θ̂(k−1) + K(k)·ε(k)
P(k) = (I − K(k)·φᵀ(k))·P(k−1) / λ
TABLE 2: THE STR PARAMETER DESCRIPTION

Parameter                    | Description
uc(k)                        | Set point (desired velocity/position)
u(k)                         | Controller output (motor's voltage)
y(k)                         | Motor output (current velocity/position)
a1, a2, a3, b1, b2, b3       | Actual motor transfer function parameters
a1m, a2m, a3m, b1m, b2m, b3m | Reference model transfer function parameters

Table 2 describes the meaning of each input and output declared in Table 1.

III. DESIGN OF THE STR

The STR algorithm, summarized in the block diagram below, can be embedded in any microcontroller. The faster the MCU, the shorter the controller loop, and the more efficiently the STR can perform.

A. DC motor's velocity control estimation algorithm
Step 1: Initialize the first values of θ(0) and P(0).
Step 2: Update the φ matrix.
Step 3: Determine the new K and ε.
Step 4: Compute the estimation matrix θ̂ and detach the transfer function parameters.
Step 5: Calculate the new forgetting factor λ.
Step 6: Calculate the new P matrix.
Step 7: Go back to step 2.

Fig.3: The STR algorithm block diagram
Modifications applied:

1. The dynamic forgetting factor as in [3] eliminates the windup phenomenon of the covariance matrix P, which may cause instability and unpredictable errors in the STR. Since no computer can process an infinite amount of information to obtain the exact state model of a system, the estimation method implemented is the Recursive Least Squares algorithm: only a bounded number of parameters collected in each sampling cycle is used to estimate the motor's current condition. The forgetting factor defines how many previous samples are used to compute the new estimation parameters. Consequently, depending on the circumstances, it should be adjusted automatically for better adaptation. In particular, in steady state there is almost no variation in the system; if the forgetting factor is selected as a constant, the matrix P is continuously divided by it and rapidly winds up.

2. The initialization of the parameter estimation vector as in [2]. The initial value of this vector is not critical. However, to avoid overshoot and an inverted initial controller output when the parameters are unknown, it is better to assume that the system is a single integrator with unit gain, which means a1 = -1, b1 = 1 and the other parameters are zero.

3. The large initial value of the covariance matrix P as in [2] makes the controller more responsive and adapt better in most conditions, at the cost of a highly fluctuating controller output that can sometimes wear out the actuator. Depending on the specific system, the initial value of P can be made smaller or larger.

B. DC motor's position control estimation algorithm

Step 1: Initialize the first values of θ(0) and P(0).
Step 2: Update the φ matrix.
Step 3: Determine the new K and ε.
Step 4: Compute the estimation matrix θ̂ and detach the transfer function parameters.
Step 5: Calculate the new forgetting factor λ.
Step 6: Calculate the new P matrix.
Step 7: Go back to step 2.

Modifications applied:

1. The dynamic forgetting factor as in [3].

2. The initialization of the parameter estimation vector, which can be found in [2].

3. A small initial value of the covariance matrix P to maintain the stability of the system; a large initial value causes serious vibration in steady state.

4. A system delay d was added to the position control estimation algorithm. The system delay, the time it takes for the control signal to affect the motor position, is caused by system friction, the H-bridge, and the MCU's latency. In velocity control the influence of the delay is not significant, but through the integration stage in the system model its effect on position control becomes more substantial, as in [2].
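The estimation loop (Steps 1–7, with the initialization of modification 2 and a large initial P) can be sketched for a first-order model. This is an illustrative Python sketch, not the authors' embedded C implementation; the forgetting factor is held constant here, so the dynamic-λ update of Step 5 is omitted.

```python
# RLS with a forgetting factor for a first-order model
#   y(k) = -a1*y(k-1) + b1*u(k-1),  theta = [a1, b1],  phi(k) = [-y(k-1), u(k-1)]
# Sketch only: lambda is constant (Step 5's dynamic update omitted).

def rls_estimate(us, ys, lam=0.98, p0=1000.0):
    # Step 1: initialize theta as a single integrator with unit gain
    # (a1 = -1, b1 = 1, modification 2) and a large covariance P = p0*I.
    theta = [-1.0, 1.0]
    P = [[p0, 0.0], [0.0, p0]]
    for k in range(1, len(ys)):
        # Step 2: update the regressor phi.
        phi = [-ys[k - 1], us[k - 1]]
        # Step 3: gain K = P*phi / (lam + phi'*P*phi) and estimation error eps.
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]
        eps = ys[k] - (phi[0] * theta[0] + phi[1] * theta[1])
        # Step 4: update the parameter estimates (a1, b1 are "detached" here).
        theta = [theta[0] + K[0] * eps, theta[1] + K[1] * eps]
        # Step 6: P = (I - K*phi') * P / lam  (P stays symmetric).
        P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
             [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta

# Identify a known plant y(k) = 0.9*y(k-1) + 0.5*u(k-1), i.e. a1 = -0.9, b1 = 0.5,
# from a noise-free square-wave excitation.
us = [1.0 if (k // 20) % 2 == 0 else -1.0 for k in range(200)]
ys = [0.0]
for k in range(1, 200):
    ys.append(0.9 * ys[k - 1] + 0.5 * us[k - 1])
a1, b1 = rls_estimate(us, ys)
```

With persistent excitation and no noise, the estimates converge to the true a1 and b1 within a few dozen samples, which is what the controller block then consumes.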
IV. DESIGN OF THE REFERENCE MODEL

A. DC Motor's Velocity Control Reference Model Design

Step 1: Design the continuous reference model. The percentage of overshoot (POT) should be 0%:

POT = exp(−ξπ / √(1 − ξ²)) · 100% = 0%  ⇒  ξ = 1

The settling time (Tqd) should be 0.5 to 0.1 times the controlling cycle.

Fig.5: Discrete reference model designed

Step 2: Design the continuous and discrete reference models in Matlab and calculate the a1m, a2m, b1m, b2m parameters:
G = tf([4000*4000], conv([1 4000*2 4000*4000], [1]))
Gd = c2d(G, 0.005)

B. DC Motor's Position Control Reference Model Design

This process is similar to the velocity control reference model design, but a pole is added to the denominator polynomial of the position transfer function. Nevertheless, the step response shapes of the position and velocity reference models are equivalent, because the additional pole is considerably larger than the original ones:

G = tf([4000*4000*40000], conv([1 4000*2 4000*4000], [1 40000]))
Gd = c2d(G, 0.005)

V. DESIGN OF THE CONTROLLER

A. DC motor's velocity model reference control law

Based on [1], the control law computes the controller output u(k) from the estimated motor parameters a1, a2, a3, b1, b2, b3, the reference model parameters a1m, a2m, a3m, b1m, b2m, b3m, the set point uc(k), and the motor output y(k).

B. DC motor's position model reference control law

Based on [1], the position control law is derived in the same way from the position reference model parameters.

VI. EXPERIMENTAL RESULT

Fig.6 shows the practical system used to examine the performance and stability of the STR and PID algorithms. Two DC motors with different operating voltages, encoder resolutions, and load weights were chosen to show that the STR can control any DC motor effectively.
Fig.4: Continuous reference model designed

Fig.6: Different DC motors are used to evaluate the adaptive characteristics of both controllers
A. Experimental results on the same DC motor

The STR and the PID controller were used to control the velocity of a DC motor. The evaluation criteria of this experiment are the settling time and the percentage of overshoot. As can be seen in Fig.7, the STR takes less time than the PID to control the same DC motor.
Fig.8: The STR (bottom) adapted better to parameters variations than PID controller (top)
Fig.7: The STR (bottom) gave better result than the PID Controller (top)
B. Experimental results when the motor's load changed

The load was unexpectedly made heavier; after approximately 6 seconds, the motor's load was restored to the normal weight. This created two fluctuations in the system parameters, which were used to evaluate the adaptive characteristics of both controllers.
Fig.9: The output of STR controller (bottom) was more flexible than that of PID controller (top)
C. Experimental results in precise position control

Each controller was used to control the position of a DC motor. The evaluation criteria of this experiment are the settling time, the percentage of overshoot, and the steady-state error.
Fig.13: The output of the STR controller. At first there might be a very small overshoot; then the STR adapted to the system condition and removed it.

Fig.10: The PID controller and the STR with the original estimation algorithm usually have a steady-state error, which created motor vibration.
Fig.10 to Fig.13 depict that the STR can perform exact position control of any DC motor with almost no overshoot and the fastest settling time. This algorithm offers significant advantages to students and engineers developing applications in robotics and automation.

CONCLUSION

The Self-Tuning Regulator, together with the modifications presented, not only has many practical applications in systems whose parameters fluctuate continuously or which need fast adaptation, but also serves junior students as a simple yet effective DC motor driver for developing their own projects.

ACKNOWLEDGMENT

Great thanks to Assoc. Prof. Dr. Huynh Thai Hoang, my advisor and lecturer for the graduation thesis at the university, for his invaluable dedication and suggestions.
Fig.11: The STR with the optimized estimation algorithm eliminated the steady-state error completely with any loads or motors used.
Thanks to the “Pay It Forward Research Club” and the “G-force Research Team” for their great encouragement and hardware support. Thanks to Mr. Duy-Thanh Dang for cooperating with the author to bring this algorithm to different types of ARM® Cortex™-M4 MCU.

Fig.12: The output of the PID controller, which fluctuated because of the steady-state error.

VII. REFERENCES

[1] Thai Hoang Huynh, Advanced Control Engineering Theory, Chapter 4: Adaptive Controllers, http://www4.hcmut.edu.vn/~hthoang/ltdknc/index.htm, 2008.
[2] Zoran Vukic, A Tutorial on Adaptive Control: Self-tuning Approach, University of Zagreb, Faculty of Electrical Engineering and Computing, Department of Control and Computer Engineering in Automation, 2000.
[3] Pasetha Saralak, A Self-tuning Controller, Mechanical Engineering Faculty, Chulachomlao Royal Military Academy.
[4] Nguyen Duc Thanh, Nguyen Thi Phuong Ha, Nguyen Xuan Bac, Nguyen Duc Hoang, Controlling Speed of DC Motor Using Pole Placement Self Tuning Regulator Algorithm Experimented by Using DSpace-DS1104 Control Card, Publishing House of Vietnam National University, 2009.
[5] A. A. Ghandakly, Design of an Adaptive Controller for a DC Motor within an Existing PLC Framework, Industry Applications Conference, Thirty-First IAS Annual Meeting, IAS '96, Conference Record of the 1996 IEEE, 1996.
POSTER SESSION
The 2014 FEEE Student Research Conference (FEEE-SRC 2014)
A Minutiae-Based Matching Algorithm in Fingerprint Recognition System Hai Bui-Thanh Department of Automatic Control Faculty of Electrical and Electronics Engineering
Hong-Nhat Thai-Xuan Department of Automatic Control Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology Ho Chi Minh city, Viet Nam
[email protected] Abstract — In this paper we propose a new minutiae-based matching method to match fingerprint image using similar structures. As natural distortion in mi nutiae e xtraction i ncreases false minutiae hence makes it very difficult to find a perfect match. This algorithm divides fingerprint images into two concentric ecli pse regions: inn er and outer - based on the degree of distortion. The result of this research can be applie d for many biometric applications i n future.
Ho Chi Minh City University of Technology Ho Chi Minh city, Viet Nam
[email protected]
Keywords: Distortion, minutiae-based structures, matching.

I. INTRODUCTION

Biometric recognition refers to the use of distinctive physiological (e.g. fingerprint, face, retina, and iris) or behavioral (e.g. gait, signature) characteristics, also called biometric identifiers. Biometrics offers a reliable means of authentication and greater security and convenience than traditional methods of personal recognition; these attributes cannot be easily shared or stolen.

The existing approaches for fingerprint matching are minutiae-based and correlation-based. The uniqueness of a fingerprint is due to the unique pattern shown by the locations of its irregular minutiae points: ridge endings and bifurcations. The non-linear distortion in fingerprint images makes matching very difficult to handle, as it changes the geometrical position of the minutiae points. The affected regions shift the geometry of the minutiae and hence pose a potential threat to the acceptance of a genuine match. The distortion is due to the pressure applied on the scanner, static friction, skin moisture, elasticity, and rotational effects during acquisition. If the force is not applied orthogonally to the sensor surface, elastic deformations are formed. The level of distortion increases from the center towards the outer regions; in other words, the distortion is greater towards the boundaries than at the center.

Calculating the alignment and correspondences between the minutiae points is costly and time consuming, especially when the translational and rotational parameters are large. In this paper, the algorithm uses a quick aligning stage after the extraction of the binary image. This method aligns the binary images by taking the center of the binary image, which is divided into two regions, inner and outer. It achieves a fast alignment and sets a reference that gives an excellent approximation of the alignment to be done at the later stage using the minutiae points. When the angle of rotation to be achieved is large, this method is extremely efficient and saves much time, which can be used for finding the correspondences for the alignment process.

The algorithm has two stages. In the first stage the fingerprint images are aligned after the minutiae points are extracted, and in the second stage the matching of the fingerprint images is performed.

ALGORITHM

A. Pre-alignment

The algorithm has a fast alignment stage after extracting the binary image, based on the center of the binary image. The distortion is not uniform throughout the fingerprint image: the degree of distortion in regions close to the center is less than in regions away from the center. For example, less rotational effect is observed in the central regions compared with regions away from the center. When the force applied on the scanner is not orthogonal, the central regions show a smaller amount of distortion than the others. Therefore, the algorithm separates the fingerprint image into two concentric elliptical regions, inner and outer, as shown in Fig. 1. For aligning the post-separation image, the algorithm considers only the minutiae in the inner region, as the distortion is comparatively smaller there. The diameter D of the inner ellipse is taken as: the upper D, 0.8 of the length of the fingerprint image; the lower D, 0.8 of the width of the fingerprint image. This value of D proved sufficient to isolate an average of about 19.34 minutiae points, which is enough for matching.
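The inner/outer split described above amounts to a point-in-ellipse test. The sketch below is a hypothetical Python illustration: the 0.8 factors follow the text, while the image size and minutiae coordinates are invented for the example.

```python
def in_inner_region(x, y, img_w, img_h):
    # Inner ellipse centered on the image center; axis lengths are
    # 0.8 of the fingerprint image's width and length (see text).
    cx, cy = img_w / 2.0, img_h / 2.0
    a = 0.8 * img_w / 2.0          # semi-axis along the width
    b = 0.8 * img_h / 2.0          # semi-axis along the length
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0

# Only minutiae inside the inner ellipse are used for pre-alignment.
minutiae = [(128, 180), (10, 12), (200, 300)]
inner = [m for m in minutiae if in_inner_region(m[0], m[1], 256, 360)]
```

Filtering this way before alignment is what keeps the heavily distorted boundary minutiae out of the rotation estimate.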
Figure 1. Inner – outer of fingerprint
B. Alignment
Since the distortion, which is the cause of false minutiae, is greater in the outer region than in the inner, the algorithm considers the minutiae points in the inner region to find an alignment with the template image. The algorithm aligns the minutiae points of the fingerprint image using segments. A segment is formed by any pair of two minutiae points in the same fingerprint, as shown in Figure 2.
The method that we propose to align the image is outlined below:

1. The segments are formed by using all the minutiae in the inner region of the template image and the input image.

2. Consider the minutiae pairs in the input image and in the template image, respectively.

Figure 2. Segments

In order to perform the alignment, the parameters that we used were:

- The distance d between any two minutiae points.
- The angle O formed by the segment and the minutia direction.
- The type of minutiae T at the end points of the line formed using the minutiae pair; for example, the end points have a bifurcation and a ridge ending, a pair of bifurcations, or a pair of ridge endings.
- The angle a of the line joining two minutiae points with respect to a reference line, say the X-axis.
- The distance R between a minutiae point and the center point.

Fig. 3 shows an example of these parameters.

The algorithm below illustrates the alignment process in this paper.

Algorithm 1: Alignment
  Initialization: RotAngle[] = 0, Rotd[] = 0
  procedure
  for (i = 1 : M) do
    for (j = 1 : N) do
      if (Ri == Rt) then
        if (Oi == Ot) then
          if (Ti == Tt) then
            DiffAngle = (ai − at)
            Rotd = di
            if (DiffAngle ≤ ThreshAngle) then
              RotAngle = DiffAngle
            end if
          end if
        end if
      end if
    end for
  end for
  end procedure

The value of ThreshAngle is taken as 2 degrees. This value is very small because the deviation between the input image and the template image is always small. The angle by which the image is rotated is taken as the average of the array RotAngle, called theta. The center of rotation is taken as the minutiae pair whose distance Rotd is minimum, called the reference minutiae point.
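Algorithm 1 can be sketched as follows. This is a hypothetical Python illustration using invented segment tuples; for brevity it returns the last matching pair's angle rather than averaging RotAngle over all matches as the text describes.

```python
# Each segment is summarized as (R, O, T, a, d): distance to the center,
# angle to the minutia direction, end-point types, angle to the X-axis,
# and distance between the two minutiae. Data below is illustrative only.

THRESH_ANGLE = 2.0   # degrees, as in the text

def find_rotation(input_segs, template_segs):
    rot_angle, rot_d = 0.0, None
    for Ri, Oi, Ti, ai, di in input_segs:
        for Rt, Ot, Tt, at, dt in template_segs:
            # Segments match when R, O and the end-point types agree.
            if Ri == Rt and Oi == Ot and Ti == Tt:
                diff = ai - at
                if abs(diff) <= THRESH_ANGLE:
                    rot_angle, rot_d = diff, di
    return rot_angle, rot_d

inp = [(50, 30, ("end", "bif"), 41.0, 17)]
tpl = [(50, 30, ("end", "bif"), 40.0, 17)]
angle, d = find_rotation(inp, tpl)
```

The small angle threshold is what makes the voting robust: only near-identical segments contribute to the estimated rotation.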
C. Matching
After the alignment stage, the input image's minutiae points are aligned with the template image's minutiae points by rotating around the reference minutiae point. In order to rotate the image, we use the rotation matrix around an arbitrary point (x0, y0), as shown in (1):

[x']   [cos θ  −sin θ] [x − x0]   [x0]
[y'] = [sin θ   cos θ] [y − y0] + [y0]        (1)

The result of this rotation is shown in Fig. 4.

Figure 3. The parameters of a segment.
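Equation (1) amounts to translating by the rotation center, rotating, and translating back, which can be checked with a short sketch (Python, hypothetical points):

```python
import math

def rotate_about(point, center, theta_deg):
    # Rotation about an arbitrary point, as in (1):
    # translate to the center, rotate by theta, translate back.
    x, y = point
    x0, y0 = center
    t = math.radians(theta_deg)
    xr = math.cos(t) * (x - x0) - math.sin(t) * (y - y0) + x0
    yr = math.sin(t) * (x - x0) + math.cos(t) * (y - y0) + y0
    return xr, yr

# Rotating (2, 1) by 90 degrees around (1, 1) lands at (1, 2).
p = rotate_about((2.0, 1.0), (1.0, 1.0), 90.0)
```

Applying this to every minutia of the input image, with the reference minutia as the center, produces the aligned point set used by the matching stage.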
II. RESULTS

In order to evaluate the accuracy of the algorithm, we tried it on a total of 40 distinct fingers. The algorithm was tested on a 1GHz BeagleBone Black device. It was observed that the average time taken for extracting the binary image was close to 2 seconds, and the average time taken for the matching was close to 1 second or less. We calculated two factors for estimating the quality of the algorithm: FAR (False Acceptance Rate) and FRR (False Rejection Rate). The false acceptance rate, or FAR, is the measure of the likelihood that the biometric security system will incorrectly accept an access attempt by an unauthorized user. A system's FAR is typically stated as the ratio of the number of false acceptances divided by the number of identification attempts.

Figure 4. Result of rotation image

After the rotation stage, the input image is aligned with the template. Specifically, every minutiae point of the input image will have some conformity in position and direction with a corresponding minutiae point of the template image. In the matching, the method uses isolated minutiae instead of segments. Every isolated minutia of the input image is compared with the isolated minutiae of the template image, and for every comparison the number of matching pairs is calculated. The details of the proposed matching algorithm are summarized in Algorithm 2. A match is declared when the factor f (in %) of matching pairs in the inner region exceeds 80%.

Algorithm 2: Matching
  Initialization: Score = 0
  procedure
  for (i = 1 : M) do
    for (j = 1 : N) do
      if ((ri − rt) < 3) then
        if ((oi − ot) < 30) then
          score++
        end if
      end if
    end for
  end for
  f = score / the number of minutiae in the template
  end procedure

The false rejection rate, or FRR, is the measure of the likelihood that the biometric security system will incorrectly reject an access attempt by an authorized user. A system's FRR is typically stated as the ratio of the number of false rejections divided by the number of identification attempts. The results are shown in Table 1.

Table 1. FAR and FRR in some cases.
Name                | ID | FAR  | FRR
Nguyễn Hữu Thanh    | 01 | 0/50 | 6/50
Võ Mai Duy Quý      | 19 | 0/50 | 3/50
Phan Thành Phát     | 11 | 0/50 | 7/50
Trần Thanh Hải      | 17 | 0/50 | 3/50
Thái Xuân Hồng Nhật | 02 | 0/50 | 0/50

III. CONCLUSION AND FUTURE WORK
The proposed algorithm was considerably fast and hence saves valuable time in finding the alignment and matching. Future work will be directed towards increasing the accuracy on distorted images and using neural networks.
[1] N. K. Rath a, S. Chen, an d A. K.Jain. Adaptive flow orientatio n based feat ure extraction in fingerprint images. Pattern Recognit ion , 28(11): 1657-16 72, Nov. 1 995.
[2]
[2] D. Hung. Enhancement and feature purification of finger-pr int images. Patt ern Recognition, 26(11): 1661 -1671, Nov. 1993. [3] [3] Q. Xiao and H. Raafat, 24(1 0):98 5- 992 . Fingerprint image po stp rocessing: a combined statistical and structural app roach. P attern Recognition, Oct. 1991. [4]
[4] Craig I. Watson, Michael D. Garris,Elham Tabassi, Charles L. Wilson , R. Mich ael McCabe, Stanley Jan et, Kenneth Ko .User's Guide to NIST Biomet ric Image Soft ware [5] An Algorit hm for Fingerpr int Image P ostp rocessing by Marius Tico , Pauli Kuosmanen.
[5]
[6] Opt imized Minutiae – Based Fingerprint Matching by Neeta Nain , Deepak B M, Dinesh Kumar, Manisha Baswal, and Biju Gautham [7] Fingerprint Recognition By WUZHILI (9905 0056)
[6]
High-speed Moving Object Tracking System for Inverted Pendulum
Hiep Nghiem-Hong
Department of Automatic Control, Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology, Ho Chi Minh City, Vietnam
[email protected]
Abstract — Tracking moving objects is an important direction of machine vision. This paper implements a tracking system for moving objects on an advanced ARM Cortex-M4 microcontroller using a high-speed image sensor. With an output frequency of 60Hz, this system can provide high-quality feedback for many applications in automation and control. To test the system's functionality, it was successfully integrated with a vision-based control system for balancing an inverted pendulum.
Keywords — machine vision; image processing; object detection
I. INTRODUCTION
Normally, computers are used in large systems to implement accurate and stable applications such as target recognition, face detection, gesture-based remote controls, and object tracking. In a small system, however, a microcontroller (or microprocessor) is used instead, and one has to trade the accuracy and stability of a computer-based system for the low cost and portability of a real-time embedded system. This paper presents how to build a high-speed machine vision system for moving object tracking with an output rate of 60 Hz, using one of the most advanced ARM Cortex-M4 microprocessors and the MT9V032 image sensor from Aptina. The problem to solve here was how an MCU with a 168MHz core clock and 192KB of RAM could read the image from the sensor at 60 FPS and apply the tracking algorithm to this data. The studies in this paper provide useful ideas for reliable communication between the MCU and the image sensor as well as a color-based method for tracking moving objects.
This system has also been tested with a vision-based control system for balancing the inverted pendulum, a classic model in dynamics and control theory.

II. IMAGE PROCESSING

A. Choosing color space
In image processing, there are several color spaces to represent the data of a picture depending on the purpose and application. An RGB [2] representation of the picture is suitable for display, while CMYK is useful for printing. For a vision-based application, the HSV color space [2] is the most useful due to its description of color in a way similar to how the human eye reacts to different frequencies of light. HSV stands for Hue, Saturation and Value. The color space is often visualized as a cylinder: the angle around the central vertical axis corresponds to Hue, the distance from the axis corresponds to Saturation, and the distance along the axis corresponds to the Value. With the HSV color space, one can find pixels whose colors are nearly the same as each other, and under changing lighting conditions only the range of the V value needs to be adjusted. Further theory on the HSV color space can be found in [2]. In our system, the output from the MT9V032 image sensor was in RGB format with the Bayer filter pattern as shown in [3][4]; thus, before applying the tracking algorithm, we convert the picture into the HSV color space.

B. Color space conversion
In the HSV color space, the range of the H value is between 0° and 360°. The three primary colors red, green and blue correspond to the angles 0°, 120° and 240°. The range of S and V is from 0 to 1. For the implementation on a microcontroller, the values of H, S and V should be scaled between 0 and 255. The algorithm below converts the RGB data of a pixel (also in 8-bit resolution) to the 8-bit HSV color space:

1: max = MAX(R, G, B); min = MIN(R, G, B)
2: V = max
3: if max = 0 then S = 0 else S = (max − min) × 255 / max
4: if max = min then H = 0
5: else if max = R then H = (G − B) × 42 / (max − min)
6: else if max = G then H = (B − R) × 42 / (max − min) + 85
7: else H = (R − G) × 42 / (max − min) + 170
8: if H < 0 then H = H + 255
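As a runnable check of this conversion, here is a Python sketch (a reconstruction: the 42-per-60° hue scaling and the +85/+170 offsets follow the fragments of the original listing; Python floor division stands in for the MCU's integer math):

```python
# Reconstruction sketch of the paper's 8-bit RGB-to-HSV conversion.
# H is scaled so that red=0, green=85, blue=170 (about 42 per 60 degrees).

def rgb_to_hsv8(r: int, g: int, b: int):
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0 if mx == 0 else (mx - mn) * 255 // mx
    if mx == mn:
        h = 0                              # grey pixel: hue undefined
    elif mx == r:
        h = (g - b) * 42 // (mx - mn)
    elif mx == g:
        h = (b - r) * 42 // (mx - mn) + 85
    else:
        h = (r - g) * 42 // (mx - mn) + 170
    if h < 0:
        h += 255
    return h, s, v

print(rgb_to_hsv8(255, 0, 0))   # pure red   -> (0, 255, 255)
print(rgb_to_hsv8(0, 255, 0))   # pure green -> (85, 255, 255)
print(rgb_to_hsv8(0, 0, 255))   # pure blue  -> (170, 255, 255)
```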
C. Moving object tracking
Generally, the problem of tracking moving objects in this paper poses little difficulty on a computer: with much higher speed and a larger amount of memory, a commercial CPU can perform numerous operations in a short time. In contrast, with limited resources, the embedded tracking program needs to be computationally economical to meet the specified output frequency, even on arguably the most advanced off-the-shelf MCU used in this project. This paper suggests a method to track moving objects that was successfully embedded on an ARM Cortex-M4 from STMicroelectronics.
Fig. 1 shows the actual steps of the tracking program. First, the devices are initialized for proper operation and communication. After that, the upper and lower thresholds of Hue, Saturation and Value are assigned via the user interface. If the H-S-V values of a pixel after conversion satisfy these limits, the pixel is marked as belonging to the object. After identifying all of these pixels, the average row and column indices of the marked pixels are calculated and sent to the control system of the inverted pendulum. However, this method was observed to be significantly sensitive to noise; therefore, before calculating the object position, we carry out a filtering algorithm on the image to reduce noise and enhance the accuracy of the method. Also, to further reduce unnecessary computation, we apply the detection algorithm only in a specific region of the input image. At the beginning, the object's initial position is used as the reference point. Assuming that the object does not move too fast, the current location of the object is used as the reference point in the next loop. In this way, the region that the object detection program has to process is focused around the object, and unnecessary areas are skipped. Still, the program can at times lose track and miss the object; however, a state estimator can be used in the control program of the inverted pendulum to predict the new position of the object and reduce the possibility of losing it.

Figure 1: Flow chart for Moving Object Tracking program

III. HARDWARE DESIGN

Fig. 2 shows the system's hardware block diagram. The MCU in use is an ARM Cortex-M4 STM32F407VGT6. The image data from the MT9V032 image sensor is transmitted to the MCU via the Digital Camera Interface, in conjunction with a DMA channel that automatically transfers the data to memory. This configuration assures a data rate of 60FPS for a VGA video stream. The MCU runs the object tracking program from internal flash memory at the device's maximum clock rate of 168MHz. The object's position is then transmitted to a computer and visualized by a graphical user interface. Moreover, the GUI also provides a user-friendly way to set up the H, S, V thresholds for the object detection program. These settings are immediately transferred to the MCU so that the user can manually adjust them for the desired result.

∗ Student's scientific research topic, Faculty of Electrical & Electronics Engineering, Ho Chi Minh City University of Technology.
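The threshold-and-centroid step with a search window around the last known position can be sketched like this (illustrative Python with names of our own choosing, not the paper's embedded C):

```python
# Illustrative sketch (names are ours, not the paper's embedded C): HSV
# thresholding restricted to a window around the last known position, then
# the position is the mean row/column of the marked pixels.

def track(hsv_image, lower, upper, last_pos, win=40):
    """hsv_image: 2-D grid of (h, s, v) pixels; lower/upper: HSV thresholds;
    last_pos: (row, col) of the object in the previous frame."""
    rows, cols = len(hsv_image), len(hsv_image[0])
    r0, r1 = max(0, last_pos[0] - win), min(rows, last_pos[0] + win)
    c0, c1 = max(0, last_pos[1] - win), min(cols, last_pos[1] + win)
    marked = [(r, c)
              for r in range(r0, r1) for c in range(c0, c1)
              if all(lo <= ch <= hi
                     for ch, lo, hi in zip(hsv_image[r][c], lower, upper))]
    if not marked:
        return None        # object lost: a state estimator could predict here
    return (sum(r for r, _ in marked) / len(marked),
            sum(c for _, c in marked) / len(marked))

# Hypothetical 10x10 frame with a small colored patch at rows/cols 4-5.
frame = [[(0, 0, 0) for _ in range(10)] for _ in range(10)]
for r, c in [(4, 4), (4, 5), (5, 4), (5, 5)]:
    frame[r][c] = (100, 200, 200)
print(track(frame, (90, 150, 150), (110, 255, 255), last_pos=(5, 5), win=4))
# -> (4.5, 4.5)
```

Restricting the scan to the window is what keeps the per-frame cost low enough for the 60 FPS budget described above.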
Figure 2: Hardware Block Diagram
IV. RESULT
Fig. 3 shows the completed module. The STM32F407VGT6 MCU is at the center of the STM32F4 Discovery Kit, and an MT9V032 image sensor module is mounted on the back of the Discovery board. Fig. 4 is the graphical user interface that was designed for viewing and adjusting the performance of this module. This software was built with the OpenCV library in a Linux environment for the best performance.
Figure 3: The tracking system in real view

Figure 4: Graphic user interface. (a) Raw image; (b) post-processed image; (c) control panel

V. VISION-BASED CONTROL OF AN INVERTED PENDULUM

The moving object tracking system was integrated as a feedback sensor for an inverted pendulum using vision-based control. Fig. 5 shows the general block diagram and Fig. 6 is the complete view of all the hardware parts in the pendulum system.

The components of this system include an inverted pendulum, with its control module implemented by an LQR controller embedded on TI's TM4C123GH6PM ARM Cortex-M4, and the high-speed object tracking system. The object tracking system provides feedback on the states of the pendulum and the cart to the control module. These two parts communicate with each other via a CAN interface, which decreases the error on the transmission line and increases the transfer rate. The STM32F407 is connected directly to the computer via a USB port so that the graphical user interface can be used to set the parameters of the tracking program and observe the tracking result.

Figure 5: Block diagram

Figure 6: Inverted pendulum with the moving object tracking system

VI. CONCLUSION

Due to its portability and low cost, machine vision has become more popular and can be found in numerous applications in automation and control. With the power of microprocessors becoming dramatically higher over time, replacing computer-based systems with MCU-based machine vision is a reasonable direction for many practical applications.

It should be noted that some problems of this object tracking system have not been thoroughly solved within the timeline of this paper. The problems of losing the object and of error in the acquired object position still need to be assessed further in the future.

VII. ACKNOWLEDGMENT

Many thanks to Dr. Nguyen Vinh Hao, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology, for his advice in helping me finish this project. I also want to thank all members of the PIF Club for moral and financial support throughout this project.

REFERENCES
[1] A. Rosenfeld, "Introduction to Machine Vision," IEEE Control Systems Magazine, 1985.
[2] L. Georgieva, T. Dimitrova, and N. Angelov, "RGB and HSV colour models in colour identification of digital traumas images," International Conference on Computer Systems and Technologies (CompSysTech), 2005.
[3] M. Aghagolzadeh, A. Abdolhosseini Moghadam, M. Kumar, and H. Radha, "Bayer and panchromatic color filter array demosaicing by sparse recovery," Proc. SPIE 7876, Digital Photography VII, 787603, Jan. 24, 2011.
[4] M. McGuire, "Efficient, High-Quality Bayer Demosaic Filtering on GPUs," Journal of Graphics, GPU, and Game Tools, 2008.
3D Mouse using Inertial Measurement Unit
An Nguyen
Department of Electronics, Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]

Abstract — This paper presents the design, implementation and results of a 3D mouse using an inertial measurement unit (IMU). Since the 3D mouse's functionality was successfully demonstrated with a game developed on Unity3D and Microsoft Visual Studio 2010, we believe the 3D mouse can replace a traditional mouse. This device could be used as an effective tool for oral presentations, with all basic functions such as zooming, switching and pointing. Furthermore, this application is a solid foundation for developing devices used in human behavior recognition.
Keywords — inertial measurement unit, Kalman filter, extended Kalman filter, universal asynchronous receiver/transmitter, microcontroller unit

Figure 2: System overview

I. INTRODUCTION
The 3D mouse works as a replacement for a traditional mouse. With wireless transmission, it is easy for the user to interact with. Data from the IMU is collected and transmitted via UART to the computer. The software calculates the angular information and angular acceleration on the three spatial axes to produce control values for the mouse.
Figure 1: Simulation on Unity 3D
II. HARDWARE OVERVIEW
The hardware of the 3D Mouse includes an inertial measurement unit (IMU) GY85, a Bluetooth module HC05, and the TM4C123 ARM Cortex-M4 LaunchPad from Texas Instruments. The LaunchPad reads data from the IMU and transmits it to a laptop via Bluetooth.
The Bluetooth module HC05 was used to transfer the angle and angular acceleration information to the PC at high speed via UART. Although the transmission range is limited, the data frame is guaranteed. This HC05 module could be replaced by several other wireless devices for higher demands on bit rate, data correctness and range.

III. ALGORITHM DESCRIPTION

The IMU GY85 collects the values of acceleration, angular velocity and electromagnetic amplitude and saves them to registers. The timer interrupt interval was 10ms, continuously transferring the information from the IMU to the MCU. The MCU estimates the angles of the 3 axes in space by implementing the Kalman estimation algorithm. Below is the system flowchart:
Figure 3: System flowchart
IMU GY85 will return nine values which consist of three values of acceleration, three values of angular velocity and three values for magnetic amplitude. The MCU will read this data via the I2C interface and then use these as inputs of the Kalman estimation. Outputs of the Kalman estimator will be the values of the roll, pitch and yaw angles.
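As a simplified illustration of this fusion step (a complementary blend rather than the authors' full Kalman filter; the 10 ms period matches the paper's timer interrupt, while the 0.98 weight is our assumption):

```python
# A simplified sketch of the fusion idea: a complementary blend rather than
# the authors' full Kalman filter. DT matches the paper's 10 ms timer tick;
# the 0.98 blend weight is our assumption, not a value from the paper.

from math import atan2, degrees, sqrt

DT = 0.01      # 10 ms timer interrupt period
ALPHA = 0.98   # assumed gyro/accelerometer blend weight

def accel_angles(ax, ay, az):
    """Roll and pitch (degrees) from the gravity vector alone."""
    roll = degrees(atan2(ay, az))
    pitch = degrees(atan2(-ax, sqrt(ay * ay + az * az)))
    return roll, pitch

def fuse(angle, gyro_rate, accel_angle):
    """One filter step: integrate the gyro, correct toward the accelerometer."""
    return ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_angle

roll, pitch = accel_angles(0.0, 0.0, 1.0)   # device lying flat: both near 0
```

Yaw cannot be recovered from gravity alone, which is why the paper also feeds the three magnetometer values into the estimator.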
The algorithm for the roll, pitch and yaw angles was described by matrix and linear-algebra equations and dynamic equations. The complementary filter is based on the following principle: the estimate is the one that minimizes the sum of mean squared errors from the various data sources (least-mean-squares method). The three input sources are the acceleration, angular velocity and magnetic amplitude on the three axes.

The Kalman filter (KF) is based on the Kalman estimation algorithm, which comprises two main steps: predict the value from the past, then correct it based on the measured information. This filter was first developed for orbit observation and the space navigation systems of NASA; it was later widely applied to a variety of systems such as digital image processing and control. The challenges in applying a KF are: an exact mathematical model of the system must be known in advance, along with prior knowledge of the covariance of the measurement and model noise.

There are three difficulties (from the easiest to the most complex) that an estimation algorithm for an IMU must tackle:
1. Satisfactory static response under normal conditions (small external acceleration, minor magnetic noise).
2. Good dynamic response even when external acceleration exists.
3. Good response when an external electromagnetic field interferes.

The complementary filter is fundamentally analogous to the KF in tackling problem 1. However, the extended Kalman filter (EKF) has advantages in problems 2 and 3 because it allows adjusting the individual covariance values of the model.

Figure 4: Roll, Pitch, Yaw angles

The Kalman estimation program was embedded on the ARM Cortex-M4 MCU from TI.

Figure 5: Kalman estimation

IV. EXPERIMENTAL RESULT

A terminal connected via COM2 at a baud rate of 115200 was used to read the data from the IMU.
Figure 6: Data sent from the IMU via UART
From this data, after calibration, the mouse position was calculated and controlled. However, errors still remained, and a great deal of environmental noise affected the result, showing that the Kalman filter did not work as well as expected.

V. REFERENCES
[1] D. Simon, Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches. John Wiley & Sons, 2006.
[2] E. K. P. Chong and S. H. Zak, An Introduction to Optimization. John Wiley & Sons, 2008.
[3] E. R. Bachmann et al., "Orientation Tracking for Humans and Robots Using Inertial Sensors," 1999 International Symposium on Computational Intelligence in Robotics & Automation (CIRA99), 1999.
[4] J. L. Marins et al., "An Extended Kalman Filter for Quaternion-Based Orientation Estimation Using MARG Sensors," Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Design and Implementation of Fuzzy-PID Controller for DC Motor Speed Control
Khanh-Cuong Mai-Manh
Thai-Cong Pham
Department of Automatic Control Faculty of Electrical and Electronic Engineering Ho Chi Minh City University of Technology
[email protected]
Department of Automatic Control Faculty of Electrical and Electronic Engineering Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper presents a simplified adaptive PID controller with a Fuzzy Logic algorithm to control a non-linear system. The aim of this project is to show a method for speed control of a motor with load. For coupled DC motor systems with non-linear and complex characteristics, a classical PID controller will find it difficult to achieve the desired response. Fuzzy logic control is a method by which dynamic performance and strong robustness are guaranteed. The project compares the performance of the two-motor system with classical PID and fuzzy logic.
Keywords — PID, fuzzy logic, Sugeno
I. INTRODUCTION

There are several methods for motor speed control. The most common is the PID controller [1], because it is effective and easy to program, but it is only good for linear systems or stable inputs. PID controllers constitute an important part of industrial control systems, so any improvement in PID design and implementation methodology has serious potential for industrial engineering applications. PID controllers, invented in the 1900s, are still used in more than 95% of industrial control loops. They have survived many changes in technology, from mechanics and pneumatics to microprocessors via electronic tubes, transistors and integrated circuits. Present-day PID controllers are implemented with microprocessors/microcontrollers and programmable logic control technology. However, with non-linear systems they cannot achieve the desired response; they have several weaknesses, such as sub-optimization, reduced control quality when working over a wide range or with changing inputs, and no quality assurance if the system model contains uncertain factors. By 1960, the theoretical basis of modern control had been invented and continues to develop, including fuzzy logic.

II. METHODS
A. System Description
Two DC motors are coupled; when one motor runs, the second generates power to resist the forces acting on it, and from that we create the load. The power rotates the second DC motor.
Fig 1. Coupling two DC motor
The input is updated continuously by the ultrasonic sensor; the measured distance is used to control the velocity of the first DC motor.

B. Fuzzy Logic
Fuzzy logic is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact. Compared to traditional binary sets (where variables may take on true or false values), fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false. The term "fuzzy logic" was introduced with the 1965 proposal of fuzzy set theory by Lotfi A. Zadeh. In fact, there are very complex objects with high nonlinearity that are difficult to control using conventional methods because it is hard to determine a mathematical model of the object. Human beings, with the processing power of the brain, training and accumulated experience, are able to control them without knowing their mathematical models.
Fig. 2 Model

C. The structure of the Fuzzy controller

Fig. 3 Block Diagram

Input:
- Error: the difference between the set point and the output.
- Errordot: the current Error minus the previous Error.

Output:
- The Proportional, Integral and Derivative parameters.

Pre-processing:
Normally, pre-processing stages are used to standardize the basis of the variables to [-1, 1] so that the linguistic values are easy to define. In this case, Error was divided by 60 and Errordot by 8.

Fuzzification:
The real value from the feedback output is converted into the fuzzy system.

Error (Fig. 4):
- L is a trapezoid [-20 -10 -1 -0.7]
- ZE is a triangle [-0.7 0 0.7]
- P is a trapezoid [0.7 1 10 20]

Fig. 4 Error

Errordot (Fig. 5):
- L is a trapezoid [-2 -1 -0.9 -0.8]
- ZE is a triangle [-0.8 0 0.8]
- P is a trapezoid [0.8 0.9 1 2]

Fig. 5 Errordot

Output variable (Fig. 6), with constant singleton values, e.g. for KI:
- P1 = -0.1
- P0 = -0.05
- ZE = 0
- L0 = 0.05
- L1 = 0.1

Fig. 6 Output variable

The Rules: in this case the Sugeno fuzzy rule is used, in which the conclusion is a function of the inputs. Inference method: Max-Prod.

In this project, an amplification factor of 100 was used.

A fuzzy system:
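The trapezoidal and triangular membership functions listed above can be evaluated with a small helper (an illustrative Python sketch; the breakpoints are the ones given for Error and Errordot):

```python
# Illustrative evaluation of the trapezoid/triangle membership functions,
# using the breakpoints given above for Error and Errordot.

def trapezoid(x, a, b, c, d):
    """0 outside (a, d), 1 on [b, c], linear on the two slopes."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def triangle(x, a, b, c):
    """A triangle is a trapezoid whose plateau collapses to the point b."""
    return trapezoid(x, a, b, b, c)

print(triangle(0.0, -0.7, 0.0, 0.7))         # ZE peak -> 1.0
print(trapezoid(-0.85, -20, -10, -1, -0.7))  # on L's falling slope, ~0.5
```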
If (x1 is A11) and (x2 is A21) then (y is B1)
If (x1 is A12) and (x2 is A22) then (y is B1)
If ... and ... then ...
If ... and ... then ...

Fig. 6 The result of the Max-Prod method

PID Controller:
A proportional-integral-derivative controller is a generic control-loop feedback mechanism widely used in industrial control systems. A PID controller calculates an "error" value as the difference between a measured process variable and a desired set point. The controller attempts to minimize the error by adjusting the process control inputs.

Algorithm:

u(t) = Kp e(t) + Ki ∫[0,t] e(τ) dτ + Kd de(t)/dt

Where:
- Kp: proportional gain, a tuning parameter
- Ki: integral gain, a tuning parameter
- Kd: derivative gain, a tuning parameter
- e: error
- t: time or instantaneous time (the present)
- τ: variable of integration; takes on values from time 0 to the present t

And the rule table for the fuzzy logic is:

                      Error
                  L       Z       P
Errordot    L     L1      L0      ZE
            Z     L0      ZE      P0
            P     ZE      P0      P1

Block diagram:
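The PID law can be sketched in its common discrete form (illustrative Python; the gains and sample time below are arbitrary example values, not the project's tuned parameters):

```python
# The PID control law in its common discrete form (rectangular integration,
# backward-difference derivative). Gains and sample time are arbitrary
# example values, not the project's tuned parameters.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """One control update: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
print(pid.step(1.0))   # approximately 12.005 on the first step
```

In the Fuzzy-PID scheme of this paper, the fuzzy inference would adjust kp, ki and kd online instead of keeping them fixed.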
Defuzzification stage (based on the centroid method): the fuzzy output value is converted into the real value used to control the object.

Centroid method:

y* = ∫ y μ(y) dy / ∫ μ(y) dy = Σk yk μ(yk) / Σk μ(yk)

Average of weights:

y* = (a μ(a) + b μ(b)) / (μ(a) + μ(b))

Post-processing:
An open-loop response is taken and the parameters to be improved are listed.

TABLE I. PID RESULTS

Constant   Rise time     Overshoot   Settling time   ess
Kp         Decrease      Increase    Small change    Decrease
Ki         Decrease      Increase    Increase        Eliminate
Kd         Small change  Decrease    Decrease        Small change
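The weighted-average defuzzification above can be sketched as follows (illustrative Python; the singleton values are the KI outputs listed earlier, while the firing strengths are hypothetical):

```python
# Sketch of the weighted-average defuzzification: the crisp output is the
# firing-strength-weighted mean of the singleton output values. The KI
# singletons L1..P1 listed earlier are used as the example.

def defuzzify(singletons, memberships):
    """singletons: output values y_k; memberships: firing strengths mu(y_k)."""
    num = sum(y * mu for y, mu in zip(singletons, memberships))
    den = sum(memberships)
    return num / den if den else 0.0

# KI singletons from the paper; firing strengths here are hypothetical.
ki = defuzzify([-0.1, -0.05, 0.0, 0.05, 0.1], [0, 0, 0.25, 0.75, 0])
print(ki)   # approximately 0.0375
```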
Fig. 3 Model System
Result: PID Controller
Fig. 4 Result Fuzzy PID
DISCUSSION

The strengths of the project: the fuzzy logic controller has controlled the speed of the motor, closely following the fluctuating setpoint.
Fig. 2 PID Controller
The controller tracks the motor speed with the smallest error, and when the load fluctuates, the controller still tracks it.
The weakness of the project:
When using the fuzzy logic controller, the speed still has an error of about 10~15 pulses/10ms, and tuning the controller requires experience.
Fig. 3 No Load
ACKNOWLEDGMENT

I want to express my gratitude to Mr. Nguyen Tuan An and Mr. Huynh Thai Hoang, lecturers of the Faculty of Electrical and Electronics Engineering, for their enthusiastic support during the implementation of this project.

REFERENCES
Fig. 3 Full Load
[1] PID control speed motor.
[2] http://www4.hcmut.edu.vn/~hthoang/ltdknc/
[3] http://en.wikipedia.org/wiki/PID_controller
[4] Huynh Thai Hoang, "Intelligent Control".
[5] Huynh Thai Hoang, "Basic of Automatic Control".
[6] http://en.wikipedia.org/wiki/Fuzzy_control_system
[7] Book automatic control.
[8] Google.com.vn
WiFi Controlled Tracked-Car
Huynh Trung Bac, Tran Le Duc, Truong Nguyen Minh Trung
Faculty of Electrical-Electronics Engineering
Ho Chi Minh City University of Technology
[email protected],
[email protected]
Abstract — Brushless DC (BLDC) motors are becoming an increasingly popular motor of choice for low-powered vehicles such as mopeds, power-assisted bicycles, mobility scooters, and, in this reported application, motorized mountain boards. A BLDC motor controller was developed specifically for the motorized mountain board application. A BLDC motor normally needs a sensor to estimate its rotor position in order to be controlled by a driving circuit, which directly affects the cost of the system. Sensorless control mechanisms have now been developed, which lower the cost and provide better stability. The reported system is further enhanced by several features, such as output to a camera on a smartphone and Wi-Fi control using Android. Such an efficient sensorless control mechanism is proposed in this paper.
Moreover, it can transfer more information at a faster rate compared to other wireless communication methods such as Bluetooth or radio.

II. CONSTRUCTION, OPERATING PRINCIPLE AND CONTROL
A. Brushless DC Motor Brushless motors consist of a stationary part, the stator, and a rotating part, the rotor. The space between the stator and the rotor is called the air gap. The stator carries the windings and the rotor carries the magnets. Brushless motors can have inside rotors or outside rotors. These two cases are shown in Figure 1. In either case, the stator and windings are stationary, allowing direct winding access without brushes or slip rings.
Keywords — BLDC, ZCP, Android.

I. INTRODUCTION
Brushless DC Motors (BLDCM) are rapidly gaining popularity through their use in various industries. BLDCMs do not use brushes for commutation; instead, they are commutated electronically. BLDC motors have the same characteristics as three-phase synchronous motors, which have been widely used in industrial and social electric machinery. BLDCMs have many advantages over brushed DC motors and induction motors. A few of these are:
1. Better speed versus torque characteristics
2. High dynamic response
3. High efficiency
4. Long operating life
5. Noiseless operation
6. Higher speed ranges
Since there are no brushes in a BLDC motor, a new method for controlling the motor is required. There are three types: sensorless, Hall sensor, and field-oriented control, each with different advantages and disadvantages. In this article we introduce a sensorless method, since it is low-cost and easy enough for control. Another aspect of this article is the communication between user and robot. We decided to introduce Wi-Fi control over an Android interface because it is familiar to people nowadays. The main advantage of using Wi-Fi is that it can be operated anywhere with an internet connection.
Figure 1: The rotor can be on the inside (left) or the outside (right). In either case, the stator, which contains windings, does not rotate and the rotor, which contains magnets, does.[1]
The brushless DC motor is one kind of permanent magnet synchronous motor, having permanent magnets on the rotor and a trapezoidal back EMF. The BLDC motor employs a DC power supply switched to the stator phase windings of the motor by power devices, with the switching sequence determined from the rotor position. The phase current of a BLDC motor, typically rectangular in shape, is synchronized with the back EMF to produce constant torque at a constant speed. The mechanical commutator of the brushed DC motor is replaced by electronic switches, which supply current to the motor windings as a function of the rotor position. This kind of AC motor is called a brushless DC motor, since its performance is similar to the traditional DC motor with commutators. Figure 2 shows the structure of a BLDC motor [2]. We used the method in which the zero crossing point (ZCP) of the back EMF (BEMF) is detected directly. The BEMF and current waveforms are as shown in Figure 3. For typical operation of a BLDC motor, the phase current and back EMF should be aligned to generate constant torque.
The commutation point shown in Fig. 5 can be estimated by the ZCP of back-EMFs and a 30° phase shift, using a six-step commutation scheme through a three-phase inverter for driving the BLDC motor. The conducting interval for each phase is 120 electrical degrees. Therefore, only two phases conduct current at any time, leaving the third phase floating. In order to produce maximum torque, the inverter should be commutated every 60° by detecting zero crossing of back EMF on the floating coil of the motor, so that current is in phase with the back EMF.
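The six-step scheme described above can be laid out as a table (an illustrative sketch; the step numbering and phase assignment below are a common convention assumed by us, not taken from the paper):

```python
# Illustrative six-step commutation table, consistent with the description
# above (120-degree conduction, one floating phase per step). The step
# numbering and phase assignment are a common convention assumed here,
# not taken from the paper. '+' drives high, '-' drives low, '0' floats.

COMMUTATION = {
    1: {'A': '+', 'B': '-', 'C': '0'},
    2: {'A': '+', 'B': '0', 'C': '-'},
    3: {'A': '0', 'B': '+', 'C': '-'},
    4: {'A': '-', 'B': '+', 'C': '0'},
    5: {'A': '-', 'B': '0', 'C': '+'},
    6: {'A': '0', 'B': '-', 'C': '+'},
}

def floating_phase(step: int) -> str:
    """The phase whose back-EMF zero crossing is sensed during this step."""
    return next(p for p, s in COMMUTATION[step].items() if s == '0')

print([floating_phase(s) for s in range(1, 7)])   # ['C', 'B', 'A', 'C', 'B', 'A']
```

In each step exactly two phases conduct, and the ZCP detected on the floating phase (plus the 30° shift) schedules the next 60° commutation.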
The Open Accessory API / Framework is an Android development environment that allows Android applications to transmit data in and out through the USB port; it is provided through the Google SDK. In this project we do not analyze in depth the operation of Microchip's Accessory Framework for Android, the Open Accessory API / Framework, or the USB host stack; instead, we show how to set up and use this library on a PIC24F microcontroller and in Android apps. Microchip's Accessory Framework for Android is written on top of Microchip's USB host stack; the Android Accessory host-mode driver sits as a client on the Microchip USB host stack.
Figure 2:Cross-section view of the brushless dc motor
Figure 5: Accessory Framework
Figure 3: BEMF and current of each phase.[3]
B. Robot
Figure 6: USB host mode and Accessory mode.
The library is optimized for the PIC24 and PIC32 series; in this project we use the PIC24F series, which supports full-speed (12Mbit/s in Device, Host and OTG modes) USB 2.0 OTG.
Figure 4: Real robot
III. MICROCHIP ACCESSORY FRAMEWORK FOR ANDROID

1) Microchip's Accessory Framework for Android
Microchip's Accessory Framework for Android is a library from Microchip that provides a mechanism for a PIC microcontroller to transfer data to and from an Android application via USB, using the Open Accessory API / Open Accessory
Table 1: USB hardware configuration.
2) Build a web server on Android using NanoHttpd
NanoHttpd is a free and open-source library providing a small web server for embedded applications on Android; it is written as a single *.java file. As of now it is still being developed by its users. The core features of the web server are:
- Only a single *.java file
- HTTP support
- No built-in configuration, log management or connection management
- SSL support
- Cookie support
- GET, POST and PUT support
- Single and multi-value parameters
- File upload
- No caching
- Unlimited bandwidth, request time and simultaneous connections
- Very low memory footprint
- "keep-alive" connections
3) Web interface control.
Figure 6: Web interface on laptop.
IV. RESULT AND ANALYSIS

After constructing a prototype and testing it in normal conditions, the results are positive, with good response and potential to be developed further with more applications. The working range of the robot is still short, since it depends on a stable Wi-Fi connection, but in the future a 3G communication base can be used so that the working range will be extended. The DC motors currently used can be changed to BLDC motors to improve speed and maneuverability.

V.
Figure 7: Android interface on smartphone.
The console was built as a web page, so it can run on most devices with different operating systems that have a Wi-Fi connection. Currently, the interface is only optimized for the Chrome browser on the Windows operating system (with other browsers and operating systems, the interface components may not be arranged logically). The components of the interface are: the camera view from the robot, and buttons to control the robot in manual control mode.
CONCLUSION
Wi-Fi remote control has proved to be a strong application in today's society, since you can connect to the internet almost everywhere in the world. A web-based interface can be accessed from a PC, a laptop and, especially, a smartphone, which is very popular nowadays. By using it, you can control your device from a long distance.

REFERENCES
[1] S. W. Colton, "Design and Prototyping Methods for Brushless Motors and Motor Control," May 2010, p. 8.
[2] T. J. E. Miller, Brushless Permanent-Magnet and Reluctance Motor Drives, Oxford, 1989.
[3] M. John and V. Thomas, "Modeling, Analysis and Simulation of Sensorless Control of Brushless DC Motor Based on BEMF Difference Estimation Method," vol. 2, June 2013, p. 2474.
Design and Implementation of Music Glove
Quoc-Duong Giang-Hoang
Department of Electronics
Faculty of Electrical and Electronics Engineering
Ho Chi Minh City University of Technology
[email protected]
Abstract — This paper summarizes the design and implementation of the Music Glove project. The Music Glove uses an MSP430G2553 microcontroller to play music in response to human interaction. The keyboard consists of buttons, either mechanical buttons or capacitive touch buttons. Two BoosterPacks are combined with a mainboard to create a complete music player. People may call the device a music glove, a capacitive piano keyboard, or any name that fits their interest, since our major purpose is to apply the knowledge we have learned and to create an entertainment device for everybody to enjoy after a day's work. This study also applies new knowledge of capacitive sensors, which are currently becoming more popular.

Keywords — capacitive sensor, simulation, music, tone, microcontroller, musical note.

I. INTRODUCTION

This study provides the reader with knowledge of musical sound: the theory of music and how musical notes are produced. The major purpose is to apply the learned knowledge to create an entertainment device. The project was inspired by the Yubi de Piano, a toy from Japan that can be described as a glove able to emit piano-like music without a real keyboard. The music glove in this paper is improved with more musical notes, selected with an ADC sensor. The technique here is to map each note to a frequency and have the microcontroller generate an output signal with the corresponding period to produce the right note. The design includes three major blocks: the input block has several buttons to determine which musical note is chosen, the main block has an MSP430G2553 microcontroller, and the output block has a small buzzer. Some limits still persist, such as distortion of sound due to the lack of a floating-point unit in the MCU and the limits of MIDI-like sound; nevertheless, the Music Glove is sufficient to play several simple, familiar music tracks. To design a music glove, an understanding of acoustics, the study of sound, is necessary; in particular, the relation between musical notes and their frequencies is very important and requires some research. Moreover, the solutions to some problems, especially the algorithm, are the most important part, so this paper tries to explain them meticulously. The introduction of the capacitive touch sensor should also be interesting for technology hobbyists. Finally, the knowledge of how capacitive touch sensing works and how to control it must be acquired.

A. Musical notes
This section discusses how to make a complete table of musical-note frequencies and gives an example of such a frequency table. The formula used in this study is taken from the work "Physics of Music – Notes" by B. H. Suits [1]. Table I below shows the realistic frequencies of the notes from C0 to B8, based on A4 = 440 Hz. In programming, to use these frequency values, we round each one to its nearest integer.
TABLE I. FREQUENCY OF MUSICAL NOTES (HZ), BY OCTAVE NUMBER

Note      0       1        2        3        4        5         6         7         8
C       16.35   32.70    65.41   130.81   261.63   523.25   1046.50   2093.00   4186.01
C#      17.32   34.65    69.30   138.59   277.18   554.37   1108.73   2217.46   4434.92
D       18.35   36.71    73.42   146.83   293.66   587.33   1174.66   2349.32   4698.64
D#      19.45   38.89    77.78   155.56   311.13   622.25   1244.51   2489.02   4978.03
E       20.60   41.20    82.41   164.81   329.63   659.26   1318.51   2637.02   5274.04
F       21.83   43.65    87.31   174.61   349.23   698.46   1396.91   2793.83   5587.65
F#      23.12   46.25    92.50   185.00   369.99   739.99   1479.98   2959.96   5919.91
G       24.50   49.00    98.00   196.00   392.00   783.99   1567.98   3135.96   6271.93
G#      25.96   51.91   103.83   207.65   415.30   830.61   1661.22   3322.44   6644.88
A       27.50   55.00   110.00   220.00   440.00   880.00   1760.00   3520.00   7040.00
A#      29.14   58.27   116.54   233.08   466.16   932.33   1864.66   3729.31   7458.62
B       30.87   61.74   123.47   246.94   493.88   987.77   1975.53   3951.07   7902.13
II. BASIC THEORY

Two theories are used in this project. First, the musical-note theory: the frequency of a given note, and the formula to find it, must be known. Second, the capacitive touch sensing theory: how capacitive touch sensing works and how to control it must be understood.
A musical octave spans a factor of two in frequency, and there are twelve notes per octave. Adjacent notes are therefore separated by the factor 2^(1/12), or about 1.059463.

Starting at any note with frequency f0, the frequency of another note may be calculated from it by:

    fN = f0 × (2^(1/12))^N

In this case, N is the number of notes away from the starting note; N may be positive, negative or zero. For example, starting at D (146.83 Hz), the frequency of the next higher F is:

    f3 = 146.83 × (2^(1/12))^3 ≈ 174.61 Hz

since F is three notes above D. The frequency of A in the next lower octave (N = -5) is:

    f-5 = 146.83 × (2^(1/12))^-5 ≈ 110.00 Hz
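The relation above can be sketched in a few lines of Python (illustrative; the function name is ours, not from the paper):

```python
import math

# Sketch of the note-frequency relation f_N = f_0 * (2**(1/12))**N,
# matching Table I (A4 = 440 Hz reference). Names are illustrative.
SEMITONE = 2 ** (1 / 12)  # ~1.059463

def note_freq(f0, n):
    """Frequency N semitones away from a starting frequency f0 (Hz)."""
    return f0 * SEMITONE ** n

# D (146.83 Hz) up three notes gives F; down five gives A in the lower octave.
print(round(note_freq(146.83, 3), 2))   # ~174.61 (F)
print(round(note_freq(146.83, -5), 2))  # ~110.00 (A)
```

Going up a full octave (N = 12) exactly doubles the frequency, which matches the rows of Table I.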
III. ALGORITHM

The algorithm used in this study is described below. The most important step in a project is to find a suitable algorithm and only then write the code; without an algorithm, you will not know what to do or where to begin, and you will go astray in a maze without an exit.

A. Main algorithm
The algorithm of the music glove is shown in Figure 2 below.
The equation works starting at any frequency, but remember that the N value for the starting frequency is zero.

B. Capacitive Touch Sensing
Capacitive touch sensing has become a popular replacement for mechanical buttons in recent years, largely displacing other touch technologies such as resistive touch sensing. Capacitive touch sensors are used as an add-on in this device. Capacitive touch systems in general operate on the principle that the introduction of a human finger near an electrode adds a parallel capacitance to earth ground. The electrode is also influenced by parasitic capacitances from the internal GPIO pin of the MCU and from the capacitance between the electrode's trace and its signal ground. Capacitive touch sensing is based on two major capacitance-measurement methods: the RO method (fixed gate time, variable electrode oscillation count) and the fRO method (fixed electrode oscillation count, variable gate time) [2]. In this study, the RO method is used to design the capacitive touch buttons. The RO method measures electrode capacitance by using one timer to establish a fixed window of time during which the electrode oscillates, while a second timer counts the number of oscillations that occur within that fixed gate time. When a human finger interacts with the sensor's electrode, the increase in capacitance causes the oscillation frequency to decrease. Figure 1 shows the principles of the RO method.
Figure 2. MCU Processing Flowchart
The MCU process is described by the diagram, running from top to bottom in an infinite loop until the power is turned off or the MCU is reset. First, the program defines all variables and constants, such as the buttons and tone frequencies, and declares the prototypes of the sub-functions. Then it checks the status of the "main button": if the button is pressed, the program reads the ADC value and looks up the second table to emit the corresponding note; if the button is held, the program plays the stored music. Otherwise, the program checks the status of the piano buttons to emit the selected note.

B. Capacitive touch algorithm
Figure 3 describes the capacitive touch algorithm used in this study.
Figure 1. RO Measurement Timing Diagram
When a human finger interacts with the sensor's electrode, the increase in capacitance causes the oscillation frequency to decrease, as in the RO method described above.
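The RO principle can be illustrated with a small numeric sketch (ours, not the TI reference implementation): a fixed gate time yields an oscillation count, and a touch lowers the oscillation frequency and hence the count. The frequencies, gate time, and threshold below are assumed values for illustration only.

```python
# Illustrative model of the RO capacitive-sensing method (not real firmware):
# the electrode oscillator runs at some frequency; a fixed gate time gives a
# count, and a finger touch lowers the frequency, hence the count.
def oscillation_count(freq_hz, gate_time_s):
    return int(freq_hz * gate_time_s)

def is_touched(count, baseline, threshold):
    # A touch adds capacitance, lowering frequency and therefore the count.
    return count < baseline - threshold

GATE = 0.005                                     # 5 ms gate window (assumed)
baseline = oscillation_count(1_000_000, GATE)    # untouched electrode ~1 MHz
touched = oscillation_count(850_000, GATE)       # finger lowers the frequency

print(is_touched(baseline, baseline, 200))   # untouched case
print(is_touched(touched, baseline, 200))    # touched case
```

In a real design the baseline count drifts with temperature and humidity, so firmware typically tracks it slowly rather than fixing it once.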
Below are some standard notes with their ideal frequencies and the ideal half-cycle delays in microseconds [3].
TABLE II. IDEAL PARAMETERS OF STANDARD NOTES
From the above, we know how to emit a musical note and how to calculate the interrupt time between the high and low output states.

IV. HARDWARE DESIGN
A. Overview block diagram
The input block provides the interaction between the user and the mainboard to determine which note is chosen; there are two input options: the music glove or the capacitive piano keyboard.

Figure 3. Capacitive Touch Flowchart
C. Musical frequency control
Once the frequency for a certain note has been determined, it is converted into a time period between half cycles. For example, the 220 Hz note corresponds to 1/220 of a second for a whole cycle, and 1/440 of a second for each half cycle. Doing the division gives a delay of 2.273 milliseconds per half cycle. So, to emit A (220 Hz), the MCU program can do the following:
- Drive the output pin high
- Wait for 2.273 milliseconds
- Drive the output pin low
- Wait for another 2.273 milliseconds
- Repeat as desired until the tone should no longer sound
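The half-cycle arithmetic above can be checked with a short sketch (the function name is ours):

```python
# Half-cycle delay for a square-wave tone: period = 1/f, half period = 1/(2f).
def half_cycle_ms(freq_hz):
    return 1000.0 / (2.0 * freq_hz)

# A (220 Hz): whole cycle 1/220 s, half cycle 1/440 s, i.e. ~2.273 ms.
print(round(half_cycle_ms(220), 3))
```

On the MCU, this delay would typically be loaded into a timer that toggles the buzzer pin in its interrupt handler rather than busy-waiting.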
The mainboard has an ADC sensor and a main button to receive commands from the user, and the MCU drives a buzzer to emit the selected tone. The mainboard is briefly described in the diagram of Figure 4 below:
Figure 4. System Block Diagram
B. PCB design
The PCB of the mainboard was designed with Layout Plus in OrCAD 10.5 and is shown in the figure below.
Figure 6. Bottom View
The piano button block is placed on the fingers so the buttons can be pressed comfortably. When a button is pressed, its signal is transmitted to the mainboard via a signal wire running along the finger.
Figure 5. PCB Design
The dimensions of the PCB are approximately 58 × 69 mm, as shown above. There are two ports to connect the external piano buttons. The mainboard is small, so it is comfortable for the user to wear on a hand.

C. Design of glove
The glove has five buttons, one for each finger, connected to the mainboard with wires. On the mainboard, an ADC sensor with an adjustable resistor increases or decreases the musical-note frequency, and based on it the MCU selects the note to emit. There are seven notes in the scale but only five fingers, so a modification is necessary; however, any modification can confuse the user, who must remember where each note is placed. One option considered was two extra buttons on the palm to reach seven buttons for seven notes, but this would not be comfortable for the user. Therefore, one button per finger was chosen, although this causes some problems in use. The design of this project is briefly illustrated in the figures below.
Figure 7. Top View
The mainboard has an ADC sensor and a main button to receive commands from the user, and the MCU drives the buzzer to emit the selected tone. The disadvantages of the music glove make further development important; therefore, the production of a capacitive touch BoosterPack is essential.

D. Capacitive Touch BoosterPack
The capacitive touch BoosterPack is an add-on that plugs into the mainboard to allow playing music as on a real piano. This BoosterPack is more comfortable than the glove because it has seven touchpads, corresponding to the seven musical notes.
Figure 8. Capacitive Touch BoosterPack
V. TEST & RESULT

Remember, focusing on the algorithm and the diagrams is the most important key to solving the problem. To make a successful project, the following points are offered as advice:
- Limit the amount of hardware that you will have to build or spend time customizing.
- Attempt to buy components, e.g. a microcontroller, with most of the peripherals you will need.
- Make basic design decisions and immediately build prototypes of those subsystems and interfaces.
- Don't worry about speed! Make your main focus building the project, and remember the deadline.
- Save all your code! Either move it to a different file or comment it out. You will most likely write several versions of the code, attempting several ways to solve the same problem, and you will want to record the evolution of your design.

ACKNOWLEDGEMENT
This study has referenced many sources; we thank their authors for their hard work. The list below gives the authors and documents used in this study.
Figure 9. Mainboard with Capacitive Touch BoosterPack
The "Happy Birthday" song was tested with the device, and the sound is very good.
Figure 10. Happy birthday song
VI. CONCLUSIONS & FUTURE WORK
After designing the glove, we found that this study can be expanded by combining the mainboard with capacitive touch sensors to create a completely new touch piano, like a real piano. So we decided to design a capacitive touch BoosterPack that can plug into the mainboard to create a touch piano, and then developed it. We think that in the future this study will be developed further by people who care. An effective design must meet these requirements:
1. Easy to use
2. Plays various musical notes
3. Adjustable over a wide frequency range

VII. RECOMMENDATIONS
Our overall method was to build subsystems and then integrate them. The first step is to write down your ideas on paper, then find solutions to the problems you have. After all of that, we designed the hardware and wrote the code.
REFERENCES
[1] B. H. Suits, Physics Department, Michigan Technological University, "Physics of Music – Notes", copyright 1998-2013. [Online]. Available: http://www.phy.mtu.edu/~suits/NoteFreqCalcs.html
[2] "Capacitive Touch Sensing", SLAA574, Jan 2013. [Online]. Available: http://www.ti.com/lit/an/slaa574/slaa574.pdf
[3] M. Eric Carr, "Making music with microcontrollers" and "Musical Note Frequencies", blog posts from Mar 28, 2012. [Online]. Available: http://www.paleotechnologist.net/?p=2253
Neural-Network Controller for Mobile Robot
Thanh-Hoan Nguyen
Department of Automatic Control
Ho Chi Minh City University of Technology
[email protected]
Abstract - This paper proposes a method that uses infrared sensors for path planning and obstacle avoidance, with a neural network, on a micromouse robot from the Pay It Forward Club in uncertain environments. To acquire information about the environment around the robot, infrared sensors are mounted on the front of the mobile robot. The neural network, with preprocessed input from the infrared sensor readings, informs the micromouse about the situation of the environment the robot is currently in. Then, according to the class of situation, a LabVIEW program calculates the resulting direction of the robot using the neural network. Data is transmitted between the robot and the LabVIEW computer via radio-frequency (RF) modules. The neural network is trained with the results of simulation, and the training is applied to the real object, the micromouse robot. This work was developed from a sample LabVIEW exercise on artificial neural networks (ANNs). The neural network is trained using echo state networks (ESNs). In addition, the project uses a Samsung Android phone to collect data from the accelerometer sensor and camera; this data is transmitted over Wi-Fi (TCP/IP) to LabVIEW for display and control. When the mobile robot reaches the destination, the program uses a Kinect to detect hand gestures for a simulated pick-up of objects.
Keywords - Artificial Neural Network, Mean Square Error, Echo State Network, Computational Intelligence, Machine Learning.
I. INTRODUCTION
Robot trajectory tracking has been the subject of much research and various applications around the world, and I refer to several articles by other researchers. In the report by Turki Y. Abdalla and Abdulkareem A. A., which uses the PSO-PID algorithm (ref. [3]), the two authors successfully implemented mobile robot tracking by applying statistical models to optimize the PID parameters; this PID gives relatively good results in a short time, with a small MSE value. Other authors use a neural network for predictive trajectory tracking of an autonomous two-wheeled mobile robot (Martin Seyr, Stefan Jakubek, Gregor Novak) (ref. [4]). But all of these reports have some restrictions, which the authors state as: unpredictable behavior when the input values are beyond the control domain, and a large amount of computation. In this paper, as a small piece of research, I propose a micromouse robot control algorithm using a neural network based on LabVIEW, which I developed from the works listed in the reference section. The robot is trained with a predetermined trajectory. The advantage of the method is that we do not need to specify the equations of the robot, and the robot can be adapted to different environments, meeting the goal with small errors as evaluated by coordinate and orientation values. Besides, I use some new control methods, such as a wireless RF remote-control module, combined with the simulation process.
In this topic, I present my training algorithm, which applies echo state networks (ESNs). You can refer to ESNs in the articles in references [5], [6], [7], [8]. Echo state networks were proposed as a cheap and fast architecture and supervised learning scheme, and are therefore suggested to be useful for solving real problems. The basic idea of ESNs is shared with liquid state machines (LSMs), which were developed independently of, and simultaneously with, ESNs by Wolfgang Maass (Maass W., Natschlaeger T., Markram H., 2002). ESNs are a type of recurrent neural network that is easier to train than conventional recurrent neural networks. Conventional networks require gradient-based learning algorithms, such as backpropagation through time (BPTT), and can have problems with convergence. To circumvent this problem, the recurrent connections are not trained at all in the ESN approach. Instead, the recurrent neural network is used as a reservoir of non-linear combinations of the input data, and this reservoir is used to train a simple perceptron output node with regression. The resulting network gives especially impressive results in time-series prediction.
II. CONSTRUCTION
The artificial neural network (ANN) simulation demonstrates how to design and validate control algorithms using the robotics simulator in the LabVIEW Robotics Module. The trajectory is tracked by applying an ANN control algorithm to the simulated NI Starter Kit 1.0. The trained neural network allows the robot to follow a given trajectory with small errors and enables path learning. The infrared sensors are mainly used to avoid obstacles along the way; this work does not implement a path-finding algorithm, only training. The input values are the two wheel velocities (left, right) (v_l, v_r) and the vehicle's speed and direction (v, w). The ANN output is the velocity of the robot's two wheels. The two wheel velocity values are transmitted from the robot via the RF module; robot speed and direction are calculated from the acceleration sensor on the Samsung phone. The velocity output is passed back to the robot via the RF module. At the same time, the measured values from the infrared sensor system are updated, so the robot can discover obstacles in time to change direction.
Figure 2.1 shows the control diagram: the reference trajectory P_ref feeds a velocity controller, whose output (v, w) passes through the inverse kinematic model to produce the wheel velocities for the mouse robot, which returns the current posture P.
Here P_ref is the reference or desired posture (x, y, orientation) and P is the current posture of the robot. The velocity controller outputs the required linear and angular velocities (v, w) according to the difference between P_ref and P. Then, through the inverse-kinematic model, (v, w) is converted to the velocities (v_l, v_r) of the robot's left and right wheels.
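The inverse-kinematic step can be sketched for a differential-drive robot as follows. The paper does not give the model explicitly, so the wheel-base parameter b and the standard differential-drive equations below are our assumptions:

```python
# Differential-drive inverse kinematics (sketch): convert body velocities
# (v, w) into left/right wheel linear velocities, for wheel base b (metres).
def inverse_kinematics(v, w, b):
    v_l = v - w * b / 2.0  # inner wheel slows when turning (w > 0 is CCW)
    v_r = v + w * b / 2.0
    return v_l, v_r

def forward_kinematics(v_l, v_r, b):
    # The inverse mapping, useful as a consistency check.
    return (v_l + v_r) / 2.0, (v_r - v_l) / b

v_l, v_r = inverse_kinematics(0.5, 1.0, 0.1)  # 0.5 m/s forward, 1 rad/s turn
print(v_l, v_r)
```

In the paper this mapping is what the ANN model learns from data, as an alternative to the analytical model shown here.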
This VI interpolates between positions in a path made up of (x, y) positions. "Method" specifies the interpolation method; by default, this example uses the cubic Hermite method. "path" specifies an array of (x, y) positions. "ntimes" specifies the number of interpolation locations between every pair of (x, y) elements; interpolation between elements repeats ntimes. "Trajectory" returns an array of (x, y, O, ds) values resulting from the interpolation, where O is the desired orientation at position (x, y) and ds is the distance from the previous point to the current point.
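A rough Python analogue of this VI (ours, not the LabVIEW code) is sketched below. The VI defaults to cubic Hermite interpolation; a linear scheme is used here for brevity, but the (x, y, O, ds) output shape is the same:

```python
import math

# Sketch of the trajectory-generation VI: interpolate `ntimes` points between
# consecutive (x, y) path elements and return (x, y, O, ds) tuples, where O is
# the heading toward each point and ds the distance from the previous point.
def generate_trajectory(path, ntimes):
    pts = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        for k in range(ntimes):
            t = k / ntimes
            pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    pts.append(path[-1])

    traj, prev = [], pts[0]
    for x, y in pts:
        ds = math.hypot(x - prev[0], y - prev[1])
        o = math.atan2(y - prev[1], x - prev[0]) if ds > 0 else 0.0
        traj.append((x, y, o, ds))
        prev = (x, y)
    return traj

traj = generate_trajectory([(0.0, 0.0), (1.0, 0.0)], 4)
print(len(traj))
```

Swapping in `scipy.interpolate.CubicHermiteSpline` for the inner loop would reproduce the VI's default smoothing behaviour more closely.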
Figure 2.2 Diagram of the neural network control program
Finally, (v_l, v_r) are inputs to the simulator, which applies the velocities to the mouse robot. In this example, the velocity controller uses a classical tracking algorithm to control the robot so that it reaches the reference posture smoothly. This example also provides the following two inverse kinematic models:
• Analytical function model
• Artificial neural network (ANN) model
III. MODELING & SIMULATION

The following is a preliminary description of the program and the neural network blocks.
Figure 3.1 Utility_GenerateTrajectory.vi
Figure 3.2 ANN_VirtualVehiclePosture.vi
This VI returns the reference postures (x, y, O) required to travel the trajectory. You can view the execution of this VI as a virtual vehicle travelling the trajectory at the given linear velocity. "Trajectory" is an array of (x, y, O, ds), where O is the desired orientation at position (x, y) and ds is the distance from the last point to the current point. "Linear velocity" specifies the desired linear velocity of the virtual vehicle. "dt" specifies the time interval at which to calculate the posture. "Virtual posture" returns the posture (x, y, O) of the virtual vehicle. "Velocity" specifies the real linear velocity of the virtual vehicle, and "Angular velocity" specifies its real angular velocity.
IV. MEASUREMENT AND DATA TRANSMISSION

Figure 3.3 ANN_GetPosture.vi
This VI returns the current posture information (x, y, O) of the mouse robot from the simulator. In real-world applications, you can acquire posture information from sensors. "Position" specifies the three dimensions (x, y, z) of the mouse robot's position. "Quaternion" specifies the quaternion of the mouse robot. "Vehicle status" specifies the posture (x, y, O): the position input specifies the position (x, y) and the quaternion input specifies the orientation O.
The data path is: Measurement Application → LabVIEW Web Service → Android Application, over a request/response exchange.
1. Measurement Application - performs the measurement and publishes the latest data value to a variable.
2. LabVIEW Web Service - retrieves the latest data value from the variable and returns it.
3. Android Application - periodically calls the web service and plots the latest data value.
Measurement Application: LabVIEW VIs can share data with other VIs using shared variables. In this example, we perform a single-point acquisition, plot the point to a chart, and publish it to the shared variable.
Figure 3.4 ANN_4WheeledController.vi
This VI returns the linear velocities of the left and right wheels. "ESN trained" specifies the input weight, feedback weight, internal weight, and output weight. "model type" specifies which inverse kinematic model to use; you can specify the analytical kinematic model or the ANN kinematic model. "Current posture" specifies the current posture (x, y, O) of the mouse robot, where O is the orientation. "Desired posture" specifies the reference posture (x, y, O) of the mouse robot. "vref (NaN)" specifies the reference linear velocity; if this input is not wired, the default value is NaN, which means there is no reference linear velocity input and it will be derived from the desired posture. "wref (NaN)" specifies the reference angular velocity; if this input is not wired, the default value is NaN and it will be derived from the desired posture. "dt" specifies the time interval. "wl" returns the linear velocity of the left wheels, and "wr" returns the linear velocity of the right wheels.
Some other sub-VIs are used in training the neural network:
a. Train_Artifical Neuron Network.vi - the neural network takes as input the user's ESN settings and the data from the sensors; the trained network outputs the results used for control and display.
b. Train_GenerateVelocityofLeftRightWheel.vi - generates the two-wheel speed input signals used for training the neural network.
c. Train_GetDataForTraining.vi - generates the speed and angular-velocity signals used for training the neural network.
Figure 4.1 DAQ Assistant Measurement Loop
LabVIEW Web Services allow any web-capable device to access data from your application through any web interface. Your measurement application runs continually in parallel with your web service VI. When the web service VI is called by a web client, it reads the value from the shared variable, generates an XML string, and writes an HTTP response with the data back to the web client.
Figure 4.2 Setting up Web Services in LabVIEW
Android Application
This code snippet contains the core functionality. It first sets up a connection to the URL, configures the XML parser, reads and parses the XML from the input stream, adds the data value to an array of all received data values, and plots the array of values on the chart.
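The data hand-off described above can be sketched language-neutrally in Python (the paper's Android client is Java, and the actual XML schema produced by the LabVIEW web service is not given, so the `<measurement>` format below is our assumption): the service wraps the latest measurement in XML, and the client parses it back out.

```python
import xml.etree.ElementTree as ET

# Sketch of the web-service payload round trip (illustrative schema).
def make_response(value):
    # Server side: wrap the latest shared-variable value in XML.
    root = ET.Element("measurement")
    ET.SubElement(root, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

def parse_response(xml_text):
    # Client side: extract the numeric value from the XML response.
    return float(ET.fromstring(xml_text).findtext("value"))

payload = make_response(3.14)
print(parse_response(payload))
```

The Android application would repeat this parse on every periodic HTTP poll, appending each value to its plot array.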
Step 3: Compute output weights. Compute the output weights as the linear-regression weights of the teacher outputs y(n) on the reservoir states x(n). Use these weights to create reservoir-to-output connections (dotted arrows in Figure 1). The training is now complete and the ESN is ready for use. Figure 2 shows the output signal obtained when the trained ESN was driven with the slow step input shown in the same figure.
Figure 4.3 Android application GUI
V. CONTROLLER
Echo state networks (ESNs) provide an architecture and supervised learning principle for recurrent neural networks (RNNs). The main idea is (i) to drive a random, large, fixed recurrent neural network with the input signal, thereby inducing in each neuron within this "reservoir" network a nonlinear response signal, and (ii) to combine a desired output signal as a trainable linear combination of all of these response signals. In the ESN approach, this task is solved by the following steps.
Step 1: Provide a random RNN. Create a random dynamical reservoir RNN, using any neuron model (in the frequency-generator demo example, non-spiking leaky integrator neurons were used). The reservoir size N is task-dependent.
Step 2: Harvest reservoir states. Drive the dynamical reservoir with the training data D for times n = 1, …, nmax. In the demo example, where there are output-to-reservoir feedback connections, this means writing both the input u(n) into the input unit and the teacher output y(n) into the output unit ("teacher forcing"). In tasks without output feedback, the reservoir is driven by the input u(n) only. This results in a sequence x(n) of N-dimensional reservoir states. Each component signal x(n) is a nonlinear transform of the driving input. In the demo, each x(n) is an individual mixture of both the slow step input signal and the fast output sine wave.
Figure 5.1: The basic schema of an ESN, illustrated with a tuneable frequency generator task. Solid arrows indicate fixed, random connections; dotted arrows indicate trainable connections.
This report uses the following system equations. A basic discrete-time, sigmoid-unit echo state network with N reservoir units, K inputs and L outputs is governed by the state update equation

    x(n+1) = f( W x(n) + W_in u(n+1) + W_fb y(n) ) ,   (1.1)

where x(n) is the N-dimensional reservoir state, f is a sigmoid function, W is the N × N reservoir weight matrix, W_in is the N × K input weight matrix, u(n) is the K-dimensional input signal, W_fb is the N × L output feedback matrix, and y(n) is the L-dimensional output signal. In tasks where no output feedback is required, W_fb is nulled. The extended system state z(n) = [x(n); u(n)] at time n is the concatenation of the reservoir and input states. The output is

    y(n) = g( W_out z(n) ) ,   (1.2)

where g is an output activation function (a sigmoid) and W_out is an L × (K + N)-dimensional matrix of output weights. The control program uses a neural network with ESN input data from the sensors (the two wheel speeds, vehicle speed, and vehicle direction) and the trajectory set-point matrix. The network consists of two layers with sigmoid activation functions, and the number of hidden-layer neurons can be set by the user.
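Equations (1.1)-(1.2) can be sketched as a toy echo state network in pure Python. This is our illustrative sketch, not the paper's LabVIEW implementation: there is no output feedback (W_fb = 0), the output activation g is the identity, the readout is trained by simple LMS regression rather than one-shot linear regression, and all sizes and scalings are assumed:

```python
import math
import random

# Toy ESN for eqs. (1.1)-(1.2): W_fb = 0, g = identity, LMS-trained readout.
random.seed(0)
N, K = 20, 1  # reservoir units, inputs (small, for illustration)

W = [[random.uniform(-0.2, 0.2) for _ in range(N)] for _ in range(N)]
W_in = [[random.uniform(-1.0, 1.0) for _ in range(K)] for _ in range(N)]

def step(x, u):
    # x(n+1) = tanh( W x(n) + W_in u(n+1) )              -- eq. (1.1)
    return [math.tanh(sum(W[i][j] * x[j] for j in range(N))
                      + sum(W_in[i][k] * u[k] for k in range(K)))
            for i in range(N)]

def harvest(us):
    # Drive the reservoir with the inputs; collect states z = [x; u].
    x, states = [0.0] * N, []
    for u in us:
        x = step(x, u)
        states.append(x + u)
    return states

def train_readout(states, ys, lr=0.02, epochs=300):
    # y(n) = W_out z(n)                                  -- eq. (1.2), g = id
    w = [0.0] * (N + K)
    for _ in range(epochs):
        for z, y in zip(states, ys):
            err = sum(wi * zi for wi, zi in zip(w, z)) - y
            w = [wi - lr * err * zi for wi, zi in zip(w, z)]
    return w

us = [[math.sin(0.3 * n)] for n in range(80)]  # slow input signal
ys = [u[0] * 0.5 for u in us]                  # toy teacher output
states = harvest(us)
w_out = train_readout(states, ys)
pred = [sum(wi * zi for wi, zi in zip(w_out, z)) for z in states]
mse = sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys)
print(mse)
```

Only `w_out` is trained; `W` and `W_in` stay fixed after random initialization, which is the defining property of the ESN approach described above.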
Figure 5.2 ESN_network.vi block diagram (input, state collection, desired output, train-phase switch, error in/out, output)
VI. EXPERIMENTAL RESULTS

The following is a simulation model in LabVIEW.
Figure 6.1 3D simulation
And here are the results after training the simulated robot:
Figure 6.2 Training result.
Speed of the left and right wheels: (v_l, v_r). Velocity and angular velocity of the robot: (v, w).
Step 1 - Data acquisition. Drive the robot wheels with input (v_l, v_r) and then collect the actual (v, w).
Step 2 - Training. Input (v, w) to the ANN and train the ANN to output the optimized (v_ld, v_rd).
The graph shows the fit between the optimized (v_ld, v_rd) and the original (v_l, v_r) input. Results using the neural network robotics controller:
The success of the project lies in applying neural-network control to robot path learning in LabVIEW, and in successfully collecting sensor data from the Android app into LabVIEW for the calculations.

REFERENCES
[1] National Instruments: Test, Measurement, and Embedded Systems; LabVIEW tutorial, http://www.ni.com/community/.
Figure 6.2 Result of the robot experiment.
The robot tracks the path at the velocity you specify. The posture information of the robot is (x, y, direction). At time T, the robot knows only the next point in the path rather than the entire path; this can cause an increase in error when the robot is at corners.
VII. SUMMARY
The project is limited in its algorithms, and the error results have not yet reached the desired small values using the ESN. The gripper hand controller using Kinect gesture recognition was also not completed.
[2] Dr. Huynh Thai Hoang, Textbook on Intelligent Control, HCM City University of Technology.
[3] Turki Y. Abdalla (Department of Computer Engineering, University of Basrah, Iraq), Abdulkareem A. A. (Department of Electrical Engineering, University of Basrah, Iraq), PSO-based Optimum Design of PID Controller for Mobile Robot Trajectory Tracking.
[4] Martin Seyr, Stefan Jakubek, Gregor Novak, Neural Network Predictive Trajectory Tracking of an Autonomous Two-Wheeled Mobile Robot, Institute of Mechanics and Mechatronics, Div. of Control and Process Automation, TU Vienna; Institute for Computer Technology, TU Vienna.
[5] Mantas Lukosevicius, Herbert Jaeger, Reservoir Computing Approaches to Recurrent Neural Network Training, School of Engineering and Science, Jacobs University Bremen gGmbH, P.O. Box 750 561, 28725 Bremen, Germany.
[6] Rikke Amilde Lovlid, A Novel Method for Echo State Network Training with Feedback-Error Learning, Department of Computer and Information Science, Norwegian University of Science and Technology, Sem Saelands vei 7-9, 7491 Trondheim, Norway.
[7] Jurgen Schmidhuber, Daan Wierstra, Matteo Gagliolo, Faustino Gomez, Training Recurrent Networks by Evolino, IDSIA, Galleria 2, 6928 Manno (Lugano), Switzerland; TU Munich, Boltzmannstr. 3, 85748 Garching, Munchen, Germany.
[8] Herbert Jaeger, Jacobs University Bremen, Bremen, Germany, Echo State Network, http://www.scholarpedia.org/article/Echo_state_network
[9] Dr. Robert W. McLaren, Thesis Supervisor, Developing Neural Network Applications Using LabVIEW, a thesis presented to the faculty of the Graduate School, University of Missouri-Columbia.
A Three-phase Grid-connected Photovoltaic System with Power Factor Regulation
Tien-Manh Nguyen, Minh-Huy Nguyen, Minh-Phuong Le
Faculty of Electrical & Electronic Engineering, Ho Chi Minh City University of Technology

Abstract — This paper presents the control model of a three-phase grid-connected photovoltaic generation system with a new approach to power factor regulation. The model contains a detailed representation of the main components of the system: several solar panels, a DC/DC converter, a DC link, a grid-side three-phase voltage source inverter (VSI), and output filters to reduce harmonic distortion of the line current. A control scheme including two PI controllers cooperating with MPPT is proposed to stabilize the DC voltage. The three-phase grid-connected voltage source inverter is synchronized to the grid by a robust phase-locked loop (PLL). The new approach to the power factor control scheme is also proposed. The model is simulated in the Matlab-Simulink toolbox and implemented using a DSP TMS320F2812. Simulation and experimental results show the high stability and high efficiency of this three-phase grid-connected PV system. They also prove the excellent performance of the control units, providing flexible regulation of the power factor from 0.5 to 1.
(1) DC-DC unit: a DC boost converter is used to boost the PV array voltage and track the maximum solar power. (2) DC-AC unit: the three-phase inverter with bridge topology converts DC to sinusoidal AC and supplies the load. (3) Filter circuit: the AC-side filter is composed of R-L-C and filters AC-side current harmonics to ensure the quality of the grid current.
Figure 1. Proposed control structure of three-phase grid-connected PV system
Keywords — three-phase grid-connected inverter; maximum power point tracking (MPPT); photovoltaic; solar energy.
I. INTRODUCTION
Nowadays fossil fuel is the main energy supply of the worldwide economy, but the recognition of its role as a major cause of environmental problems urges mankind to look for alternative resources for power generation [1]. The control strategies applied to distributed systems have received a high level of interest. Improving the performance of grid inverters and increasing the switching frequency and power density to meet power quality requirements have become research hotspots in recent years [2]. Much research has focused on resolving important problems of the distributed system and its control. In this paper, the authors focus on two main problems: control of the dc-link voltage and control of the load power factor. This paper also proposes an improved MPPT control algorithm for photovoltaics under rapidly changing solar radiation. Simulation results in Matlab-Simulink show that the MPP is detected quickly under changes in radiation and temperature. The simulation and experiment results show good response, high stability, and high efficiency of this three-phase grid-connected PV system when the grid voltage and load change. The main circuit block diagram of the PV grid-connected system is shown in Figure 2, including:
Figure 2. General diagram of grid-connected photovoltaic system
II. PROPOSED CONTROL STRUCTURE FOR GRID CONVERTER
The proposed control structure of the three-phase grid-connected PV system, presented in Figure 1, can be divided into two parts: MPPT control and grid-tie control. The MPPT algorithm samples Ipv and Vpv from the solar panels to keep them working at the maximum power point, while the output voltage of the DC/DC converter is left floating. The DC-link controller provides the reference current Id*, related to the power from the solar panels, to keep the DC link at a fixed DC voltage. The grid-connected controller is a current controller using the SVPWM algorithm in the synchronous dq frame. The grid-tie power factor can be adjusted by setting Iq = 0 when cos(φ) = 1, or Id/Iq = const when cos(φ) < 1.
Figure 3. Block diagram of PV DC-DC converter control part
A. The control scheme for DC/DC unit
The PV voltage control is illustrated in Figure 3; the objective is to follow the maximum power point. The PI controller keeps the PV at Vpv*, which is provided by the MPP tracking algorithm. The PI coefficients are defined based on the DC/DC converter switching frequency of 20 kHz. The DC/DC converter is a push-pull step-up converter. The pulse transformer has primary turns Np, secondary turns Ns, input voltage Vi and output voltage Vt. The LC output voltage filter model is shown in Figure 4.
Figure 4. DC/DC model
Differential equations are established from Kirchhoff's laws, giving (1)-(4), where D is the converter duty cycle:

Vt = (Ns/Np)·D·Vi   (1)

L·(diL/dt) = Vt − (r + rc)·iL − vc + rc·io   (2)

C·(dvc/dt) = iL − io   (3)

vo = rc·iL + vc − rc·io   (4)

Assuming a DC/DC pulse transformer ratio Ns/Np = 16, filter L = 2 mH, C = 220 µF, r = 0.2 Ω, rc = 0.1 Ω, the model is linear as the load current varies from 0 to 6 A, and the transfer function at a fixed output current of the DC/DC converter is:
H(s) = Vo(s)/Vi(s) = (3820·s + 1.738×10^8)/(s^2 + 30·s + 454501)   (5)

The open-loop system H has a low phase margin (17°). A PI controller G_PI(s) = (40·s + 1200)/s is designed so that the system has a high phase margin (>60°) to achieve fast response and low overshoot. The DC-link voltage is controlled according to the necessary output power; its controller output is the reference for the active current controller, and the reactive current must be imposed on the system to control the power factor.

Figure 5. Bode plot of DC/DC after adjusting

The dq control structure is normally associated with PI controllers since they behave satisfactorily when regulating dc variables. Since the controlled current has to be in phase with the grid voltage, the phase angle used by the abc-to-dq transformation module has to be extracted from the grid voltages. One possibility is filtering the grid voltages and using the arctangent function to extract the phase angle [3],[4],[5]. In addition, the phase-locked loop (PLL) technique has become the state of the art for extracting the phase angle of the grid voltages. Moreover, to control the power factor, a reactive power control loop is included in the structure, where the actual power factor (PF) is calculated using the conventional instantaneous power definition in 'abc' systems.

The incremental conductance algorithm for MPPT is described by (6)-(8) and simulated in Matlab-Simulink. The PV system includes 20 solar panels arranged in two parallel rows; each row consists of 10 panels. The rated parameters of each panel are Pn = 50 Wp, open-circuit voltage Voc = 21.42 V, and short-circuit current Isc = 3.11 A. Figure 6 shows the P-V characteristics of the system and its MPPs for different solar radiation levels (from 0.2 kW/m^2 to 1 kW/m^2), found by the proposed incremental conductance method. The derivative of power with respect to voltage is:

dP/dV = d(IV)/dV = I + V·(dI/dV)   (6)

dP/dV = 0 at the MPP, dP/dV > 0 to the left of the MPP, dP/dV < 0 to the right of the MPP   (7)

Substituting (6) into (7), we get:

dI/dV = −I/V at the MPP, dI/dV > −I/V to the left of the MPP, dI/dV < −I/V to the right of the MPP   (8)
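As a concrete illustration, the decision rule (6)-(8) can be sketched in Python. The linear I-V curve and the 0.5 V reference step below are illustrative assumptions for a self-checking sketch, not the paper's panel model; only Voc and Isc are taken from the text (a real panel's I-V curve is exponential).

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_step=0.5):
    """One incremental-conductance update: compare dI/dV with -I/V
    per equations (6)-(8) and move the voltage reference toward the MPP."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:                        # voltage unchanged: decide on dI alone
        return v_ref + dv_step if di > 0 else v_ref - dv_step if di < 0 else v_ref
    g = di / dv                        # incremental conductance dI/dV
    if abs(g + i / v) < 1e-9:
        return v_ref                   # dI/dV = -I/V: at the MPP
    return v_ref + dv_step if g > -i / v else v_ref - dv_step

# Toy linear I-V curve using the panel ratings from the text;
# the MPP of this toy curve is at Voc/2.
Voc, Isc = 21.42, 3.11
pv_current = lambda v: Isc * (1 - v / Voc)

v, v_prev = 5.0, 4.5
i_prev = pv_current(v_prev)
for _ in range(100):
    i = pv_current(v)
    v_new = inc_cond_step(v, i, v_prev, i_prev, v)
    v_prev, i_prev, v = v, i, v_new
```

After the loop the reference oscillates within one step of the toy curve's MPP (Voc/2 ≈ 10.71 V), which is the expected behavior of a fixed-step incremental-conductance tracker.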
Figure 6. P-V characteristic of the PV system (power (W) versus voltage (V))

Figure 8. Grid tie model.
Figure 7 shows the output power of the system as the solar insolation drops from 1 kW/m^2 to 0.2 kW/m^2, together with the output DC voltage of the solar system.
From the SVPWM delay model udq(t) + 1.5·Ts·u̇dq(t) = wdq(t), with sampling frequency fs = 1/Ts = 1200 Hz:

u̇dq(t) = [wdq(t) − udq(t)]/(1.5·Ts) = (fs/1.5)·[wdq(t) − udq(t)] = 800·wdq(t) − 800·udq(t)   (12)

With odq(t) ≡ udq(t) we have the state-space equations:

dod/dt = −800·od + 800·wd
doq/dt = −800·oq + 800·wq
ud = od, uq = oq   (13)

Combined with the state space of the three-phase current filter, the state space of the grid-tie system is shown below in (14).
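The delay model (12) is a plain first-order lag with time constant 1.5·Ts = 1/800 s, which can be checked with a short Euler simulation (the step size below is an arbitrary choice for the sketch):

```python
# Forward-Euler simulation of do/dt = 800*(w - o): after five time
# constants a unit step should be tracked to about 1 - e^-5.
dt, tau = 1e-6, 1 / 800
o, w, t = 0.0, 1.0, 0.0
while t < 5 * tau:
    o += dt * 800 * (w - o)
    t += dt
```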
Figure 7. Simulation of tracking process (step reduction of insolation)
Simulation results show that the system remains stable for a 40% step change of insolation. The high stability of the MPPT method also ensures the high efficiency of the system.

B. DC-Link Control
The DC voltage controller is shown in Figure 3 and used to produce the reference value for the current controller. Its aim is to keep the voltage constant on the DC side in all cases. The DC-link voltage control responds to the balance of power exchanged by the converter. A PI controller is used for the DC voltage, and its output is feed-forwarded to the output of the PF controller to obtain the references for the active current id and the reactive current iq. The controller parameters can be obtained by the "symmetrical optimum" principle.
Space vector switching is modeled by a delay transfer function G_PWM(s) = 1/(1 + 1.5·Ts·s), where wd, wq are the two (dq) inputs of the SVPWM. The system sampling frequency is 1200 Hz, r = 3 Ω, L = 12 mH, ω = 100π rad/s (50 Hz):

udq(s) = wdq(s)/(1 + 1.5·Ts·s)   (10)

udq(t) + 1.5·Ts·u̇dq(t) = wdq(t)   (11)
d/dt [id, iq, od, oq]ᵀ =
[ −250    100    83.33     0   ] [id]   [   0     0  ]        [ −83.33     0   ]
[ −100   −250     0      83.33 ] [iq] + [   0     0  ] [wd] + [    0    −83.33 ] [vd]
[   0      0   −800       0    ] [od]   [  800    0  ] [wq]   [    0       0   ] [vq]
[   0      0     0      −800   ] [oq]   [   0    800 ]        [    0       0   ]
   (14)

[yd, yq]ᵀ = [ 1 0 0 0 ; 0 1 0 0 ]·[id, iq, od, oq]ᵀ
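The DC steady state of (14) can be checked with a short Euler simulation; the signs follow (9) and (13), and the step size and 0.2 s horizon are illustrative choices:

```python
def step(x, w, v, dt):
    """One forward-Euler step of the grid-tie state space (14)."""
    id_, iq, od, oq = x
    did = -250 * id_ + 100 * iq + 83.33 * od - 83.33 * v[0]
    diq = -100 * id_ - 250 * iq + 83.33 * oq - 83.33 * v[1]
    dod = -800 * od + 800 * w[0]
    doq = -800 * oq + 800 * w[1]
    return [id_ + dt * did, iq + dt * diq, od + dt * dod, oq + dt * doq]

# Constant wd command, zero grid voltage: od settles to wd and the filter
# currents settle to the solution of the 2x2 algebraic steady state.
x, dt = [0.0, 0.0, 0.0, 0.0], 1e-5
for _ in range(20000):                 # 0.2 s, well past all time constants
    x = step(x, (1.0, 0.0), (0.0, 0.0), dt)
```

At equilibrium, od = 1 and the currents satisfy 250·id − 100·iq = 83.33 with iq = −0.4·id, i.e. id = 83.33/290.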
The state-space model of the grid-tie inverter is shown in Figure 9. The PI current controllers were chosen using loop-shaping theory: the singular value plot of the shaped loop must have high gain at low frequency (5 Hz) and low gain at high frequency (500 Hz). The singular value plot of the target system is shown in Figure 10.
C. Grid Tie and Power Factor Control
The grid-tie controller includes two loops: the inner control loop is the AC output current controller; the external control loop is the DC bus voltage controller described above. Let u, i, v be the inverter output voltage, grid-tie current and grid voltage space vectors, and let ud, uq, id, iq, vd, vq be the projections of u, i, v on the dq frame. We have:
d/dt [id, iq]ᵀ = [ −r/L   ω ; −ω   −r/L ]·[id, iq]ᵀ + (1/L)·[ud − vd, uq − vq]ᵀ   (9)

Figure 9. Grid-tie state-space model.
Figure 10: Singular values plot after adjusting
The power factor control scheme is shown in Figure 11, where the actual powers P and Q are calculated using the conventional instantaneous power definition in 'abc' systems by (4) and (5):

P = va·ia + vb·ib + vc·ic = Vd·Id   (4)

Q = (1/√3)·(vbc·ia + vca·ib + vab·ic) = Vd·Iq   (5)
III. SIMULATION OF THE PROPOSED CONTROL SCHEME
The proposed control scheme is implemented in the MATLAB/SIMULINK toolbox and shown in Figure 13. The simulation results demonstrate the excellent performance of the proposed control scheme. In this model, the DC/DC converter combined with the MPPT algorithm can boost the voltage up to 600 V for the three-phase grid-connected DC/AC inverter; the rated grid parameters are 380 V, 50 Hz. This PV system includes 20 solar panels arranged in two parallel rows; each row consists of 10 panels. The rated parameters of each panel are Pn = 50 Wp, open-circuit voltage Voc = 21.42 V, and short-circuit current Isc = 3.11 A. Simulation results, presented in Figure 13, show the system is connected to the grid at time t = 0.27 s and is always kept stable, even when radiation decreases by 40%.
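Definitions (4)-(5) can be checked on balanced sinusoidal waveforms. Note that with instantaneous phase quantities the three-phase sums carry a 3/2 factor relative to the dq amplitudes, so the power-factor ratio P/√(P² + Q²) is the quantity the control loop actually regulates; the amplitudes and angles below are illustrative:

```python
import math

def pf_abc(theta, phi, V=1.0, I=1.0):
    """P, Q and power factor from instantaneous abc quantities, per the
    conventional definitions used in (4)-(5)."""
    third = 2 * math.pi / 3
    va, vb, vc = (V * math.cos(theta + k) for k in (0.0, -third, third))
    ia, ib, ic = (I * math.cos(theta - phi + k) for k in (0.0, -third, third))
    p = va * ia + vb * ib + vc * ic                          # equation (4)
    q = ((vb - vc) * ia + (vc - va) * ib + (va - vb) * ic) / math.sqrt(3)  # (5)
    return p, q, p / math.hypot(p, q)

p, q, pf = pf_abc(theta=0.3, phi=math.pi / 3)   # 60 degree lagging current
```

For a balanced system, P and Q are independent of theta and PF equals cos(φ), matching the 0.5-1 regulation range discussed in the paper.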
Figure 11. Power factor control structure
D. Three-Phase PLL Structure
The PLL is used to determine the phase angle θ and the frequency of the grid; in this paper the conventional synchronous-reference-frame PLL is used. The block diagram of the PLL is shown in Figure 12. This structure consists of two major parts, the phase detection and the loop filter. As can be observed, a PI controller is used to reduce the error between the reference and measured values of Vq.
Figure 12. Control structure of PLL
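The synchronous-reference-frame PLL of Figure 12 can be sketched numerically: Park-transform the grid voltages, drive Vq to zero with a PI loop filter around the feed-forward frequency wff, and integrate the resulting frequency into the angle estimate. The PI gains, time step, and initial phase below are illustrative assumptions; the paper does not give its loop-filter gains.

```python
import math

wff = 100 * math.pi                  # feed-forward frequency (50 Hz grid)
kp, ki = 100.0, 2000.0               # illustrative loop-filter gains
dt, theta, integ = 1e-4, 0.0, 0.0
phi0 = 1.0                           # unknown grid phase the PLL must find
third = 2 * math.pi / 3

for n in range(5000):                # 0.5 s of simulated time
    grid = wff * n * dt + phi0       # true grid angle
    va = math.cos(grid)
    vb = math.cos(grid - third)
    vc = math.cos(grid + third)
    # Park transform: vq is the phase-detector output (zero when locked)
    vq = (2 / 3) * (-va * math.sin(theta)
                    - vb * math.sin(theta - third)
                    - vc * math.sin(theta + third))
    integ += ki * vq * dt            # loop-filter integrator
    w = wff + kp * vq + integ        # estimated angular frequency
    theta += w * dt                  # VCO: integrate frequency to angle
```

For balanced voltages, vq = sin(grid − theta), so the loop drives the angle error to zero and w settles at the grid frequency.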
Figure 13. Simulation waveforms of the proposed three-phase gridconnected PV system.
IV. HARDWARE IMPLEMENTATION USING DSP 2812
Based on the earlier theoretical analysis, the experimental system was designed and implemented on a DSP TMS320LF2812. It is installed at the Power Electronics Research Lab, Ho Chi Minh City University of Technology. Experimental results are measured using a Tektronix TDS2024B oscilloscope and a Fluke 345 Power Quality Clamp Meter. The PV system includes five 50 Wp PV panels in series and connects to a 50-120 V three-phase grid via three-phase variable transformers. Figure 14 presents photographs of the inverter and PV panels of the proposed PV system installed at the Power Electronics Research Lab, Ho Chi Minh City University of Technology, Vietnam.
Figure 16: Experimental waveforms of voltages and currents (PF = 0.5)
Figure 14: Experimental models
The experiment is implemented in the actual conditions of the Vietnamese climate at a temperature of 30°C for the following case studies.

A. Case study 1:
The power factor (PF) is controlled with a reference of 1. The waveforms of voltage and currents are shown in Figure 15.
V. CONCLUSION
The implementation of a three-phase grid-connected PV system is presented in this paper. The control approaches applied in the system can remarkably improve system stability during rapid changes of insolation. Due to the improvement in dynamic response, the DC-link voltage is kept almost constant, allowing the inverter to synchronize to the grid and stabilize the system even when the insolation is reduced by 40%. After a step change of insolation, the controller maintains the dc-link voltage and keeps operation close to the MPP. The power factor of the system can be controlled over the interval (0.5-1); the actual values of PF are very close to the references, which indicates the efficiency of the proposed control system.

REFERENCES
[1]. J. C. Schaefer, "Review of photovoltaic power plant performance and economics," IEEE Trans. Energy Convers., vol. 5, no. 2, pp. 232-238, Jun. 1990.
[2]. E. V. Solodovnik, S. Liu, and R. A. Dougal, "Power controller design for maximum power tracking in solar installations," IEEE Trans. Power Electron., vol. 19, no. 5, pp. 1295-1304, Sep. 2004.
[3]. F. Blaabjerg, R. Teodorescu, M. Liserre, and A. Timbus, "Overview of control and grid synchronization for distributed power generation systems," IEEE Transactions on Industrial Electronics, vol. 53, no. 5, pp. 1398-1409, 2006.
[4]. A. Timbus, M. Liserre, R. Teodorescu, P. Rodriguez, and F. Blaabjerg, "Linear and nonlinear control of distributed power generation systems," Proceedings of IAS'06, pp. 1015-1023, 2006.
[5]. A. Timbus, M. Liserre, R. Teodorescu, P. Rodriguez, and F. Blaabjerg, "PLL algorithm for power generation systems robust to grid voltage faults," Proceedings of PESC'06, pp. 1-7, 2006.
[6]. A. Lohner, T. Meyer, and A. Nagel, "A new panel-integratable inverter concept for grid-connected photovoltaic systems," in Proc. IEEE Int. Symp. Ind. Electron., Warsaw, Poland, vol. 2, Jun. 17-20, 1996, pp. 827-831.
Figure 15: Waveforms of voltages and currents.
B. Case study 2:
The power factor (PF) is controlled with a reference of 0.5. The waveforms of voltage and currents are shown in Figure 16.
[7]. H. Akagi, Y. Kanazawa, and A. Nabae, "Instantaneous reactive power compensators comprising switching devices without energy storage components," IEEE Trans. Ind. Appl., vol. IA-20, no. 3, pp. 625-630, May/Jun. 1984.
[8]. Wu Libo, Zhao Zhengming, and Liu Jianzheng, "A single-stage three-phase grid-connected photovoltaic system with modified MPPT method and reactive power compensation," IEEE Transactions on Energy Conversion, vol. 22, no. 4, December 2007.
The 2014 FEEE Student Research Conference (FEEE-SRC 2014)
A Modified Flood Fill Algorithm for Multi-destination Maze Solving Problem

Dinh-Huan Nguyen
Hong-Hiep Nghiem
Department of Automatic Control Faculty of Electrical and Electronics Engineering Ho Chi Minh city University of Technology Ho Chi Minh city, Vietnam
[email protected]
Department of Automatic Control Faculty of Electrical and Electronics Engineering Ho Chi Minh city University of Technology Ho Chi Minh city, Vietnam
[email protected]
Trémaux is a more advanced algorithm; it was invented by the French mathematician Charles Pierre Trémaux. It is a type of depth-first search: the robot explores as far as possible along a path until it reaches a dead end or the goal, at which point it backtracks. This algorithm is guaranteed to find a solution, but it costs time and memory, since it does not use information about the end point location and the robot has to keep a data structure representing the maze in its memory. The Trémaux flow chart is shown in Fig. 1.
Abstract — This paper discusses a method to build a robot that can solve multi-destination maze. Performances of some algorithms for the robot that has no prior knowledge of the maze were compared using a simulator and the most efficient one was chosen. We modified an open source micro mouse simulator to test three algorithms: Wall following, Trémaux and Flood fill algorithms. The Flood fill algorithm performs significantly better than the others and it can be extended to solve multi-goal problem effectively. Finally, the modified Flood fill algorithm is implemented in real hardware. The result proved the efficiency of this algorithm as well as the implementation.
Keywords — maze solving; flood fill; multi-destination.
I. INTRODUCTION
Maze solving is such a challenging and interesting problem that many maze solving contests are held annually throughout the world, for example the All Japan Micromouse Contest, the IEEE Region 6 Micromouse Competition at the University of California, San Diego, and the Singapore Robotic Games. Maze solving is a classical problem where a robot has to find a way, or even the shortest way, from a start point to an end point in the maze. Maze solving algorithms fall into two groups: algorithms for solving a given maze and those for solving an unknown maze. Some algorithms of the latter class are Wall following, Trémaux and Flood fill. In the summer of 2013 at HCMC University of Technology, the Club for Scientific Research of the Faculty of Electrical and Electronics Engineering (Pay It Forward Club) held a Micromouse Contest named Raise Your Arm. The competition applied new rules, so modifications of the classic algorithm had to be done to make the robot able to reach many endpoints in the maze. In this paper, we focus on finding an effective algorithm to solve the multi-destination problem. A robot model was also built to test the algorithm.
Fig. 1 Flow chart of Trémaux algorithm
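A minimal sketch of the Trémaux procedure in Fig. 1, on a hypothetical 4-cell maze given as an adjacency list (the real robot discovers walls with sensors rather than working on a prebuilt graph):

```python
def tremaux(adj, start, goal):
    """Depth-first exploration with recorded backtrack directions:
    advance into an unvisited neighbor, backtrack at dead ends."""
    came_from = {start: None}        # backtrack direction per visited cell
    path, cell = [start], start
    while cell != goal:
        nxt = next((n for n in adj[cell] if n not in came_from), None)
        if nxt is None:              # dead end: backtrack one cell
            cell = came_from[cell]
        else:                        # explore a new neighbor
            came_from[nxt] = cell
            cell = nxt
        path.append(cell)
    return path

# Hypothetical maze: cell 1 is a junction, cell 2 a dead end, cell 3 the goal.
maze = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
route = tremaux(maze, 0, 3)
```

The returned path includes the backtracking moves, which is exactly why Trémaux costs extra steps compared to an informed method.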
II. MAZE SOLVING ALGORITHMS
A. Traditional Algorithms
Wall following is the simplest algorithm: the robot follows the left or right wall until it reaches the goal. This algorithm does not guarantee a solution; in fact, it gives no solution if the goal is not in contact with the outer maze wall.

The flood-fill algorithm uses an identified end-point cell to number the others. The end point is marked as zero, and the farther a cell is from this point, the higher the number it is assigned. Every time the robot passes a new cell, it updates its information about the maze walls. The robot tries to go to a neighbor cell with a smaller index than its current cell; in case it cannot find one, it floods the maze again in its memory based on the updated information. The Flood Fill algorithm requires more computation than the other two and also costs memory, but it is more effective since it uses the goal's location.

To verify the three algorithms, open-source simulator software [1] was modified. We added some functions to the simulator: the start point, end point and start direction can be changed, and many endpoints and robots can be placed in the maze. The scripting function was also modified to make algorithms easier to script. The three algorithms were tested in the same maze with the same end point, placed either in the center of the maze or in a different cell, and compared by how many steps it takes to solve the maze.
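The numbering-and-descent idea can be sketched as follows. For brevity the sketch floods a fully known maze once, whereas the real robot re-floods whenever newly sensed walls invalidate the numbering; the maze layout is hypothetical.

```python
from collections import deque

def flood_fill(adj, goal):
    """Number every cell with its distance to the goal (goal = 0)."""
    dist = {goal: 0}
    q = deque([goal])
    while q:
        c = q.popleft()
        for n in adj[c]:
            if n not in dist:
                dist[n] = dist[c] + 1
                q.append(n)
    return dist

def solve(adj, start, goal):
    """Repeatedly step to the neighbor with the smallest number."""
    dist = flood_fill(adj, goal)
    path, cell = [start], start
    while cell != goal:
        cell = min(adj[cell], key=dist.get)
        path.append(cell)
    return path

maze = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
path = solve(maze, 0, 3)
```

Because the numbering encodes the distance to the goal, descending it yields a shortest route on the known maze, with no backtracking moves.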
With this maze, the mouse needs 246 steps (turn right: 28, turn left: 33, go forward: 185) to reach all four end points, compared to the 233 steps needed by the Trémaux algorithm to reach just the first goal. The simulator proves the correctness and efficiency of our proposed algorithm.
Two tests were used.

Fig. 2 Mazes used to compare the three algorithms: the maze with the endpoint in the center is on the left, and on the right is the maze with the end point in a different cell.
TABLE I. RESULT WHEN ENDPOINT IS IN THE CENTER OF THE MAZE

        Wall following    Trémaux    Flood-Fill
Steps   No solution       538        178
TABLE II. RESULT WHEN ENDPOINT IS IN A DIFFERENT CELL

        Wall following    Trémaux    Flood-Fill
Steps   73                233        49
B. Modified Flood-Fill Algorithm
The results presented in the previous section imply that the Flood-Fill algorithm is the most efficient of the three. Flood-Fill costs memory and requires more computation than the other two, but for a small maze and with the power of today's MCUs this is not a problem. Because the Flood Fill algorithm saves information about passed cells, it can be extended to solve the multi-goal problem effectively. The strategy is that when the robot reaches a goal, flood fill is applied again with the previous end point as the new start point and the next end point as the new goal. The Wall-Following and Trémaux algorithms are not effective for the multi-destination problem, since the robot keeps no information about the maze walls. Using the Flood Fill algorithm, the robot already has some knowledge of the maze when it reaches a goal, so it finds a better route to the next goal than the other algorithms. The flow chart of the proposed algorithm is given in Fig. 3. A maze with four destinations used to test the modified Flood Fill algorithm is shown in Fig. 4.
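The multi-destination strategy, reach a goal, then re-flood with the next goal as the new zero cell and the previous goal as the new start, can be sketched as below. The maze layout is hypothetical, and the sketch again treats the walls discovered so far as the whole maze.

```python
from collections import deque

def flood(adj, goal):
    """BFS numbering: distance of every cell to the current goal."""
    dist, q = {goal: 0}, deque([goal])
    while q:
        c = q.popleft()
        for n in adj[c]:
            if n not in dist:
                dist[n] = dist[c] + 1
                q.append(n)
    return dist

def multi_goal(adj, start, goals):
    """Visit every goal in order, re-flooding before each leg."""
    path, cell = [start], start
    for g in goals:
        dist = flood(adj, g)         # new numbering, current goal is zero
        while cell != g:
            cell = min(adj[cell], key=dist.get)
            path.append(cell)
    return path

maze = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
tour = multi_goal(maze, 0, [2, 3])
```

Each leg reuses the map accumulated on earlier legs, which is the advantage over Wall-Following and Trémaux noted in the text.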
Fig. 3 Flow chart of proposed algorithm to solve the multi-destination maze
Fig. 4 Maze used to test the modified Flood Fill algorithm
III. EXPERIMENT

A. Building the experimental Micromouse robot
C. Micromouse Robot Controller
The main control circuit is based on the EK-TM4C123GXL LaunchPad from Texas Instruments. The controller board includes a signal buffer transceiver module, a display module, a communication port module, an infrared emitter and detector module (one infrared emitter-detector pair to measure the distance from the left wall, one pair for the right wall, and two pairs for the front wall), a battery voltage sensing module, and a power supply module. The main power circuit includes a DC-DC boost converter module and a DC motor driver module.
PID controllers are used to control the robot's speed, position and distance from the maze walls. PID control is used because it is easy to implement and suitable for a system whose model is unknown. The velocity controller takes input from the encoder and from the outputs of the position controller and the distance controller. The position controller's inputs are the set point from the main control board and the feedback signal from the encoder. The distance controller takes input from the IR sensors and computes the input for the velocity controller.
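The position-to-velocity cascade described above can be sketched with a generic PID and a first-order motor model; the gains, motor time constant, and sample time are illustrative assumptions, not the robot's tuned values:

```python
def make_pid(kp, ki, kd):
    """Return a simple positional PID function with internal state."""
    state = {"i": 0.0, "prev": 0.0}
    def ctl(err, dt):
        state["i"] += err * dt
        d = (err - state["prev"]) / dt
        state["prev"] = err
        return kp * err + ki * state["i"] + kd * d
    return ctl

tau, dt = 0.05, 1e-3                 # motor time constant, control period
x, v = 0.0, 0.0                      # position, velocity
pos_pid = make_pid(2.0, 0.0, 0.0)    # outer loop: position -> velocity set point
vel_pid = make_pid(2.0, 20.0, 0.0)   # inner loop: velocity -> motor command

for _ in range(3000):                # 3 s, position set point = 1.0
    v_set = pos_pid(1.0 - x, dt)     # position controller output
    u = vel_pid(v_set - v, dt)       # velocity controller output
    v += dt * (u - v) / tau          # first-order motor response
    x += dt * v
```

The inner PI loop makes the motor track the commanded velocity, so the outer loop can be a slow proportional controller, the same layering the distance controller plugs into.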
Fig. 5 Robot block diagram

Fig. 6 The experimental Micromouse robot

Fig. 8 Velocity controller

Fig. 9 Position controller
B. Maze Field
The maze field has a size of 11x11 cells and each cell has a size of 20x20 cm.
Fig. 10 Distance controller
D. Real Test
We tested two algorithms, wall following and flood fill, in the maze shown in Fig. 11 with the real robot model.

Fig. 7 Maze field
IV. RESULT AND CONCLUSION

In this paper, the modified flood-fill algorithm is discussed in detail and compared with some other algorithms. It can be used effectively in the multi-destination maze. Using the wall following algorithm, the robot could not reach the first goal, while it successfully reached all 3 desired goals using the flood fill algorithm. However, there are things we need to improve in the future: IR sensors are easily affected by noise, so we need to try different sensors (ultrasonic sensors, a digital compass, …); we need to build a smaller, faster robot; and the flood fill algorithm needs improvement to reduce the amount of computation. The flood-fill algorithm has been successfully applied in the simulation software and in the real robot-and-maze model.

Fig. 11 Maze used to test the micromouse robot

ACKNOWLEDGMENT
The authors gratefully acknowledge the support of the Scientific Research Club (PIF), Faculty of Electrical and Electronics Engineering, HCMC University of Technology.

REFERENCES
[1] http://code.google.com/p/maze-solver/
[2] L. Wyard-Scott and Q.-H. M. Meng, "A potential maze solving algorithm for a micromouse robot."
[3] Tamer Mansour, "PID Control, Implementation and Tuning."
[4] Mohamed Sakr, "PID Control and Controller Tuning Techniques."
[5] Micromouse engineering design competition, http://ieee.ucsd.edu/projects/micromouse/
[6] A miniature robot controlled by a microcontroller, created by IEEE, http://www.csuchico.edu/ieee/micromouse.html
[7] http://www.astrolog.org/labyrnth/algrithm.htm#solve
[8] http://journals.analysisofalgorithms.com/
SMS Registration using Digi Connect WAN via TCP Socket

Pham Ngoc Hoa
Department of Automatic Control, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology, Ho Chi Minh City, Vietnam
[email protected]

Truong Thanh Hien
Department of Automatic Control, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology, Ho Chi Minh City, Vietnam
[email protected]

Abstract — In this paper, we propose an application that uses a computer to collect SMS data. We use the C# language with MS Visual Studio 2010 and a Digi Connect WAN to receive the SMS data. After the Digi Connect WAN receives SMS data, it sends the data to a computer, which filters and displays it. The Digi Connect WAN is a master that collects data and sends it to a slave; the computer is the slave that receives the data. The Digi interfaces with the computer via the TCP suite. On the computer, we programmed a GUI that opens/closes the port and waits for received data; the GUI also writes/reads the data to Excel for storage.

I. INTRODUCTION

In practice, registering in class is complex and time-consuming, so we introduce an application that resolves this: students can register in class very quickly via SMS from a cellphone. A device collects all SMS messages and sends them to the computer, which saves them to Excel and sends them to a database on a host. With Excel we can perform many operations such as sorting, searching, checking and counting the number of students. Because the data is also kept in the database on the host, students can conveniently check their information. This application always aims to be quick, effective and safe. There are many methods for SMS registration; here we use this model because of the safety and effectiveness of the Digi Connect WAN. The Digi is a device with many connection features: it collects many SMS messages, handles connection losses, processes quickly, and copes with many SMS messages received simultaneously. On the other hand, the Digi is complex to use, as it has many features of which SMS collection is only one application; it is also expensive and usually used in industry. The computer is the heart of the system: with software tools we collect the string sent from the Digi, filter the SMS data, store it in Excel and send it to the database. C# is a simple and convenient programming language that provides many tools to process data and to connect to the device via a TCP socket; MS Visual Studio 2010 also lets us work with Excel, which the administrator can use as a report for statistics and data processing. We use the database as a place to store and process data on the network: we use a host that provides many tools to support database connections. Many hosts support database management systems that let us work with our database, and we can also deploy a website to the host; the host gives us an address that everyone can connect to, and MS Visual Studio 2010 provides many tools for working with databases. This application proved reliable and effective when applied in class.

II. CONNECTION MODEL
Figure 1. Connection model

The Digi is a device that collects SMS messages and sends them to the PC. The Digi uses a Viettel SIM with a phone number, and every student registers by writing an SMS with the required syntax; the SMS is then sent to the master for handling via the computer. Note: the Digi must be configured before use because it has many different features.

A. Digi Connect WAN
Digi Connect WAN is a wireless WAN gateway. It provides high-performance Ethernet-to-wireless communications through cellular GSM (Global System for Mobile communication) or CDMA (Code Division Multiple Access) networks for primary and backup connectivity to remote locations. It uses General Packet Radio Service (GPRS)/Enhanced Data Rates for GSM Evolution (EDGE) to offer an easy and cost-effective means of connecting virtually any remote location into the corporate IP network. It is ideal for use where wired networks (for example, leased line/frame relay, CSU/DSU, fractional T1) are not feasible or where alternative network connections are required.
Benefits of wireless communications through Digi Connect WAN include instant deployment, elimination of wiring costs and problems due to wire breaks, the ability to traverse firewalls, and the ability to move the connection virtually anywhere. B. Socket A socket is the mechanism that most popular operating systems provide to give programs access to the network. It allows messages to be sent and received between applications (unrelated processes) on different networked machines. The sockets mechanism has been created to be independent of any specific type of network. IP, however, is by far the most dominant network and the most popular use of sockets. This application provides an introduction to using sockets over the IP network (IPv4). The TCP Socket API offers a whole API to open and use a TCP connection. This allows app makers to implement any protocol available on top of TCP such as IMAP, IRC, POP, HTTP, etc., or even build their own to sustain any specific needs they could have.
Figure 2. Layout of TCP/IP
C. TCP/IP
TCP and IP were developed by a Department of Defense (DOD) research project to connect a number of different networks designed by different vendors into a network of networks. It was initially successful because it delivered a few basic services that everyone needs (file transfer, electronic mail, remote logon) across a very large number of client and server systems. Several computers in a small department can use TCP/IP (along with other protocols) on a single LAN. The IP component provides routing from the department to the enterprise network, then to regional networks, and finally to the global Internet. On the battlefield a communications network will sustain damage, so the DOD designed TCP/IP to be robust and automatically recover from any node or phone line failure. This design allows the construction of very large networks with less central management. However, because of the automatic recovery, network problems can go undiagnosed and uncorrected for long periods of time. As with all other communications protocols, TCP/IP is composed of layers. First, IP is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations; the organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world. Second, TCP is responsible for verifying the correct delivery of data from client to server. Data can be lost in the intermediate network; TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received. Sockets is the name given to the package of subroutines that provide access to TCP/IP on most systems.
145
We use SQL database which collect data and permit us manage data. Database management systems (DBMS) are specially designed applications that interact with the user, other applications, and the database itself to capture and analyze data. With DBMS is software which permit create a table, query and administration of databases. Three many DBMS software such as MySQL, MariaDB, PostgreSQL, SQLite, Microsoft SQL Server, Oracle, SAP, dBASE, FoxPro, IBM DB2, LibreOffice Base and FileMaker Pro. Microsoft SQL Server is the most popular in this. Microsoft SQL Server is a DBMS which developed Microsoft. As a database, it is a software product whose primary function is to store and retrieve data as requested by other software applications, be it those on the same computer or those running on another computer across a network (including the Internet).
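To illustrate how the database stores collected SMS records, the following sketch creates a table and inserts rows, rejecting duplicates. The paper uses Microsoft SQL Server on a hosted server; Python's built-in sqlite3 stands in here so the example is self-contained, and the table and column names are illustrative assumptions rather than the project's real schema.

```python
import sqlite3

# In-memory database as a stand-in for the hosted MS SQL Server instance.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE sms_log (
           id        INTEGER PRIMARY KEY,
           phone     TEXT NOT NULL,
           body      TEXT NOT NULL,
           received  TEXT NOT NULL,
           UNIQUE (phone, body)   -- reject duplicate registrations
       )"""
)

def save_sms(phone: str, body: str, received: str) -> bool:
    """Insert one SMS record; return False if it duplicates an earlier one."""
    try:
        with conn:   # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO sms_log (phone, body, received) VALUES (?, ?, ?)",
                (phone, body, received),
            )
        return True
    except sqlite3.IntegrityError:
        return False

print(save_sms("+84901234567", "REG CS101", "2014-01-18 09:00"))  # True
print(save_sms("+84901234567", "REG CS101", "2014-01-18 09:05"))  # False (duplicate)
```

A UNIQUE constraint lets the database itself enforce the duplicate check that the flowchart in Section III performs against Excel.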
Figure 3. Database Object

Here, we use a free hosting service that supports many tools. The host supports MS SQL Server, which makes it convenient to manage the data; it also provides a website manager that lets us design a simple website quickly.
III. FLOWCHART
First, the Digi Connect WAN receives SMS data in a fixed syntax. The Digi then sends the data to the PC over a TCP socket. A GUI on the computer opens and closes the connection and waits for received data. The GUI also filters the SMS string, because the data the Digi sends is an arbitrary log string; the GUI checks the SMS string and writes it to Excel. We must validate the SMS data before saving it to Excel because, in practice, the data can be erroneous or duplicated. The GUI therefore reads the data back from Excel to check for duplicate or erroneous SMS. If an SMS is a duplicate, we discard the new string; if it is erroneous, we send feedback to the sender's phone.

One difficulty is the execution time of the GUI. If the execution time is too long, a newly received SMS interrupts the running program and can cause errors when saving to Excel. We must therefore keep the GUI small. The Digi sends its log, which contains many strings, but we are only interested in the SMS strings, so we must filter them out; MS Visual Studio 2010 provides a Regex tool that extracts the SMS string by pattern matching and outputs it as a string. Visual Studio 2010 can also read Excel data by row and column, and it supports many software tools that help connect to, collect from, and interface with Excel.

In the flowchart, START is the idle state. Initialization sets the row and column counters for Excel; the SMS messages are numbered by time of arrival, which lets us update Excel consistently. Establishing the connection is handled by a function in the GUI: it declares the computer as the client, creates a client object as a TCP socket with protocol type TCP, and declares the IP address and port of the Digi device. The connected socket then acts as the function that "catches" the data. To start receiving SMS data from the Digi, the GUI sends a command string requesting that the Digi show its SMS cell log: display smscell recvlog=tail. After a successful connection, the GUI waits for an SMS. When an SMS is received, the GUI converts the string to ASCII characters.
If no SMS has been received, the GUI keeps waiting. We then filter the SMS with the Regex tool, use logic functions to separate the SMS fields, and save the SMS to a ListView. Next, we check the data: if it is erroneous, we send feedback and wait for the next SMS; if not, we send the data to Excel, update the database on the host, and then wait for the next SMS.
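The filtering step above can be sketched with a regular expression. The exact format of the Digi's SMS log is device-specific, so the pattern and sample log lines below are illustrative assumptions, not the real log format; the project itself uses the Regex tool in MS Visual Studio 2010 rather than Python.

```python
import re

# Hypothetical log-entry format: "From: <phone> Msg: <text>".
SMS_PATTERN = re.compile(r"From:\s*(?P<phone>\+?\d+)\s+Msg:\s*(?P<body>.+)")

def filter_sms(log: str) -> list[tuple[str, str]]:
    """Return (phone, body) pairs for every SMS entry found in the log text,
    skipping unrelated log lines."""
    return [(m.group("phone"), m.group("body").strip())
            for m in SMS_PATTERN.finditer(log)]

log_text = """\
status: connected
From: +84901234567 Msg: REG CS101
heartbeat ok
From: +84987654321 Msg: REG EE202
"""
print(filter_sms(log_text))
# [('+84901234567', 'REG CS101'), ('+84987654321', 'REG EE202')]
```

Named groups keep the extracted phone number and message body separate, which matches the step of splitting the SMS into fields before the duplicate check.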
Figure 4. Programming Flow Chart
IV. CONCLUSION
In this paper, we use the Digi device with a TCP socket to receive SMS messages and collect data. The application here is registration via SMS, for example signing in to a class; in the future, this model could be applied to a SCADA system.

Advantages: first, the application lets us use many different tools on the computer to process the data. Second, even when many SMS messages are sent simultaneously, the connection only drops briefly.

Disadvantages: first, the processing rate of the program is not high, because the Visual Studio 2010 GUI is simplistic for network work and the program must run unnecessary commands; a console application would process commands faster. Second, we work with the host by having the GUI send strings containing SQL queries, and SQL is somewhat complex; the tools that Visual Studio 2010 provides can be used to speed up this programming.

Results: the application performs well for registration. It was used for SMS registration at Rise Your Arm 2013, where it received about 200 SMS messages sent by spectators over about three hours; during this time it worked well and produced no errors.
V. REFERENCES

1. Behrouz A. Forouzan, Data Communications and Networking, 2nd ed., McGraw-Hill, 2001.
2. Fred Halsall, Data Communications, Computer Networks and Open Systems, 3rd ed., Addison-Wesley, 1992.
3. Jeffrey E. F. Friedl, Mastering Regular Expressions.
4. Asynchronous Server Socket Example, MSDN Library, http://msdn.microsoft.com/en-us/library/fx6588te.aspx
5. The TCP/IP Model (source of Figure 2), Microsoft TechNet Library, http://technet.microsoft.com/en-us/library/cc786900(v=ws.10).aspx
6. Chapter 9. Database Objects (source of Figure 3), http://www.jcorporate.com/expresso/doc/edg/edg_dbobjects.html