Paper Template in Two-Column Format - Learn and Practice

Loading...
CITEE 2012

ISSN: 2088-6578

PROCEEDINGS OF INTERNATONAL CONFERENCE ON INFORMATION TECHNOLOGY AND ELECTRICAL ENGINEERING Yogyakarta, 12 July 2012

DEPARTMENT OF ELECTRICAL ENGINEERING AND INFORMATION TECHNOLOGY FACULTY OF ENGINEERING GADJAH MADA UNIVERSITY

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

ORGANIZER 2012 Technical Program Committee  Andreas Timm-Giell (Universität HamburgHarburg Germany)  Ryuichi Shimada (Tokyo Institute of Technology, Japan)  Ismail Khalil Ibrahim (Johannes Kepler University Linz, Austria)  Kang Hyun Jo (University of Ulsan, Korea)  David Lopez (King’s College London, United Kingdom)  Martin Klepal (Cork Institute of Technology, Ireland)  Tamotsu Nimomiya (Nagasaki University, Japan)  Ekachai Leelarasmee (Chulalongkorn University, Thailand)  Marteen Weyn (Artesis University College, Belgium)  Chong Shen (Hainan University, China)  Haruichi Kanaya (Kyushu University, Japan)  Ramesh K. Pokharel (Kyushu University, Japan)  Ruibing Dong (Kyushu University, Japan)  Kentaro Fukushima (CRIEPI, Japan)  Mahmoud A. Abdelghany (Minia University, Egypt)  Sunil Singh (G B Pant University of Agriculture & Technology, India)  Abhishek Tomar (G B Pant University of Agriculture & Technology, India)  Lukito Edi Nugroho (Universitas Gadjah Mada, Indonesia)  Umar Khayam (Institut Teknologi Bandung, Indonesia)  Anton Satria Prabuwono (Universiti Kebangsaan Malaysia, Malaysia)  Eko Supriyanto (Universiti Teknologi Malaysia, Malaysia)  Kamal Zuhairi Zamli (Universiti Sains Malaysia, Malaysia)  Sohiful Anuar bin Zainol Murod (Universiti Malaysia Perlis, Malaysia ) Advisory Board  F. Danang Wijaya  Risanuri Hidayat General Chair  Widyawan Chair      ii

Eka Firmansyah Indriana Hidayah Eny Sukani Rahayu Avrin Nur Widyastuti Bimo Sunarfri Hantono

Chair (cont.)  Sigit Basuki Wibowo  Budi Setiyanto  Ridi Ferdiana  Yusuf Susilo Wijoyo  Adhistya Erna Permanasari  Prapto Nugroho  Muhammad Nur Rizal  Selo Sulistyo  Sunu Wibirama  Lilik Suyanti  Indria Purnamasari Reviewer  Adhistya Erna Permanasari  Agus Bejo  Avrin Nur Widyastuti  Bambang Sugiyantoro  Bambang Sutopo  Bimo Sunarfri Hantono  Bondhan Winduratna  Budi Setiyanto  Danang Wijaya  Eka Firmansyah  Enas Duhri Kusuma  Eny Sukani Rahayu  Harry Prabowo  Indriana Hidayah  Insap Santosa  Isnaeni  Iswandi  Litasari  Lukito Edi Nugroho  Noor Akhmad Setiawan  Prapto Nugroho  Ridi Ferdiana  Risanuri Hidayat  Rudy Hartanto  Samiadji Herdjunanto  Sarjiya  Sasongko Pramono Hadi  Selo  Sigit Basuki Wibowo  Silmi Fauziati  Suharyanto  Sujoko Sumaryono  Sunu Wibirama  T. Haryono  Teguh Bharata Adji  Wahyu Dewanto  Wahyuni  Warsun Nadjib  Widyawan  Yusuf Susilo Wijoyo DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

FOREWORD Welcome to this year’s CITEE 2012 in Yogyakarta. Peace be upon you. First of all, praise to Allah, for blessing us with healthy and ability to come here, in the Conference on Information Technology and Electrical Engineering 2012 (CITEE 2012). If there is some noticeable wisdoms and knowledge must come from Him. This conference is the fourth annual conference organized by the Department of Electrical Engineering and Information Technology, Faculty of Engineering, Universitas Gadjah Mada. It is expected that CITEE 2012 can serve as a forum for sharing knowledge and advances in the field of Information Technology and Electrical Engineering, especially between academic and industry researchers. On behalf of the committee members, I would like to say thank you to all of the writers, who come here enthusiastically to share experiences and knowledge. I also would like to say thank you to the keynote speakers for the participation and contribution in this conference. According to our record, there are 150 papers from 15 countries are being submitted to this conference and after underwent reviewing process there are 78 papers that will be presented. It is a 52% acceptance rate. There are 15 papers in the field of Power Systems, 26 papers in the area of Signals System and Circuits, 11 papers in Communication System and 26 papers in Information Technology. Furthermore, the proceedings of this conference is expected to be used as reference for the academic and practitioner alike. Finally, I would like to say thank you to all of the committee members, who worked tirelessly to prepare this conference. Special thank to IEEE Computer Society Indonesian Chapter, Department of Electrical Engineering and Information Technology UGM and LPPM UGM for the support, facility and funds. Thank you and enjoy the conference, CITEE 2012, and the city, Yogyakarta

12 July 2012

Widyawan

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

iii

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Schedule of CITEE 2012 Yogyakarta, 12 July 2012 07.30 – 08.00 08.00 – 08.10

08.10 – 08.50 08.50 – 09.30 09.30 – 10.00

Registration Opening Speech 1. Chairman of the Organizing Committee 2. Head of Department of Electrical Engineering and Information Technology of Gadjah Mada University PLENARY SESSION (at Room 1): Keynote Speech The Development Trend of a Next Generation Vehicle and its Propulsion Motor Professor Jin Hur, Ph.D., University of Ulsan, Korea Moderator: Eka Firmansyah Applied VLSI Research in Indonesia Eko Fajar Nurprasetyo, Ph.D., Xirka Silicon Technology Moderator: Iswandi Morning Coffee Break PARALLEL SESSION Allocated duration per paper  GREEN lamp  YELLOW lamp  RED lamp

Ses. 2B

Ses. 2A

Ses. 1B

Ses. 1A

No

Time Moderator

1. 2. 3.

10.00 – 10.20 10.20 – 10.40 10.40 – 11.00 Moderator

4. 5. 6.

11.00 – 11.20 11.20 – 11.40 11.40 – 12.00 12.00 – 13.00 Moderator

7. 8. 9.

13.00 – 13.20 13.20 – 13.40 13.40 – 14.00 Moderator

10. 11. 12.

14.00 – 14.20 14.20 – 14.40 14.40 – 15.00

: : : :

Room 1 Adha I.C. (S-Mas #11) S-Bndg #11 S-Srby #12 S-Srby #11 Nanang S. (S-Srby #11) S-MAS #11 S-Bndg #12 S-Smrg #12

Room 2 Anindito Y. (I-Jkrt #11) I-Jkrt #12 I-Smrg #12 I-Bndg #11 Hariandi M. (I-Bndg #11) I-Jkrt #11 I-Smrg #14 I-Smrg #11

Room 3 A. Suhartomo (C-Jkrt #12) C-Srby #12 C-Jkrt #11 C-Srby #11 M. Agus Z. (C-Srby #11) C-Jkrt #12 C-TEIa #11 C-TEIb #11

Linggo S. (S-Yog #11) S-Smrg #11 S-Mlng #12 S-Mlng #11 Aryuanto S. (S-Mlng #11) S-Yog #11 S-Jkrt #11 S-IND #11

Catur S. (I-Smrg #11) I-Smrg #13 I-UGM #12 I-UGM #11 Eka K. (I-UGM #11) I-TEIb #11 I-TEIb #12 I-THA #11

M. Denny S. (S-Smrg #12) S-Srby #13 S-Pwt #11 S-UGM #11 Fahri F. (I-Smrg #13) I-JPN #11 I-IND #11 C-EGY #11

20 minutes (max.) 10 minutes (max.) presentation 10 minutes (max.) discussion END of allocated duration Room 4 A. Syakur (P-TEIa #13) P-Bntn #11 P-Pwt #11 P-TEIa #11 Alief R.M. (P-TEIa #11) P-TEIa #13 P-Pwt #12 P-Smrg #11 Lunch Break Supari (P-Smrg #11) C-TEIb #13 C-TEIb #12 P-TEIa #12 Arif Jaya (P-TEIa #12) P-IND #11 P-IRI #11 I-ALG #11

Room 5 Raymond B. (I-Jkrt #21) I-Bndg #21 I-UGM #21 I-TEIa #21 Amien R. (I-TEIa #21) I-Jkrt #21 I-TEIa #25 I-TEIa #24

Room 6 Room 7 Sarjiya Gunawan W. (P-TEIb #22) (S-Jkrt #21) P-Kpng #21 S-Riau #21 P-Mlng #21 S-Smrg #21 P-TEIb #21 S-TEIa #22 Bambang S. Indra A. (P-TEIb #21) (S-TEIa #22) P-TEIb #22 S-Jkrt #21 P-TEIb #23 S-Srby #21 P-TEIa #21 S-TEIa #21

Chairani (I-TEIa #23) I-TEIa #26 I-TEIa #27 I-TEIa #28 Zawiyah S. (I-TEIa #28) I-TEIa #23 I-TEIa #22 C-Bndg #11

Ridwan W. Hari M. (P-TEIa #21) (S-TEIa #21) C-TEIa #21 S-Yog #21 S-TEIa #23 S-UGM #22 S-TEIb #21 S-UGM #21

Paper codes (see Table of Contents for the details):  Number #1X: International (English) papers/presentations, Number #2X: National (Indonesia) papers/presentations  I, P, S, C : Information, Power, Signal/System/Circuit, Communication Certificate of presentation will be available at the room of presentation, immediately after the paper is presented Rooms Location:  Room 1-4 : located at the “Kantor Pusat Fakultas Teknik (KPFT)” building  Room 5-7 : located at the “Jurusan Teknik Elektro dan Teknologi Informasi (JTETI)” building, 50 m from KPFT

iv

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Table of Contents Inner Cover Organizer Foreword Schedule Table of Contents

i ii iii iv v

1.

I-ALG #11

Enabling Real-time Alert Correlation Using Complex Event Processing Hichem Debbi, Bilal Lounnas, and Abdelhak Bentaleb

1

2.

I-JPN #11

Detection and Verification of Potential Peat Fire Using Wireless Sensor Network and UAV Rony Teguh, Toshihisa Honma, Aswin Usop, Heosin Shin, and Hajime Igarashi

6

3.

I-IND #11

Data Mining Application to Reduce Dropout Rate of Engineering Students Munindra Kumar Singh, Brijesh Kumar Bharadwaj, and Saurabh Pal

11

4.

I-THA #11

Usability Standard and Mobile Phone Usage Investigation for the Elderly V. Chongsuphajaisiddhi, V. Vanijja, and O. Chotchuang

15

5.

I-Jkrt #11

Test on Interface Design and Usability of Indonesia Official Tourism Website Anindito Yoga Pratama, Dea Adlina, Nadia Rahmah Al Mukarrohmah, Puji Sularsih, and Dewi Agushinta R.

21

6.

I-Jkrt #12

Data Warehouse for Study Program Evaluation Reporting Based on Self Evaluation (EPSBED) using EPSBED Data Warehouse Model: Case Study Budi Luhur University Indra, Yudho Giri Sucahyo, and Windarto

25

7.

I-Bndg #11

Denial of Service Prediction with Fuzzy Logic and Intention Specification Hariandi Maulid

32

8.

I-Smrg #11

Integrating Feature-Based Document Summarization as Feature Reduction in Document Clustering Catur Supriyanto, Abu Salam, and Abdul Syukur

39

9.

I-Smrg #12

A GPGPU Approach to Accelerate Ant Swarm Optimization Rough Reducts (ASORR) Algorithm Erika Devi Udayanti, Yun-Huoy Choo, Azah Kamilah Muda, and Fajar Agung Nugroho

43

10.

I-Smrg #13

Loose-Coupled Push Synchronization Framework to Improve Data Availability in Mobile Database Fahri Firdausillah, and Norhaziah Md. Salleh

48

11.

I-Smrg #14

Feature Extraction on Offline Handwritten Signature using PCA and LDA for Verification System Fajrian Nur Adnan, Erwin Hidayat, Ika Novita Dewi, and Azah Kamilah Muda

53

12.

I-UGM #11

Cognitive Agent Based Modeling of a Question Answering System Eka Karyawati, and Azhari SN

58

13.

I-UGM #12

GamaCloud: The Development of Cluster and Grid Models based shared-memory and MPI Mardhani Riasetiawan

65

14.

I-TEIb #11

Sub-Trajectory Clustering for Refinement of a Robot Controller Indriana Hidayah

70

15.

I-TEIb #12

Transfer Rules Algorithm for Hierarchical Phrase-based English-Indonesian MT Using ADJ Technique Teguh Bharata Adji

74

16.

P-IRI #11

Investigation of Insulation Coordination in EHV Mixed Line A.Eyni, and A.Gholami

77

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

v

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

17.

P-IND #11

Unbalance and Harmonic Analysis in a 15-bus Network for Linear and Nonlinear Loads T.Sridevi, and K.Ramesh Reddy

82

18.

P-Bntn #11

The Maximizing of Electrical Energy Supply for Production of Steel Plant through Fault Current Limiter Implementation Haryanta

88

19.

P-Smrg #11

Modelling and Stability Analysis of Induction Motor Driven Ship Propulsion Supari, Titik Nurhayati, Andi Kurniawan Nugroho, I Made Yulistya Negara, and Mochamad Ashari

93

20.

P-Pwt #11

A New Five-Level Current-Source PWM Inverter for Grid Connected Photovoltaics Suroso, Hari Prasetijo, Daru Tri Nugroho, and Toshihiko Noguchi

98

21.

P-Pwt #12

Optimal Scheduling of Hybrid Renewable Generation System Using Mix Integer Linear Programming Winasis, Sarjiya, and T Haryono

104

22.

P-TEIa #11

Design and Implementation Human Machine Interface for Remote Monitoring of SCADA Connected Low-Cost Microhydro Power Plant Alief Rakhman Mukhtar, Suharyanto, and Eka Firmansyah

109

23.

P-TEIa #12

Equivalent Salt Deposit Density and Flashover Voltage of Epoxy Polysiloxane Polymeric Insulator Material with Rice Husk Ash Filler in Tropical Climate Area Arif Jaya, Tumiran, Hamzah Berahim, and Rochmadi

113

24.

P-TEIa #13

Tracking Index of Epoxy Resin Insulator Abdul Syakur, Tumiran, Hamzah Berahim, and Rochmadi

119

25.

S-IND #11

Semiconducting CNTFET Based Half Adder/Subtractor Using Reversible Gates V.Saravanan, and V.Kannan

124

26.

S-MAS #11

Preliminary Design for Teleoperation Systems under Nonholonomic Constraints Adha I. Cahyadi, Rubiyah Yusof, Bambang Sutopo, Marzuki Khalid, and Yoshio Yamamoto

129

27.

S-Jkrt #11

PACS Performance Analysis In Hospital Radiology Unit Sugeng Riyadi, and Indra Riyanto

134

28.

S-Srby #11

Middleware Framework of AUV using RT-CORBA Nanang Syahroni, and Jae Weon Choi

141

29.

S-Srby #12

Convolute Binary Weighted Based Frontal Face Feature Extraction for Robust Person’s Identification Bima Sena Bayu D., and Jun Miura

147

30.

S-Srby #13

Evaluation of Speech Recognition Rate on Cochlear Implant Nuryani, and Dhany Arifianto

153

31.

S-Mlng #11

Remote Laboratory Over the Internet for DC Motor Experiment Aryuanto Soetedjo, Yusuf Ismail Nakhoda, and Ibrahim Ashari

157

32.

S-Mlng #12

Comparative Analysis of Neural Fuzzy and PI Controller for Speed Control of Three Phase Induction Motor Ratna Ika Putri, Mila Fauziyah, and Agus Setiawan

162

33.

S-Bndg #11

Programmable Potentiostat Based ATMEL Microcontroller for Biosensors Application Erry Dwi Kurniawan, and Robeth V. Manurung

168

34.

S-Bndg #12

Dummy State: An Improvement Model for Hybrid System Sumadi, Kuspriyanto, and Iyas Munawar

172

vi

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

35.

S-Smrg #11

Fuzzy C-Means Algorithm for Adaptive Threshold on Alpha Matting R. Suko Basuki, Moch. Hariadi, and R. Anggi Pramunendar

177

36.

S-Smrg #12

Analysis of Performance Simulator Water Level Control at Fresh Water Tank in PLTU with Microcontroller M Denny Surindra

182

37.

S-Pwt #11

An FPGA Implementation of Automatic Censoring Algorithms for Radar Target Detection Imron Rosyadi

187

38.

S-Yog #11

On The Influence of Random Seeds Evaluation Range in Generating a Combination of Backpropagation Neural Networks Linggo Sumarno

195

39.

S-UGM #11

Analisis EEG Menggunakan Transformasi Fourier Waktu-singkat dan Wavelet Kontinu: Studi Kasus Pengaruh Bacaan al Quran Agfianto Eko Putra, and Putrisia Hendra Ningrum Adiaty

201

40.

C-EGY #11

Interference Mitigation for Self-Organized LTE-Femtocells Network Nancy Diaa El-Din, Karim G. Seddik, Ibrahim A. Ghaleb, and Essam A. Sourour

211

41.

C-Jkrt

Dual-Band Antenna Notched Characteristic with Co-Planar Waveguide Fed Rastanto Hadinegoro, Yuli Kurnia Ningsih, and Henry Chandra

216

#11 42.

C-Jkrt #12

Edge – Component Order Connectivity Issue In Designing MIMO Antennas Antonius Suhartomo

220

43.

C-Srby #11

Robust Image Transmission Using Co-operative LDPC Decoding Over AWGN Channels M. Agus Zainuddin, and Yoedy Moegiharto

226

44.

C-Srby #12

Parameter Measurement of Acoustic Propagation in the Shallow Water Environment Tri Budi Santoso, Endang Widjiati, Wirawan, and Gamantyo Hendrantoro

231

45.

C-Bndg #11

Radio Network Planning for DVB-T Repeater System Integrated with Early Warning System Herry Imanta Sitepu, Dina Angela, Tunggul Arief Nugroho, and Sinung Suakanto

235

46.

C-TEIa #11

Considering Power Consumption of Wireless Stations Based on 802.11 Mode Control Ramadhan Praditya Putra, Sujoko Sumaryono, and Sri Suning Kusumawardani

240

47.

C-TEIb #11

The Design Model of 2.4 GHz Dual Biquad Microstrip Antenna Wahyu Dewanto, Penalar Arif, and HC. Yohannes

245

48.

C-TEIb #12

Voice Service Performance of Broadband Cellular Systems Wahyu Dewanto, and Imam Bagus Santoso

252

49.

C-TEIb #13

Capacity and Quality of Single Cell WiMAX Network for VoIP Application Budi Setiyanto, Esmining Mitarum, and Sigit Basuki Wibowo

257

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

vii

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Enabling Real-time Alert Correlation Using Complex Event Processing Hichem Debbi, Bilal Lounnas Department of Computer Science University of M'sila, Algeria [email protected], [email protected]

Abstract² To cope with the large amount of security alerts generated from different Intrusion Detection Systems (IDSs) which include usually many false alerts, different alert correlation techniques have been proposed. The real-time correlation techniques could mitigate the overload of alerts and give accurate analysis, thus detecting the attacks and giving the ability to respond in early stage. In this paper we propose a realtime alert correlation technique based on Complex Event Processing (CEP). We use the CEP engine ESPER as an alert correlation engine. Keywords² Intrusion Detection System, Alert Correlation, Similarity, Complex Event Processing, ESPER.

I.

INTRODUCTION

With the increasing threat of intrusion attacks through the internet, Security devices such as Intrusion Detection Systems (IDSs) could be very useful for detecting the on-going attacks. IDSs are software applications that monitor the network traffic and the host activities in order to detect abnormal behavior and notify the administrators. From suspicious events, the IDS generate reports containing information concerning the observed events in form of alerts. Despite the advantages of IDS, they still suffer from many drawbacks. Due to the enormous number of false alerts generated by the IDS, the DGPLQLVWUDWRUV FRXOGQ¶W GLIIHUHQWLDWH WKH UHDO DOHUWV IURP WKH false ones; therefore they could find a difficulty in defining and preventing the on-going attacks. To get over these limitations, alert correlation techniques were proposed. The role of alert correlation methods is to generate high level alerts from the low level alerts generated by the IDS in meaningful way, thus helping the administrators for highlighting the real attacks. Similarity based approach is One of the most known methods, this approach is based on computing the similarity between alert attributes, and then grouping the alerts through computing attribute similarity values. For example, network based IDSs report the suspicious HYHQW¶V VRXUFH ,3 DGGUHVV VRXUFH SRUW QXPEHU GHVWLQDWLRQ ,3 address, destination port number, and timestamp information. Based on these attribute values, similarity based approaches first compute how similar two or more alerts are, and then group alerts together based on these computed similarity values [1]. While most of alert correlation methods proposed perform in offline mode, few works that mentioned real-time correlation of security alerts. With the increase of network traffic and the

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Abdelhak Bentaleb Department of Computer Science University of BBA, Algeria [email protected]

heterogonous security events generated from multiple detection devices, real-time Alert correlation systems that perform in online could have a great advantage over the systems that perform in offline mode. Performing real-time analysis of security events as they occur could mitigate the redundancy and the overload of alerts providing more accuracy, thus helping the security administrators to predict and prevent the next step attacks [2]. Event Driven Architecture (EDA) is a software architecture refers to generation, reaction, detection and consumption of events that represent notable changes in the VWDWH RI HQWHUSULVH¶V DFWLYLWLHV &RPSOH[ (YHQW 3URFHVVLQJ (CEP) is an EDA style consists of processing different events within the distributed enterprise system attempting to discover interesting information in timely manner. For enterprises, it is very necessary to react immediately for this information, because this information might represent an opportunity or threat. In this paper we introduce a real-time alert correlation technique based on CEP. We define filtering patterns for the primitive alerts in the first phase, and then we use an eventbased similarity approach introduced by [3] to cope with the alerts generated from the first phase. We use the expressive language of the CEP engine ESPER to analyze the security alerts. The rest of this paper is organized as follows. In section 2 we discuss the related works. Section 3 provides an overview on CEP technology. Section 4 presents the alert correlation technique and its implementation and Section 5 concludes the paper. II.

RELATED WORKS

Many works have investigated the analysis of intrusion alerts to detect and understand the attacks behaviors and thus stopping them. While most of these works have addressed the offline analysis, recently few works have addressed the online analysis of security alerts. Lee et al. [4] have proposed a multi-component system having a real-time processing capability, this system basically depends on probabilistic similarity approach. The system is based on the following components: Filter, it collects the alerts from the different sensors and eliminates the redundancies among them. The control center, it receives the filtered alerts and saves them in database to be analyzed later. The aggregator, it applies the similarity functions on the events

1

ISSN: 2088-6578

Yogyakarta, 12 July 2012

collected and generates meta-alerts for further analysis. Correlator, it analyses the timing and causal relation between the aggregated events to generate attack scenarios. Situator, is a new component that could detect such situations in the early stage of the attack and reduces the response time. The framework proposed in [5] is not far from the previous one, in way that is based on modules that collect, analyze and discover attack strategies from the evolving alerts. The first module is Alert collection module, it receives the primitive alerts and puts them into stream queue where they can be analyzed later by online alert clustering module, this in contrast to [4] where the alerts are stored in database. The second module is online alert clustering module, it creates hyper-level alerts from the primitive ones by applying clustering algorithm, thus reducing the number of alerts. The third module is Alert correlation module, it uses pre-specified attack scenario based on time windows to mine attack sequence patterns that may occur within these windows. According to A. Farroukh et al. [6], the accuracy and speed of current Intrusion Detection Systems can be significantly improved by using event processing techniques that can parse and analyze traffic beyond the network layer and detect more sophisticated attacks which require correlation between multiple protocol data units. Starting from this hypothesis, they used the event processing technique in the core of the IDS and proposed novel algorithms to efficiently match vulnerability signatures. L. Aniello et al. [7] have proposed architecture for detecting inter-domain port scanning activities based on CEP. The process of detection is carried out in a cooperative fashion by correlating network traffic data coming from geographically distributed enterprise nodes. After the traffic data received by the CEP engine, continuous queries are effected on it to discover the spatial and/or temporal relationships among apparently uncorrelated data that would have been undetected by in-house IDSs. These queries are executed within the engine ESPER [8]. ESPER is the most known and used open source CEP engine. ESPER is a scalable engine has the ability to analyze thousands of events per second across all enterprise layers. The engine uses the Event Processing Language (EPL) for dealing with the high frequency event data. ESPER provides two principal mechanisms to process events: event stream queries and event patterns. The first mechanism addresses the event stream analysis requirements using windows, joining and analysis functions. The second mechanism addresses the expected sequences of presence or absence of events or combinations of events such as event A and B occur in either order followed by event C or D. If we consider that the process of alert correlation consists of three steps: alert preprocessing, alert correlation and discovering attack strategies, and If we consider that alert correlation as a CEP application, we will say the following: the first and the second steps could be carried out by ESPER using the first mechanism (event stream analysis), and the third step could be carried out by ESPER using the second mechanism (event patterns).

2

CITEE 2012

In this paper we propose an alert correlation technique based on CEP. Our technique addresses just the first and the VHFRQG VWHSV IURP WKH DOHUW FRUUHODWLRQ SURFHVV LW GRHVQ¶W address the detection of attacks. Using the first mechanism of ESPER, we define continuous queries against the alerts coming from different sensors, then the filtered alerts will be correlated GLUHFWO\ ZLWKLQ WKH (63(5 HQJLQH LQ RWKHU PHDQ ZH GRQ¶W have here to use an intermediate database, this is in contrast to [4]. In the correlation step we use a similarity-based approach introduced by [3]. III.

COMPLEX EVENT PROCESSING

A. Architecture Components The CEP consists of two main components, the Adapters and the event processing engine, (See Fig. 1). The event processing engine is the core of CEP architecture; it can processes and analyzes thousands of upcoming events from different external sources with low latency. By correlating these events across multiple streams, the engine generates complex events ready to be delivered to different destinations (applications, data Warehouses, GDVKERDUGV« 7KH HQJLQH LV EDVHG RQ HYHQW SURFHVVLQJ language that manipulates event data in real-time. The adapter is a software layer that interacts with stream of events enabling the CEP engine to take place in complex environments. Regarding to event sources and destinations, we distinguish two types of adapters, input and output adapters respectively .their role is converting the events into compatible formats to be processed by the engine, and delivering them to destinations in compatible formats to be consumed.

Fig.1. CEP Architecture

B. Main Features The main features of CEP technology are: 1) Filtering: In complex and dynamic systems, we can capture thousands of events and we have to filter them to get only the interesting events in critical time. This feature is very useful in detection and alerting systems such as fraud detection and intrusion detection. 2) Aggregation and abstraction hierachy: We say that an event is complex or high level event, if it is generated by aggregating set of events. These events are called the members and represent basic activities in the system. For example, by using the timestamp of each event we can create a complex event that match specific pattern such as Event A and B occur in either order followed by event C or D. We say that an event has an abstraction hierarchy if it consists of

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

sequence levels of activities and aggregations patterns in each level, where each activity is signified by a specific event. 3) Causality: According to D. Luckham, the father of CEP [9], the Causality represents a dependence relationship EHWZHHQ DFWLYLWLHV LQ WKH V\VWHP ³,I WKH DFWLYLW\ VLJQLILHG E\ the event A had to happen in order for the activity VLJQLILHGE\HYHQW%WRKDSSHQWKHQ$FDXVHG%´:LWK CEP technology, we can consider an event as a piece of data having two essential parameters, which are the timestamp and the causal vector. The timestamp indicates when the event has happened, whereas the causal vector contains the identifiers of set of events that have caused this event. Placing the causal vector in the event facilitates the tracking of causality in complex systems. IV.

ISSN: 2088-6578

EPStatement Filter_Statement1 = cepAdm.createEPL("insert into InputAlertStreamFyltred select * from InputAlertStream (signature != 'IIS Exploite')");

The second statement makes sure that we will not have redundant alerts, by comparing the attributes of the last alert with the attributes of the previous alerts. The filtered alerts will EHLQVHUWHGLQWR³,QSXW$OHUW6WUHDP)\OWUHG´IRUIXrther analysis. EPStatement Filter_Statement2 = cepAdm.createEPL("insert into AlertStreamFyltred select * from InputAlertStream.win:time(20 minutes) as ALS1, LastAlsertStream.win:time(20 minutes) as ALS2 where ((ALS1.signature!=ALS2.signature) or ALS1.SIP!=ALS2.SIP) or (ALS1.DIP!=ALS2.DIP)or(ALS1.SPort!=ALS2.SPort)or (

The last alert is determined from the input stream as follow:

ARCHITECTURE AND IMPLIMENTATION

As we see in Fig. 2, the architecture is based on CEP engine ESPER, Both the filtering task and the correlation task are carried out within ESPER. Against the alerts coming from different intrusion detection sensors, filtering queries are executed to eliminate the irrelevant alerts and redundancies among them, and then the remaining alerts are passed to the correlation phase. In this phase an event based similarity is performed on these alerts using analysis functions, aggregation and windows, thus producing high-level alerts that could be understood and analyzed by the user.

IDS1

B. Correlation To compute the similarity between two objects, common characteristics are needed; if the computed similarity based on these characteristics exceeds a predefined threshold we can then decide that the two objects are similar. Because the alerts in the end are just events of interest, we use an event based similarity to compute the similarity between the alerts. All the security alerts regardless of their format have in general the following attributes: generating time (TimeStamp), signature (Signature), source IP address (SIP), source port (SPort), destination IP address (DIP) and destination port (DPort). These attributes have either numerical type (IP, Port, Time) or character type (Signature), therefore this similarity method consists of similarity of character feature subset and similarity of numerical feature subset.

Engine

Real-time Analysis

Filtering + Correlation

IDS2 Fig.2. Real-time alert correlation architecture

A. Filtering Applying the filtering mechanism against continuous alerts is very important before forwarding to alert correlation. Whereas we might have many irrelevant alerts, we might also have redundant ones, therefore to cope with these challenges we define the following EPL statements. The first statement makes sure that we will not have irrelevant alerWVIRUH[DPSOHLIZHGRQ¶WKDYHDQ,,6VHUYHUDW all, such alert having DQ³,,6([SORLW´DVVLJQDWXUHLVLUUHOHYDQW for us.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

EPStatement Last_Statement = cepAdm.createEPL("insert into LastAlsertStream select * from ’—–Ž‡”––”‡ƒŜ•–†śŽƒ•–‡˜‡–ſƀŪ);

ESPER has an interesting feature which is the user-defined function (UDF). The user must define this function as a public static method to be resolved by the engine and invoked within EPL statement. We implement UDFs for computing the similarity between the events, and then we invoke these functions within the EPL statements. We will present the formal definition of the similarity functions and their implementation, and we will present the EPL statements and how they could be defined and executed. The character similarity of two events is computed as follow: ௣

ܵ݅݉௖௛௔ ൫݁௜ ǡ ݁௝ ൯ ൌ  ෍

߮൫݂௜௞ ǡ ݂௝௞ ൯  ‫݌‬ ௞ୀଵ

Ͳǡ ൫݂௜௞ ് ݂௝௞ ൯ Where ߮൫݂௜௞ ǡ ݂௝௞ ൯ ൌ  ቊ ǡ ͳǡ ൫݂௜௞ ൌ ݂௝௞ ൯ p is the number of the character attribute included in the security. Event ˆ୧୩ and ˆ୨୩  are respectively the character attribute values of event ‡୧ and ‡୨ .

3

ISSN: 2088-6578

Yogyakarta, 12 July 2012

The implementation of this function to signature attribute is as follow: public static double

GetCharSimilarity(String sgntr1, String sgntr2)

{ int i=0; double Similar_Value = 0; int max = Math.max(sgntr1.length(),sgntr2.length()); int min = Math.min(sgntr1.length(),sgntr2.length()); while (i
The numerical similarity of two events includes the following attributes: TimeStamp, source IP (SIP), destination IP (DIP), source port (SPort), and destination port (DPort). The similarity is computed as follow:

ܵ݅݉௡௨௠ ൫݁௜ ǡ ݁௝ ൯ ൌ 

σ௡௙ୀଵ ߱௙ ܵ݅݉௙ ሺ݁௜ ǡ ݁௝ ሻ σ௡௙ୀଵ ߱௙

Where ߱௙ is the corresponding weight of ݂ attribute, ܵ݅݉௙ ൫݁௜ ǡ ݁௝ ൯ is the corresponding similarity of ݂ attribute between security events ݁௜ and ݁௝ . The implementation of this function is as follow : public static double GetNumSimilarity(double ipsrc_simil, double ipdst_simil, double psrc_simil, double pdst_simil, double time_simil) { Num_Similarity = (ipsrc_simil * w1 + ipdst_simil * w2 + psrc_simil * w3 + pdst_simil * w4 + time_simil * w5) / (w1 + w2 + w3 + w4 + w5); return Num_Similarity; }

Where (ipsrc_simil, ipdst_simil, psrc_simil, ppdst_simil, time_simil) represent the attributes similarities, and the (w1, w2, w3, w4, w5) represent their weighs. The final similarity between two alerts is computed as follow ܵ݅݉൫݁௜ ǡ ݁௝ ൯ ൌ ߤܵ݅݉௖௛௔ ൫݁௜ ǡ ݁௝ ൯ ൅ ሺͳ െ ߤሻܵ݅݉௡௨௠ ሺ݁௜ ǡ ݁௝ ሻ Where ù represents the weight factor that adjusts the character and numerical attributes.

4

CITEE 2012

The implementation of this function is as follow: public static double GetAlertSimilarity ( double char_similarity, double num_similarity) { Similarity = (ù* char_similarity) + ((1 - ù) * num_similarity); return Similarity; }

After defining the similarities functions, we define the CEP statement that correlates alerts through invoking the similarities functions. This statement performs on the filtered alerts as follow: EPStatement Similarity_Statement = cepAdm.createEPL("insert into AlertStreamCorrelated select ALS1.ID, ALS2.ID, GetAlertSimilarity(GetCharSimilarity (ALS1.signature,ALS2.signature), GetNumSimilarity(GetIPSimilarity(ALS1.SIP,ALS1.SIP),GetIP Similarity(ALS1.DIP,ALS1.DIP), GetPortSimilarity(ALS1.SPort,ALS2.SPort),GetPortSimilarit y(ALS1.DPort,ALS2.DPort), GetTimeSimilarity(ALS1.timeStamp.toDate(),ALS2.timeStamp. toDate())))as Alert_Similarity from AlertStreamFyltred.win:time(20 minutes) as ALS1, LastAlsertStream.win:time(20 minutes) as ALS2 where ALS1.ID != ALS2.ID" );

After computing the similarity between alerts, just the alerts that exceed the threshold will be correlated. This task is done using the following statement: EPStatement OutputStream_Statement = cepAdm.createEPL("select * from AlertStreamCorrelated (Alert_Similarity > Similarity_var_Threshold)");

In these queries we use 20 minutes as time threshold because it is widely adopted. We should note that this threshold as well as the Similarity threshold and the weights of attributes similarities should be defined by the user, in other words applying these values could be very effective in a network and not in another. V.

CONCLUSION AND FUTER WORKS

Analyzing security alerts and Understanding attacks instantly become more and more challenging task, therefore real-time correlation techniques could be very useful to cope with this kind of challenges. In this paper we propose a realtime alert correlation technique implemented within the CEP engine ESPER and uses an attribute similarity. The real-time performance is guaranteed since we use an engine having the ability to analyze thousands of events per second using an expressive language EPL which is defined to perform on timely conditions. In this paper we showed how the real-time alert correlation can be performed in easy and scalable way using CEP. As a future work, we aim to define a new event-based similarity to perform specially on alerts attributes in real-time. Using the second mechanism of ESPER (event patterns), we aim also to

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

define attacks patterns based on conjunction between the realtime alerts and historical data to detect the on-going attacks in real-time.

[4]

[5]

REFERENCES [6] [1]

[2]

[3]

S.O. AL-MAMORY and H.L. ZHANG, ³A Survey on IDS Alerts Processing Techniques,´ in 6th WSEAS International Conference on Information Security and Privacy, 2007. Z. Li, A. Zhang , J. Lei and L. Wang ³Real-Time Correlation of Network Security Alerts´ in IEEE International Conference on eBusiness Engineering, pp. 73 - 80, 2007. G. Zhaojun and Y. LI³Research of Security Event Correlation based on Attribute Similarity´ International Journal of Digital Content Technology and its Applications, vol. 5, no. 6, pp. 222-228, 2011.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

[7]

[8] [9]

ISSN: 2088-6578

S. Lee, B. Chung and H. Kim³Real-time analysis of intrusion detection alerts via correlation´ Computers & Security, vol. 25, no. 3, pp. 169183, 2006.. J. Ma, Z. Li and W. Li ³Real-Time Alert Stream Clustering and correlation´ Fifth International Conference on Fuzzy Systems and Knowledge Discovery, pp. 379 - 384, 2008. A. Farroukh, M. Sadoghi, and H. Jacobsen, ³7RZDUGV 9XOQHUDELOLW\Based Intrusion Detection with Event Processing,´ in proceedings of the 5th ACM international conference on Distributed event-based system, pp. 171-182, 2011. L. Aniello, G. Lodi and Roberto Baldoni, ³,QWHU-Domain Stealthy Port Scan Detection through Complex Event Processing,´ in Procedings the 13th European Workshop on Dependable Computing, 2011. http://esper.codehaus.org/ D. Luckham , The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems. Addison-Wesley, 2002.

5

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Detection and Verification of Potential Peat Fire Using Wireless Sensor Network and UAV Rony Teguh1, Toshihisa Honma1, Aswin Usop2, Heosin Shin3 and Hajime Igarashi1 1

Graduate School of Information Science and Technology, Hokkaido University, Japan

Email: {ronyteguh,honma}@ist.hokudai.ac.jp, [email protected] 2

University of Palangka Raya, Indonesia

[email protected] School of Computer Science and Engineering, Seoul National University, Seoul Korea Email: [email protected]

3

Abstract— this paper proposes an effective technique to quickly detect and monitor peat forest based on wireless sensor network (WSN) and unmanned aerial vehicle (UAV) with focusing on their implementation and deployment in Central Kalimantan, Indonesia. Problems found in monitoring fires in peat lands are that satellites will not detect weak or small fires properly. Moreover, big fire will not easily found in a haze or low visibility. Our WSN contains of miniature sensor nodes to collect environmental data such as temperature, relative humidity, light and barometric pressure, and to transmit more accurate information to fire patrol and remote monitor. We have verified WSN data collected from the ground sensing against the video surveillance data obtained from a UAV it is used for ground verification of satellite data in large peat forest areas. Keywords: wireless sensor networks; peat-forest; unmanned aerial vehicle; wildfire monitoring; potential fire.

I.

INTRODUCTION

A peat forest is one of the most important renewable natural resources that play significant roles in the human life and environment. Typical peat-forest fires are natural phenomena. In recent years, lingering dry weather, rapidly expanding exploitation of tropical forests, and the demand for conversion of forest to farmland have become serious with the increase of peat-forest fire size. Peat-forest fires are also serious disasters in terms of loss of both property and life. In Central Kalimantan, peatforest fires are mostly anthropogenic. Fires are used by local and immigrant farmers as part of small farmland activities such as land clearance. During droughts, some fires have spread out of control and become wildfires in peatland areas [1]. Fires in peatland not only burn the surface vegetation, but also the peat deposits up to 100 cm below the surface. However, peat fire occurs only in extreme drought conditions or after the ground water level has been lowered artificially. Peat fires produce large amount of smoke and deteriorate air quality; the dense haze also causes various health problems. We can see peat fires have negative impacts on economy, human health, environment, and climate [2]. One way to monitor peat-forest fires in Central Kalimantan, Indonesia is to use a wireless sensor network (WSN), which contains miniature sensor nodes to collect environmental data such as temperature, relative humidity, light and barometric pressure, and to transmit more

6

accurate information to fire patrol and remote monitor. Recently, considerable advances have been made in hardware and software technologies for building wireless sensor networks. The applications of wireless sensor network to field monitor have several research themes including low energy consumption, transmission range and communication, radio wave propagation, optimal routing, database management, and data compression security [3]. Identification of potential peat-forest fires and fire zones has been done by remote sensing [4] [5]. The accuracy and reliability of satellite-based systems are largely influenced by weather conditions. Traditionally, the fire monitoring task was performed by human observations but the reliability of this method is in doubt. Satellite imaging can be used to detect large areas, where the minimum detectable fire size is 0.1 hectare, and the fire location accuracy is 1 km. For fire detection complete images of land are collected every 1 to 2 days; so, the systems cannot provide timely detection. In this paper, in order to get real-time monitoring data of peatforest fires, we consider the integration system of the WSN data used for the ground sensing with video surveillance data obtained from an unmanned aerial vehicle (UAV), which is used for ground verification of satellite data in large peat forest areas. In data processing of WSN in collaboration with UAV work, the most important issue is to allow quick responses in order to minimize the scale of the peat-forest fires. Our experiments, use of WSN and UAV in peat-lands as detection and verification has never been done everywhere. Integration of the system for detect peat fires using WSN and UAV, it is useful for help fire patrol to fire findings and measurement of fire size, counting of fires. II.

CHARACTERISTICS OF PEAT FIRE IN TROPICAL PEATLAND

Peat fire occurs when three necessary conditions are satisfied at the same time: 1). flammable material, 2). oxygen, 3). high temperature. The source of wildfire in Indonesia is farmland activities such as land clearance and to produce ash for fertilizer. Fires in peatland not only burn the surface, but also the underground. Weather conditions have significant influences on fire behavior. The dry season in Central Kalimantan is normally two

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

months per year, July and August, while abnormally dry season lasts for 4-5 months. Hydrological system such as groundwater level and moisture of surface peat are important keys for peat fire control in tropical peat land. Spread rate of tropical peat fire on surface peat has an average of 42-155 cm/day, and in subsurface peat 12-60 cm/day. Flaming temperature is around 300-600ºC with duration more than 20 minute. As mentioned above, the peat fire starts with overheating as a result of weather conditions. Tropical peat is usually formed from woods, whereas boreal peat is composed of sphagnum and grasses. Due to their higher calorific values, tropical peat materials are more flammable than other fuels, especially when they are dry. III.

DESIGN OF WIRELESS SENSOR NETWORK FOR PEAT-FOREST MONITORING

A. System Architecture. Fig. 1 shows the concept design of WSN and UAV for ground monitoring system in Central Kalimantan, Indonesia. System architecture has 3 layers. Sensor network layer provides ground-sensing environment. Unmanned aerial vehicle (UAV) layer provides monitor of low altitude video surveillance, and satellite layer provides monitor the earth surface in different spectral bands of the visible, infrared and radar frequencies. For monitoring of peat-forest fires WSN consists of Crossbow IRIS motes and MTS400 sensor board [6]. It contains the following components: 1. Temperature, relative humidity, barometric pressure, and light sampling nodes. 2. Routing nodes for transmission of data from nodes to base station. 3. Base station connected to web server (Stargate net-bridge gateway). 4. Web server connected to a PostgreSQL database, which is queried by a browse-based client. 5. Client who receives data, temperature information, and fire alarm with Internet mobile technology. 6. Time synchronization scheduling is important for routing schema and power management in sparsely deployed network Satellit Layer

B. Sensor node. The MEMSIC ex-Crossbow IRIS mote is operated using TinyOS [7], which is specifically developed for programming small devices with embedded microcontroller. The main functions of the sensor nodes are communication, data processing, and sensor. In addition, TinyOS is programmed largely in the NesC, which supports a component-based, and eventdriven programming to build applications in the TinyOS platform. Collecting real-time data from WSN is important to understanding of the state of peat and forest environment and allows predictive analysis of fire extension. The sensor nodes include communication and sensor board module. Communication module of IRIS mote uses the IEEE 802.15.4 protocol, which has a 250kbps datarate. Transmission range is 500m-outdoors light of sight and 100m indoors when using a 1/4-wave dipole antenna and RF power 3 dBm. The parameters of hardware of this sensor node are summarized in Table I. TABLE I.

Sensor network Layer

Peat-forest

Figure 1. The system architecture monitoring peat forest fires.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

THE PARAMETERS AND HARDWARE INFORMATION ABOUT IRIS NODE.

Component

Processor Program flash memory Configuration EEPROM (data) Frequency Radio transceiver RF Power Receiver sensitivity Outdoor range Indoor Battery

Description

Atmel ATMega 128L 128K byte 4K byte 2400 Mhz-2480Mhz CC2430 3dBm -101 dBm 500m 100m 2 AA batteries

C. Gateway node. The gateway platform consists of Stargate NetBridge, with Linux operating system. The main function of the gateway is to serve as database server and web server. The MIB520 provides USB connectivity to the IRIS mote, which is the communication module. Web and database server store and data visualize data that is queried by a browser-based client, and can be connected to Internet used mobile technology. Fire alarm may receive by smart phone. The gateway manager node provides types of information for users to generate emergence report for abnormal event when extremely high temperature is detected. IV.

UAV Layer

ISSN: 2088-6578

DEPLOYMENT OF WSN AND UAV IN PEAT FOREST MONITORING

According to the conceptual architecture shown in Fig 1, optimal deployment strategy for sensor nodes must consider cost, number and location, as well as sensing radius and sensing accuracy of environment parameters. An important aspect of network reliability is the transmission range. In case of long range transmission in forest obstructions caused by tree and vegetation may reduce the transmission range [8]. Distance estimation is the key factor of wave signal propagation for optimal placement of sensor. Appropriate transmission power is essential for all nodes to have appropriate connectivity.

7

ISSN: 2088-6578

Yogyakarta, 12 July 2012

A. Deployment of sensor network layer Transmission range is important aspect on the placement of sensor network. Increase of coverage requirement enhances the accuracy of the sensed data. The technique of sparsely sensor deployment may result in long-range transmission and high-energy usage while densely sensor deployment may lead to short-range transmission and less energy consumption. The topology network must be designed so that energy can be saved while providing optimal multi-hop routing for efficient and reliable data delivery and link quality. Sensing coverage and network connectivity is two of the most fundamental problems in sparsely deployment; efficient node deployment strategies in wide area would minimize cost, and reduce computation and communication [9]. Fig. 2 shows sensor node network installation by manual deployment of sensor node positions at Taruna Jaya sites. The topology of sensor network generated border quality coverage wildfire monitoring. We consider are two factors (1) the quality of environment sensing, and (2) the amount of energy consumption. For the quality of sensing, we focus on the coverage of sensing of environment parameter. The resource we have to consider is energy. The multi-hop communication is exploited to relay sensed data from sensor nodes to base station. Hence nodes to base station have a priorities data packet communication load and thus consume more energy. Generally, wireless channel has the reputation of being unpredictable. The quality of wireless signal highly depends on the application, environments characteristics, and frequency spectrum used. Typical propagation environment consist of tree and vegetation, which act as obstacles in the radio communication and cause scattering and absorption [10]. The radio propagation in wireless communications experiences signal-strength loss due to distance, frequency, antenna gain, and power. The key factors that affect the signal in WSN are height from ground, signal distance, ground reflection, vegetation obstruction and diffraction, and antenna radiation pattern. Free space loss is widely used for an ideal propagation condition with no obstracles nearby to cause reflection or diffraction. It is suitable for predicting the signal strength at the receiving node when there is a clear line of sight (LOS) path between the transmitting and receiving nodes. The received signal power decreases with increasing distance between the transceivers. However, for obstructed paths, it is not adequate just to use free space loss model to predict the signal strength when the radio is near the ground. The antenna height and vegetation density are important parameters that affect communication networks coverage and connectivity. To maximize the connectivity, the antenna height must clear the Fresnel zone. Since the WSNs for wildfire monitoring are deployed in area that is difficult to access, the power source has to support the long-term operation of a sensor node. The sensor node operates on limited battery power. When it dies and disconnects from the network the performance of the application is significantly affected. The sensors can take turns to sleep and work creating a balance in the

8

CITEE 2012

energy consumption in order to maximize the WSN lifetime. Normal mode sensor node in each data active state works for 3s followed by is 7s-sleep, and thus the duty cycle is 0.3s.

Figure 2. Sensor network deployed at peat forest.

The main purpose of this WSN is to investigate the system ability to detect the fire and the robustness of the hardware in wildfire condition. The collected data are forwarded from sensor node to base station through prioritized packet routing. The base station processes the data and stores them in database. The client query of information and provide the user with captured data. If fire is detected by one of the nodes, the adjacent nodes start sensing the environmental change the increased duty cycle. B. Deployment of UAV layer For early wildfire detection there are vertical and horizontal technique. The horizontal technique using surveillance of tower with human vision and video based monitoring [10], and vertical technique such as remote sensing technique. For remote sensing we can use satellite, unmanned aerial vehicles (UAV) or aircraft. These methods are based on the pictures taked, which enable us to monitor any potential fire. UAVs have been already deployed after several disasters [11], installated camera was used to assess the situations. UAV flying up to 100 meters high takes pictures for ground information. In the case of wildfire verification and fire detection, it is important to have an accurate and up-to-date overview of situation. Hence, the observation areas covered by UAV provide for identification, control and verification of environmental conditions.

Figure 3. UAVs mission of observation area.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

Figure 3 shows a UAV route to visit all picture takingpoints. This enable the planning of observation area, execution of mission and analysis of the video surveillance by UAV. During the UAV flight, we used infrared video and color camera to extract information of the covered area. The resulting image data from the UAV will be compared with the data from WSN to more accurately identify the fire hazard area. V.

ISSN: 2088-6578

The sampling rate could be tied to current environment state: high temperature with low humidity.

EXPERIMENT AND RESULT

For experiment and implementation, WSN and UAV have been deployed to detect and monitor the peat forest in Central Kalimantan, Indonesia. Figure 4 shows that the WSN includes 6 sensor nodes, gateway terminal and MoteView applications for monitoring. The distance between each sensor node is 100 meters, and the height of sensor nodes from ground is 1.5 meters. Through this system we were able to detect a small fire of about three meters in size. In our work, sensor nodes deployed in peat forest environment, where the height of vegetation in the Taruna Jaya area ranges from 1 to 3 meters from the ground, which will affect WSN signal strength and radio propagation. From the experiment, a small artificial fire was made for test the fire temperature sensor. The sensor will detect the change in temperature event of a fires, wind direction will determine the heat transfer from the flame. Visual data sensor nodes will display on the base-station, when there are change of temperatures.

Figure 5. Temperature data detection of fire.

Figure 6. Humidity data detection of fire.

In addition, Figure. 7 shows the transmission of the fire temperature measured at sensor nodes to the base station, in which locations of fire area are checked with the UAV video surveillance.

Figure 4. Flow of information fire WSN data.

From the experiment, we obtained fire temperature data in Taruna Jaya area as shown in Fig 5 and Fig 6, which can be described as follows: 1. Maximum absolute temperature : 64 ºC; 2. Minimum absolute temperature : 30 ºC; 3. Average peak temperature : 46 ºC; We assumed the above temperature for fire detection in Central Kalimantan. The minimum temperature to be regarded as fire is 45ºC, so that the temperature higher than 45ºC is considered potential fire. The sensor node operating high sensing mode when sensing and communication operating are detecting of potential fires.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Figure 7. Comparison of data WSN with UAV photo.

The location of the fire is very importance to our fire patrols. To solve the problem, the sensor node should process to gain the knowledge of its physical location in

9

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

space. UAV video surveillance is used for the verification of alert massage from the sensor nodes as quick response of the fire detected by the sensor. Because combustion can occur at night, the light sensor and temperature are very useful information for the detection of wildfires at night. The accuracy and reliability of combination data WSN and UAV are support largely impact to peat fire detection. Sensor node can provide constant monitoring by low power consumption during the fire season. And large scale, UAV can be verification used as provide monitoring in fully smoke conditions. VI.

CONCLUSION

We have an integration of system for the monitor of peat fires using wireless sensor network and the UAV, where a small fire is not detected by the satellite or in the dense smoke conditions. Collecting real-time data from WSN and UAV is the best strategy for monitoring peat-forest fire disasters. Monitoring wildfire system uses WSN containing the smart sensors to collect environment data such as temperature, humidity, and barometric pressure, and to deliver useful information to fire-patrols or remote monitors. One way to verify the location of wildfire is to use UAV to collect more accurate information. ACKNOWLEDGMENT The JST/JICA Project funded government supported this work.

by

the

Japanese

REFERENCES [1]

A. Usup, Y. H. Ashimoto, H. Takahashi, and H. Hayasaka, “Combustion and thermal characteristics of peat fire in tropical peatland in Central Kalimantan , Indonesia,” Biomass, vol. 14, no. 1, pp. 1-19, 2004. [2] M. E. Harrison and S. E. Page, “The global impact of Indonesian forest fires,” Biologist, vol. 56, no. 3, pp. 156-163, 2009. [3] J. Yick, B. Mukherjee, and D. Ghosal, “Wireless sensor network survey,” Computer Networks, vol. 52, no. 12, pp. 2292-2330, Aug. 2008. [4] M. Hefeeda, “Forest Fire Modeling and Early Detection using Wireless Sensor Networks,” vol. V, 2003. [5] H. Segah, H. Tani, and T. Hirano, “Detection of fire impact and vegetation recovery over tropical peat swamp forest by satellite data and ground-based NDVI instrument,” International Journal of Remote Sensing, vol. 31, no. 20, pp. 5297-5314, Oct. 2010. [6] MTS420/400 Environment sensor board. http://www.memsic.com /support/documentation.html [7] TinyOS: a component-based OS for the networked sensor network. http://tinyos.net [8] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, “Wireless sensor networks : a survey,” Computer Networks, vol. 38, pp. 393-422, 2002. [9] T. Stoyanova, F. Kerasiotis, A. Prayati, and G. Papadopoulos, “A Practical RF Propagation Model for Wireless Network Sensors,” 2009 Third International Conference on Sensor Technologies and Applications, pp. 194-199, Jun. 2009. [10] A. Somov, “Wildfire safety with wireless sensor networks,” vol. 11, no. December, 2011. [11] M. Quaritsch, K. Kruggl, S. Bhattacharya, M. Shah, and B. Rinner, “Networked UAVs as aerial sensor network for disaster management applications,” Informationstechnik, pp. 56-63, 2010.


Data Mining Application to Reduce Dropout Rate of Engineering Students

Munindra Kumar Singh 1, Brijesh Kumar Bharadwaj 2, Saurabh Pal 3
1,3 Department of Computer Applications, VBS Purvanchal University, Jaunpur
2 Dr. R. M. L. Awadh University, Faizabad
1 [email protected], 2 [email protected], 3 [email protected]

Abstract— In the last two decades, the number of Higher Education Institutions (HEIs) has grown rapidly in India. Since most of these institutions are privately run, cut-throat competition has arisen among them in attracting students for admission. This is why institutions tend to focus on student strength rather than on the quality of education. This paper presents a data mining application that generates predictive models for engineering students' dropout management. Given the records of new incoming students, the predictive model can produce a short, accurate prediction list identifying the students who are most likely to need support from the student dropout program. The results show that the machine learning algorithm is able to establish an effective predictive model from the existing student dropout data.

Keywords: Educational Data Mining, Dropout Management, Predictive Models

I. INTRODUCTION

One of the biggest challenges that higher education faces is to reduce the student dropout rate. Student dropout is a challenging problem in higher education [1], and it is reported that about one fourth of students leave college after their first year [1-3]. Student dropout has become an indicator of academic performance and enrolment management. Recent study results show that intervention programs can have significant effects on dropout, especially in the first year. To utilize the limited resources of such intervention programs effectively, it is desirable to identify in advance the students who are most likely to need support. In this paper, we describe the experiments and results of applying a data mining technique to the students of the Institute of Engineering and Technology of VBS Purvanchal University, Jaunpur, to assist the student dropout program on campus. In our study, we apply a machine learning algorithm to analyse and extract information from existing student data and establish a predictive model. The predictive model is then used to identify, among new incoming first-year students, those who are most likely to benefit from the support of the student dropout program.

Data mining combines machine learning, statistics and visualization techniques to discover and extract knowledge. Educational Data Mining (EDM) carries out tasks such as prediction (classification, regression), clustering, relationship mining (association, correlation, sequential mining, and causal data mining), distillation of data for human judgment, and discovery with models [6]. Moreover, EDM can solve many problems in the educational domain. Data mining is the non-trivial extraction of implicit, previously unknown and potentially useful information from large amounts of data. It is used to predict future trends from the knowledge patterns.

The main objective of this paper is to use data mining methodologies to identify the students who are likely to drop out in their first year of engineering. In this research, the classification task is used to evaluate previous years' student dropout data and, since there are many approaches to data classification, the Bayesian classification method is used here. Information such as marks in High School, marks in Senior Secondary, the student's family position, etc., was collected from the student management system to predict the list of students who need special attention.


II. BAYESIAN CLASSIFICATION

The Naïve Bayes classifier technique is particularly suited to problems in which the dimensionality of the input is high. Despite its simplicity, Naive Bayes can often outperform more sophisticated classification methods. The Naïve Bayes model identifies the characteristics of dropout students; it shows the probability of each input attribute for the predictable state. A Naive Bayesian classifier [21] is a simple probabilistic classifier based on applying Bayes' theorem (from Bayesian statistics) with strong (naive) independence assumptions. Using Bayes' theorem we can write:
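The equation referred to here did not survive extraction; what follows is the standard form it points to, written for a class variable C (the Dropout outcome) and an attribute vector X = (x_1, ..., x_n), together with the naive conditional-independence factorization used by the classifier:

\[
P(C \mid X) = \frac{P(X \mid C)\,P(C)}{P(X)},
\qquad
P(C \mid x_1, \ldots, x_n) \;\propto\; P(C)\prod_{i=1}^{n} P(x_i \mid C).
\]

The class value (Yes or No) with the larger posterior is predicted for each student.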

We preferred the Naive Bayes implementation because:
• it is simple and is trained on the whole (weighted) training data;
• it offers protection against over-fitting on small subsets of the training data;
• the claim that boosting "never over-fits" could not be maintained;
• the resulting classifier, even if complex, can be determined reliably from a limited amount of data.


III.

BACKGROUND AND RELATED WORK

Tinto [2] developed the most popular model of retention studies. According to Tinto’s Model, withdrawal process depends on how students interact with the social and academic environment of the institution. To understand the factors influencing university student retention, Superby et. al. [9] used questionnaires to collect data including personal history of the student, implication of student behaviour and perceptions of the student. The authors applied different approaches such as decision tree, random forests, neural networks, and linear discriminate analysis to their questionnaires. However, possibly because of the small sample size, the prediction accuracy is not very good. A number of Open Distance Learning institutions have carried out dropout studies. Some notable studies have been undertaken by the British Open University (Ashby [10]; Kennedy & Powell [11]). Different models have been used by these researchers to describe the factors found to influence student achievement, course completion rates, and withdrawal, along with the relationships between variable factors. Yadav, Bharadwaj and Pal [12] conducted study on the student retention based by selecting 398 students from MCA course of VBS Purvanchal University, Jaunpur, India. By means of classification they show that student’s graduation stream and grade in graduation play important role in retention. Khan [13] conducted a performance study on 400 students comprising 200 boys and 200 girls selected from the senior secondary school of Aligarh Muslim University, Aligarh, India with a main objective to establish the prognostic value of different measures of cognition, personality and demographic variables for success at higher secondary level in science stream. The selection was based on cluster sampling technique in which the entire population of interest was divided into groups, or clusters, and a random sample of these clusters was selected for further analyses. It was found that girls with high socio-economic status had relatively higher academic achievement in science stream and boys with low socio-economic status had relatively higher academic achievement in general. Pandey and Pal [14] conducted study on the student performance based by selecting 60 students from a degree college of Dr. R. M. L. Awadh University, Faizabad, India. By means of association rule they find the interestingness of student in opting class teaching language. Ayesha, Mustafa, Sattar and Khan [15] describe the use of k-means clustering algorithm to predict student’s learning activities. The information generated after the implementation of data mining technique may be helpful for instructor as well as for students.


Bharadwaj and Pal [16] obtained the university students data like attendance, class test, seminar and assignment marks from the students’ previous database, to predict the performance at the end of the semester. Bray [17], in his study on private tutoring and its implications, observed that the percentage of students receiving private tutoring in India was relatively higher than in Malaysia, Singapore, Japan, China and Sri Lanka. It was also observed that there was an enhancement of academic performance with the intensity of private tutoring and this variation of intensity of private tutoring depends on the collective factor namely socio-economic conditions. Bhardwaj and Pal [18] conducted study on the student performance based by selecting 300 students from 5 different degree college conducting BCA (Bachelor of Computer Application) course of Dr. R. M. L. Awadh University, Faizabad, India. By means of Bayesian classification method on 17 attributes, it was found that the factors like students’ grade in senior secondary exam, living location, medium of teaching, mother’s qualification, students other habit, family annual income and student’s family status were highly correlated with the student academic performance. Yadav, Bharadwaj and Pal [19] obtained the university students data like attendance, class test, seminar and assignment marks from the students’ database, to predict the performance at the end of the semester using three algorithms ID3, C4.5 and CART and shows that CART is the best algorithm for classification of data. IV. DATA MINING PROCESS Knowing the reasons for dropout of student can help the teachers and administrators to take necessary actions so that the success percentage can be improved. Predicting the academic outcome of a student needs a lot of parameters to be considered. Prediction models that include all personal, social, psychological and other environmental variables are necessitated for the effective prediction of the performance of the students. A. Data Preparations The data set used in this study was obtained from VBS Purvanchal University, Jaunpur (Uttar Pradesh) on the sampling method for Institute of Engineering and Technology for session 2010. Initially size of the data is 165. B. Data selection and Transformation In this step only those fields were selected which were required for data mining. A few derived variables were selected. While some of the information for the variables was extracted from the database. All the predictor and response variables which were derived from the database are given in Table I for reference.


TABLE I. STUDENT RELATED VARIABLES

Variable | Description | Possible Values
Branch   | Student's branch | {CS, IT, ME}
Sex      | Student's sex | {Male, Female}
Cat      | Student's category | {Unreserved, OBC, SC, ST}
HSG      | Student's grade in High School | {O - 90%-100%, A - 80%-89%, B - 70%-79%, C - 60%-69%, D - 50%-59%, E - 40%-49%, F - below 40%}
SSG      | Student's grade in Senior Secondary | {O - 90%-100%, A - 80%-89%, B - 70%-79%, C - 60%-69%, D - 50%-59%, E - 40%-49%, F - below 40%}
Atype    | Admission type | {UPSEE, Direct}
Med      | Medium of teaching | {Hindi, English}
LLoc     | Living location of student | {Village, Town, Tahseel, District}
Hos      | Student lives in hostel or not | {Yes, No}
FSize    | Student's family size | {1, 2, 3, >3}
FStat    | Student's family status | {Joint, Individual}
FAIn     | Family annual income status | {BPL, poor, medium, high}
FQual    | Father's qualification | {no-education, elementary, secondary, UG, PG, Ph.D., NA}
MQual    | Mother's qualification | {no-education, elementary, secondary, UG, PG, Ph.D., NA}
FOcc     | Father's occupation | {Service, Business, Agriculture, Retired, NA}
MOcc     | Mother's occupation | {House-wife (HW), Service, Retired, NA}
Dropout  | Dropout: continues to enroll or not after one year | {Yes, No}

The domain values for some of the variables were defined for the present investigation as follows:

• Branch - The courses offered by VBS Purvanchal University, Jaunpur are Computer Science and Engineering (CSE), Information Technology (IT) and Mechanical Engineering (ME).
• Cat - From ancient times, Indians have been divided into many categories. These factors play a direct and indirect role in daily life, including the education of young people. The admission process in India also reserves different percentages of seats for different categories. In terms of social status, the Indian population is grouped into four categories: Unreserved, Other Backward Class (OBC), Scheduled Castes (SC) and Scheduled Tribes (ST). Possible values are Unreserved, OBC, SC and ST.
• HSG - Student's grade in High School education. Students in the state board appear for six subjects, each carrying 100 marks. Grades are assigned to all students using the following mapping: O - 90% to 100%, A - 80%-89%, B - 70%-79%, C - 60%-69%, D - 50%-59%, E - 40%-49%, and F - below 40%.
• SSG - Student's grade in Senior Secondary education. Students in the state board appear for five subjects, each carrying 100 marks. Grades are assigned to all students using the following mapping: O - 90% to 100%, A - 80%-89%, B - 70%-79%, C - 60%-69%, D - 50%-59%, E - 40%-49%, and F - below 40%.
• Atype - The admission type, which may be through the Uttar Pradesh State Entrance Examination (UPSEE) or direct admission through the university procedure.
• Med - This study covers only colleges of the Uttar Pradesh state of India, where the medium of instruction is Hindi or English.
• FSize - According to the population statistics of India, the average number of children in a family is 3.1. Therefore, the possible values are 1, 2, 3 or >3.
• Dropout - Dropout condition: whether the student continues or not after one year. Possible values are Yes if the student continues the study and No if the student dropped the study after one year.
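For reference, the nominal attributes and value sets of Table I map directly onto the header of an ARFF file such as the drop.arff mentioned in the next subsection. The sketch below is illustrative only: the authors' actual file is not shown in the paper, values containing spaces or special characters must be quoted in ARFF, and the data record is invented.

@relation dropout
@attribute Branch {CS, IT, ME}
@attribute Sex {Male, Female}
@attribute Cat {Unreserved, OBC, SC, ST}
@attribute HSG {O, A, B, C, D, E, F}
@attribute SSG {O, A, B, C, D, E, F}
@attribute Atype {UPSEE, Direct}
@attribute Med {Hindi, English}
@attribute LLoc {Village, Town, Tahseel, District}
@attribute Hos {Yes, No}
@attribute FSize {1, 2, 3, '>3'}
@attribute FStat {Joint, Individual}
@attribute FAIn {BPL, poor, medium, high}
@attribute FQual {no-education, elementary, secondary, UG, PG, 'Ph.D.', NA}
@attribute MQual {no-education, elementary, secondary, UG, PG, 'Ph.D.', NA}
@attribute FOcc {Service, Business, Agriculture, Retired, NA}
@attribute MOcc {HW, Service, Retired, NA}
@attribute Dropout {Yes, No}
@data
% one record per student; the line below is an invented example
ME, Male, OBC, C, E, Direct, Hindi, Village, No, 3, Joint, poor, elementary, elementary, Agriculture, HW, No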


C. Implementation of Mining Model
Weka is open source software that implements a large collection of machine learning algorithms and is widely used in data mining applications. From the above data, the file drop.arff was created and loaded into the WEKA Explorer. The Classify panel enables the user to apply classification and regression algorithms to the resulting dataset, to estimate the accuracy of the resulting predictive model, and to visualize erroneous predictions or the model itself. The algorithm used for classification is Naive Bayes. Under "Test options", 10-fold cross-validation is selected as the evaluation approach. Since there is no separate evaluation data set, this is necessary to get a reasonable idea of the accuracy of the generated model. The predictive model provides a way to predict whether a new student will continue to enroll or not after one year.

D. Results and Discussion
In the present study, those variables whose probability values were greater than 0.50 were given due consideration, and the highly influencing variables with high probability values are shown in Table II. These values are calculated for the dropout students, who dropped the study after the first year of engineering. These features were used for prediction model construction.
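The Explorer workflow described in subsection C can also be scripted with the Weka Java API. The sketch below is a minimal illustration, not the authors' code: it assumes drop.arff is in the working directory with Dropout as the last attribute, and the file new.arff of incoming students is hypothetical.

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DropoutModel {
    public static void main(String[] args) throws Exception {
        // Load the ARFF file and mark the Dropout attribute as the class.
        Instances data = new DataSource("drop.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // 10-fold cross-validation of a Naive Bayes classifier,
        // mirroring the "Test options" setting in the Explorer.
        NaiveBayes nb = new NaiveBayes();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(nb, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
        System.out.println(eval.toClassDetailsString()); // per-class precision/recall
        System.out.println(eval.toMatrixString());       // confusion matrix

        // Train on all records, then score incoming students (new.arff is hypothetical)
        // to rank them by predicted risk of not continuing.
        nb.buildClassifier(data);
        Instances incoming = new DataSource("new.arff").getDataSet();
        incoming.setClassIndex(incoming.numAttributes() - 1);
        for (int i = 0; i < incoming.numInstances(); i++) {
            double[] p = nb.distributionForInstance(incoming.instance(i));
            // With the class declared as {Yes, No}, index 1 is the "No" (did not continue) class.
            System.out.printf("student %d: P(No) = %.3f%n", i, p[1]);
        }
    }
}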


TABLE II. HIGH POTENTIAL VARIABLES

Variable | Value      | Probability
Sex      | Male       | 0.68
SSG      | E          | 0.6623
Atype    | Direct     | 0.60
Med      | Hindi      | 0.76
LLoc     | Village    | 0.55
MQual    | Elementary | 0.50
MOcc     | Service    | 0.52

Table II indicates that the most influential attributes were Med, Sex, SSG, Atype, LLoc, MOcc and MQual. Hints can be extracted from the table: for example, students with Med = 'Hindi' are more likely not to continue their study, and when Sex is taken into consideration, male students have a greater possibility of discontinuing their study than female students. The classification matrix, which compares the actual and predicted classifications, is presented in Table III. In addition, the classification accuracy for the two outcome classes is presented.

TABLE III. CLASSIFICATION MATRIX OF THE NAÏVE BAYES PREDICTION MODEL

              Predicted
Actual        Yes     No
Yes           121     10
No             11     23

The class-wise accuracy is shown in Table IV.


TABLE IV. CLASS-WISE ACCURACY

Dropout | Precision | Recall
Yes     | 0.917     | 0.924
No      | 0.697     | 0.676
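As a consistency check, the entries of Table IV follow directly from the confusion matrix in Table III, and the implied overall accuracy matches the 165 records of the data set:

precision(Yes) = 121 / (121 + 11) = 0.917,  recall(Yes) = 121 / (121 + 10) = 0.924
precision(No)  = 23 / (23 + 10)  = 0.697,  recall(No)  = 23 / (23 + 11)  = 0.676
overall accuracy = (121 + 23) / 165 = 0.873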

The empirical results show that we can produce a short but accurate prediction list for the student dropout purpose by applying the Naïve Bayes classification model to the records of incoming new students. This study will also help to identify those students who need special attention in order to reduce the dropout rate.

V. CONCLUSIONS

Predicting students' dropout rate is of great concern to the higher education system. Recently, data mining has been used in higher educational systems to predict the list of students who are going to drop out of their study. In this paper, we have introduced the Naïve Bayes classification algorithm for students' dropout management in a higher education system, to predict the relevancy of an incoming item from a new data set to the already existing data sets.

REFERENCES
[1] V. Tinto, "Research and practice of student retention: What next?", College Student Retention: Research, Theory, and Practice, 8(1), 1-20, 2006.
[2] V. Tinto, "Leaving College: Rethinking the causes and cures of student attrition", Chicago: University of Chicago Press, 1993.
[3] V. Tinto, "Dropout from Higher Education: A theoretical synthesis of recent research", Review of Educational Research, 45, 89-125, 1975.
[4] J. Han and M. Kamber, "Data Mining: Concepts and Techniques", Morgan Kaufmann, 2000.
[5] I. H. Witten, E. Frank, M. A. Hall, "Data Mining: Practical Machine Learning Tools and Techniques", 3rd ed., Morgan Kaufmann, 2011.
[6] C. Romero and S. Ventura, "Educational Data Mining: A survey from 1995 to 2005", Expert Systems with Applications, 33 (2007), 135-146.
[7] Y. Freund and L. Mason, "The Alternating Decision Tree Algorithm", Proceedings of the 16th International Conference on Machine Learning, pp. 124-133, 1999.
[8] B. Pfahringer, G. Holmes and R. Kirkby, "Optimizing the Induction of Alternating Decision Trees", Proceedings of the Fifth Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, 2001, pp. 477-487.
[9] J. F. Superby, J. P. Vandamme, N. Meskens, "Determination of factors influencing the achievement of the first-year university students using data mining methods", Workshop on Educational Data Mining, 2006.
[10] A. Ashby, "Monitoring Student Retention in the Open University: Definition, measurement, interpretation and action", Open Learning, 19(1), 65-78, 2004.
[11] D. Kennedy and R. Powell, "Student progress and withdrawal in the Open University", Teaching at a Distance, 7, 61-78, 1976.
[12] S. K. Yadav, B. K. Bharadwaj and S. Pal, "Mining Educational Data to Predict Student's Retention: A Comparative Study", International Journal of Computer Science and Information Security (IJCSIS), Vol. 10, No. 2, Feb. 2012, pp. 113-117.
[13] Z. N. Khan, "Scholastic achievement of higher secondary students in science stream", Journal of Social Sciences, Vol. 1, No. 2, pp. 84-87, 2005.
[14] U. K. Pandey and S. Pal, "A Data mining view on class room teaching language", International Journal of Computer Science Issues (IJCSI), Vol. 8, Issue 2, pp. 277-282, ISSN: 1694-0814, 2011.
[15] S. Ayesha, T. Mustafa, A. R. Sattar, M. I. Khan, "Data mining model for higher education system", European Journal of Scientific Research, Vol. 43, No. 1, pp. 24-29, 2010.
[16] M. Bray, "The shadow education system: private tutoring and its implications for planners", 2nd ed., UNESCO, Paris, France, 2007.
[17] B. K. Bharadwaj and S. Pal, "Mining Educational Data to Analyze Students' Performance", International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 2, No. 6, pp. 63-69, 2011.
[18] S. K. Yadav, B. K. Bharadwaj and S. Pal, "Data Mining Applications: A comparative study for Predicting Student's Performance", International Journal of Innovative Technology and Creative Engineering (IJITCE), Vol. 1, No. 12, pp. 13-19, 2011.


Usability Standard and Mobile Phone Usage Investigation for the Elderly

V. Chongsuphajaisiddhi, V. Vanijja and O. Chotchuang
School of Information Technology, King Mongkut's University of Technology Thonburi, Tungkru, Bangkok, Thailand
Emails: {vithida, vachee, orachasri}@sit.kmutt.ac.th

Abstract— In recent years several countries have been concerned about their aging society due to the worldwide trend of a rapidly increasing aging population. The increasing number of elderly people has influenced the social structure of the world such that it is now an aging society. At the same time mobile phones have become an integral part of elderly life because of their need to connect to every dimension of daily life to fulfill their basic requirements, but they have physical limitations when using mobile applications which are designed for general users. In this paper we have studied several usability standards and investigated the limitations of the elderly in their mobile application usage in order to identify issues which mobile application developers should be concerned. The model proposes conceptual usability to support the limitations which the elderly face when using mobile phone applications and leads to an improved mobile application developing process.

SKRQHV DUH GLIILFXOW WR VHH LW ZRXOG EH SRVVLEOH UHDVRQ DJLQJ SHRSOH DYRLG XVLQJ WKHP 0RUHRYHU VPDOO EXWWRQV VRPHWLPHV FDXVH SUREOHPV W\SLQJ DQG ILOOLQJ LQ LQIRUPDWLRQ WRJHWKHU ZLWK XQFOHDU LQVWUXFWLRQV DQG GLVRUGHUHGSRVLWLRQVRIPHQXLWHPVPDNHWKHHOGHUO\WLUHG RI XVLQJ WKH GHYLFHV 7KH LQVWUXFWLRQV ZKLFK DUH QRW XQGHUVWDQGDEOH PD\ FDXVH WKH HOGHUO\ WR EHFRPH FRQIXVHG +RZHYHU WKH HOGHUO\ VWLOO QHHG WR FRQQHFW WR HYHU\GLPHQVLRQRIOLYLQJOLNH\RXQJSHRSOHEHFDXVH WKH\ ZRXOG OLNH WR IXOILOO WKHLU EDVLF UHTXLUHPHQWVZKHWKHU KHDOWK FDUH UHFUHDWLRQ ILQDQFH VKRSSLQJ YHKLFOHV DQG VHUYLFHV $OVR WKH\ ZRXOG OLNH WR KDYH D GLIIHUHQW DSSURDFK WR XVLQJ D PRELOH SKRQH DSSOLFDWLRQ IURP WKH \RXQJSHRSOH>@>@ )XUWKHUPRUH WKH GHVLJQ RI PRELOH SKRQHV LV QRW DSSURSULDWH IRU HOGHUO\ EHFDXVH DJLQJ FKDQJLQJ IDFWRUV DUHQRWXVXDOO\FRQVLGHUHGLQWKHGHVLJQSKDVH>@>@ 7KH GHVLJQ JDS LQKHUHQW LQ PRELOH SKRQH DSSOLFDWLRQV .H\ZRUGVFRPSRQHQW(OGHUO\8VDELOLW\0RELOHSKRQH LV ULVN\ IRU ERWK XVHUV DQG PRELOH SKRQH PDQXIDFWXUHUV )RU UHGXFLQJ WKH JDS VL]H VSHFLDO DWWHQWLRQ WR LQVXUH WKH  SURSHUGHVLJQ DQGIXQFWLRQDO FRPSRQHQWVDUH VXLWDEOH IRU WKH XVHUV DQG IRU PDNLQJ WKH SURGXFWV EHQHILFLDO WR WKH ,  ,1752'8&7,21 XVHUVLVQHFHVVDU\ ,Q UHFHQW \HDUV SHRSOH DURXQG WKH ZRUOG KDYH EHFRPH 7KH DLP RI WKLV UHVHDUFK LV WR XQGHUVWDQG WKH HOGHUO\ PRUH DZDUH RI RXU DJLQJ VRFLHW\ GXH WR WKH LQFUHDVLQJ FKDQJH IDFWRUV WKDW LQIOXHQFH PRELOH SKRQH DSSOLFDWLRQ QXPEHURIWKHHOGHUO\SRSXODWLRQ>@>@>@>@7KLVFDQ XVDJH DQG DOVR SURSRVH D FRQFHSWXDO XVDELOLW\ PHWULF IRU EH VHHQ LQ WKH GHPRJUDSKLF VWUXFWXUH RI PDQ\ FRXQWULHV WKH HOGHUO\ ZKR XVH PRELOH SKRQH DSSOLFDWLRQV 7KH $FFRUGLQJ WR WKH VWDWLVWLFV RI WKH 8QLWHG 1DWLRQV 81  PRGHO ZKLFK ZH SURSRVH KDV EHHQ FRQVLGHUHG IURP D >@ WKHVH GHPRJUDSKLF FKDQJHV ZLOO DIIHFW WKH VRFLDO XVDELOLW\ VWDQGDUG >@ >@ >@ >@ DQG WKH DJLQJ VWUXFWXUH RI WKH ZRUOG LQWR DQ DJLQJ VRFLHW\ >@ IDFWRUVLQFOXGLQJWKHPDLQW\SHRIFKDQJHV LQWKHHOGHUO\ $FFRUGLQJ WR VHYHUDO VWXGLHV PDQ\ FRXQWULHV KDYH SK\VLFDO FKDQJHV VHQVRU\ FKDQJHV DQG WKH FRJQLWLYH SUHSDUHGDQGDOORFDWHGDORWRIWKHLUEXGJHWDQGUHVRXUFHV FKDQJHV>@>@>@>@ WR WKHLU DJLQJ VRFLHW\ GHYHORSPHQW SODQV 7KLV KDV EHHQ 7KLVSDSHULVRUJDQL]HGDVIROORZV>6HFWLRQ@GHILQHV LQGLFDWHG LQ VHYHUDO ZRUNV >@ >@ >@ (QFRXUDJLQJ WKH WKHHOGHUO\DQGVXPPDUL]HV HOGHUO\FKDQJHV 7KLV VHFWLRQ XVH RI WHFKQRORJ\ LV RQH RI WKH ZD\V WR LPSURYH WKH DOVRDGGUHVVHVWKHTXHVWLRQRIZK\PRELOHSKRQHVZLOOEH TXDOLW\RIOLIHRIWKHHOGHUO\>@>@ DQ HVVHQWLDO SDUW RI OLYLQJ DQG DOVR GHVFULEHV HOGHUO\ $W SUHVHQW ZH FDQQRW GHQ\ WKDW PDQ\ WHFKQRORJLHV SUREOHPV >6HFWLRQ @ SURYLGHV WKH XVDELOLW\ GHILQLWLRQ KDYH EHFRPH DQ HVVHQWLDO SDUW RI WRGD\¶V OLIH HVSHFLDOO\ 7KH VHFWLRQ VWDUWV ZLWK D XVDELOLW\ GHILQLWLRQ DQG PRELOH SKRQHV EHFDXVH WKH\ PDNH OLYLQJ HDVLHU DQG FRQWLQXHV ZLWK XVDELOLW\ IDFWRUV IURP VHYHUDO UHVHDUFKHUV EULGJH WKH GLVWDQFHV 0RUHRYHU PRELOH SKRQHV FDQ DQG FDWHJRUL]HV GXSOLFDWHG XVDELOLW\ IDFWRUV >6HFWLRQ @ LPSURYH WKH KHDOWK FRPPXQLFDWLRQ OHLVXUH DQG SURSRVHV DSSURSULDWH PRELOH SKRQH DSSOLFDWLRQV IRU WKH HQYLURQPHQW RI HOGHUO\ SHRSOH >@>@ >@ >@ +RZHYHU HOGHUO\  >6HFWLRQ @ FRQFOXGHV DOO RI WKH UHVHDUFK PRELOH SKRQHV XVDJH LQ HOGHUO\ SHRSOH RI DJH  DQG GLVFXVVHG DQG DOVR H[SRXQGV PRELOH SKRQH DGRSWLRQ RI DERYH LV RQO\  EDVHG RQ WKH ZRUOG¶V SRSXODWLRQ WKHHOGHUO\ VWDWLVWLFV RI WKRVH ZKR RZQ D PRELOH SKRQH  >@ 7KH UHDVRQ WKDW DGRSWLRQ RI PRELOH SKRQH LQ WKH HOGHUO\ LV YHU\ORZLVEHFDXVHWKHLQLWLDOXVH RIWKHPRELOHSKRQH LV YHU\ GLIILFXOW >@ >@ 7KH VPDOO VFUHHQV RQ PRELOH


II.

Yogyakarta, 12 July 2012

ELDERLY AND THEIR PROBLEMS

,QWKLVVHFWLRQWKHWHUPHOGHUO\LVGHILQHGE\WKHGHWDLOV RI WKHLU SK\VLFDO FKDQJHV DQG PHQWDO DELOLWLHV $OVR WKH VHFWLRQZLOOGLVFXVVKRZ PRELOHSKRQHV FDQEH LPSURYHG IRUWKHHOGHUO\OLIHVW\OH 1RZDGD\VWKHUHDUHVHYHUDOVWXGLHVFRQFHUQHGZLWKWKH GDLO\ OLIH RI SHRSOH DV WKH\ DJH :KHQ SHRSOH JHW ROGHU WKH\ FKDQJH LQ D P\ULDG RI ZD\V ERWK ELRORJLFDOO\ DQG SV\FKRORJLFDOO\>@>@>@>@ 7R GHILQH WKH WHUP HOGHUO\ VHYHUDO VWXGLHV FRQVLGHU LW EDVHG RQ DJH DQG SK\VLFDO FKDQJHV >@ >@ $JH LV D VXUURJDWH YDULDEOH ZKLFK ORRVHO\ SUHGLFWV WKH DPRXQW RI HOGHUO\ GHFOLQH >@ $OWKRXJK FXUUHQWO\ WKHUH LV QR 8QLWHG1DWLRQV VWDQGDUGQXPHULFDO FULWHULRQ WKH 81 KDV DJUHHG WKDW  \HDUV DQG DERYH UHIHUV WR WKH HOGHUO\ SRSXODWLRQ 6HYHUDO ZHVWHUQ FRXQWULHV KDYH FKRVHQ WKH FKURQRORJLFDO DJH RI  \HDUV >@ ZKHUHDV HOGHUO\ SK\VLFDOFKDQJHVRIWKHERG\FDQEHVHHQDVVNLQFKDQJHV KDLU FRORU FKDQJHV DQG LQFOXGHG GHWHULRUDWLRQ VWUHQJWK DQGGHFUHDVHGGH[WHULW\ 6HYHUDO VWXGLHV KDYH FDWHJRUL]HG WKH HOGHUO\ FKDQJH SDWWHUQ LQWR WKUHH W\SHV ZKLFK DUH SK\VLFDO FKDQJHV VHQVRU\ FKDQJHV DQG FRJQLWLYH FKDQJHV >@ >@ >@ >@>@ $ 3K\VLFDO&KDQJHV 7KH PRWRU V\VWHP LV SDUW RI WKH FHQWUDO QHUYRXV V\VWHP ZKLFK FRQWUROV DQG GLUHFWV PRYHPHQW  3K\VLFDO FKDQJHV LQ WKH FHQWUDO QHUYRXV V\VWHP DIIHFW WR D ERG\ PRWRU IXQFWLRQ 7KLV SUREOHP LQIOXHQFHV WR RSHUDWH WKH DFWLYLW\ GHFOLQHG ,W LV LQVLGH DIIHFWHG LQ ERQH DQG PXVFOH WKDW FDXVHV IURP WKH SDUW RI WKH QHUYRXV V\VWHP WKDW FRQWURO DQG UHJXODWHV WKH PRYHPHQWV KDYH FKDQJHG 0RUHRYHU SK\VLFDO FKDQJHV LQFOXGH WKH FRQQHFWLQJ OLJDPHQWV FKDQJLQJ GHWHULRUDWLQJ DQG MRLQWV VWLIIHQLQJ EHWZHHQ ERQHV 2OGHU SHRSOH PD\ JLYH XS VRPH HQMR\DEOH DFWLYLWLHVDVWKH\ORVHWKHLUHODVWLFLW\DQGIHHOPXVFOHDQG MRLQWSDLQ % 6HQVRU\&KDQJHV 7KH VHQVRU\ V\VWHP LV WKH SDUW RI WKH QHUYRXV V\VWHP FRQVLVWLQJ RI VHQVRU\ UHFHSWRUV WKDW UHFHLYH VWLPXOL IURP WKH LQWHUQDO DQG H[WHUQDO HQYLURQPHQW 1HXUDO SDWKZD\V WKHQWUDQVPLWVWLPXOLLQIRUPDWLRQ WRWKHEUDLQDQGVHYHUDO DUHDV RI WKH EUDLQ ZKLFK SURFHVV WKH LQIRUPDWLRQ 7KH LQIRUPDWLRQ LV FDOOHG VHQVRU\ LQIRUPDWLRQ DQG LW PD\ RU PD\ QRW OHDG WR FRQVFLRXV DZDUHQHVV ,I WKH SHUVRQ SHUFHLYHV LW WKHQ LW FDQ EH FDOOHG D VHQVDWLRQ :KHQ DJLQJSHRSOHPD\KDYHSUREOHPVZLWKWKHLUSHUFHSWLRQV 7KH VHQVRU\ GHFOLQH EHFRPHV D SUREOHP GXH WR SHUFHSWLRQ FKDQJHV 7KH VHQVRU\ ORVV OHDGV WR GHWHULRUDWLRQ DQG GLIILFXOW\ RI XQGHUVWDQGLQJ LQ VHQVRU\ LQIRUPDWLRQSURFHVVLQJ,PSRUWDQWSHUFHSWLRQFKDQJHVDUH YLVLRQDQGDXGLWRU\DELOLW\   9LVLRQ'HFOLQHG 9LVLRQ GHFOLQH UDQJHV IURP D PLQRU ORVV WR FRPSOHWH EOLQGQHVV7RJHWKHUZLWKDJLQJWKHGHWHULRUDWLRQRIYLVXDO SHUFHSWLRQ LV D FDXVH RI GLPLQLVKHG YLVXDO HIILFLHQF\ 2FXODU GHWHULRUDWLRQ UHGXFHV WKH H\HVLJKW DQG UHVXOWV LQ SDUWLDOWRWRWDOYLVXDOLPSDLUPHQW7KHHIIHFWVRQH\HVLJKW


EHFDXVH RI DJLQJ FDQ EH FDWHJRUL]HG LQWR VHYHUDO W\SHV 6RPHRIWKHPDUHPHQWLRQHGEHORZ x 9LVXDO DFXLW\ 9LVXDO DFXLW\ QRUPDOO\ VWDUWV GHFUHDVLQJDIWHUWKHDJH RI\HDUV DQGLV GHILQHGDVWKH DELOLW\ WR GLIIHUHQWLDWH REMHFWV 0RVW HOGHUO\ QHHG WKUHH WLPHV PRUH OLJKW WR UHDFK WKH YLVXDO OHYHO RI DQ DYHUDJH \RXQJPDQ x &RQWUDVW VHQVLWLYLW\ WKH GHFUHDVLQJ RI FRQWUDVW VHQVLWLYLW\ UHVXOWV LQ UHGXFLQJ WKH DELOLW\ WR GLIIHUHQWLDWH EHWZHHQWKH GDUN DQG OLJKW ,Q IDFWUHGXFWLRQ RI FRQWUDVW VHQVLWLYLW\DIIHFWVSHRSOHEHWZHHQDQG\HDUVRIDJH KRZHYHU WKH QRUPDO GHWHULRUDWLRQ RI FRQWUDVW VHQVLWLYLW\ EHJLQVDW\HDUVRIDJHDQGHQGVDWWKHDJHRI x )RFXV RQ REMHFWVWKH DELOLW\ WRIRFXV RQ REMHFWV VWDUWV GHFUHDVLQJ IURP DQ\ DJH EHWZHHQ  WR  \HDUV ,Q WKLVFDVHWKHSHUVRQVXIIHUVEHFDXVHWKHIRFXVSRLQWLVQRW RQ WKH UHWLQD VR WKH\ FDQQRW IRFXV RQ HLWKHU QHDU RU IDU REMHFWVGHSHQGLQJRQWKHLUH\HVLJKW¶VIRFDOOHQJWK )RU YLVXDOL]DWLRQ SUREOHPV HIIHFWV RI JODUH DIWHUPDWKVDUHLQFUHDVHGGXHWRVFDWWHUHGOLJKWLQWKHH\H PRUHRYHUOHQVRSDFLW\LVLQFUHDVHG)RUDJLQJSHRSOHWKH DPRXQW RI OLJKW UHTXLUHG LV DERXW WKUHH WLPHV PRUH IRU DFXLW\ RI YLVLRQ 7KHUHIRUH WKH JODUH SURVSHFW PXVW EH FRQVLGHUHGLQGHVLJQ   $XGLWRU\GHFOLQHG $V SHRSOH DJH WKHLU DXGLWRU\ DELOLW\ GHFOLQHV VR OLVWHQLQJ FDSDELOLWLHV GHWHULRUDWH ZLWK DJH 7KLV KHDULQJ LPSDLUPHQW FDQ DIIHFW WKH DELOLW\ WR GLVWLQJXLVK VRXQG IUHTXHQFLHV DQG FDXVH KHDULQJ ORVV :KHQ SHRSOH UHDFK \HDUV RI DJH PRVW RI WKHP KDYH KHDULQJ GHWHULRUDWLRQ DQG PD\ QRW EH FDSDEOH RI GLVFULPLQDWLQJ VRXQGV RU ZRUGVZKHQGLVWXUEHGE\HQYLURQPHQWDOQRLVH>@ & &RJQLWLYH&KDQJHV &RJQLWLYH V\VWHPV DUH V\VWHPV WKDW LQFRUSRUDWH KXPDQ WKLQNLQJ SURFHVVLQJ LQIRUPDWLRQ XVH RI PHPRU\ DQG EHKDYLRU *HQHUDOO\ KXPDQV PDNH GHFLVLRQV EDVHG RQ UHDVRQLQJ DQG ORJLF &RJQLWLYH FKDQJHV DUH LQYROYHG LQWKHGHWHULRUDWLRQRIWKH SHUVRQ¶VPHQWDOVWDWH7KLVFDQ PDQLIHVWLWVHOILQD QXPEHURI ZD\VVXFKDVLQDSSURSULDWH EHKDYLRU GLVWUDFWHGQHVV IRUJHWIXOQHVV RU FRQIXVLRQ 0RUHRYHU WKH FRJQLWLYH FKDQJHV DUH DOVR UHODWHG WR HOGHUO\FKDQJHVLQODQJXDJHDQGYRFDEXODU\>@>@  ,,, 02%,/(3+21($33/,&$7,21$1' 7+( (/'(5/< 7KH SUHYLRXV VHFWLRQ GHVFULEHG WKH ZRUOG¶V FKDQJLQJ GHPRJUDSKLFVGXHWRWKHLQFUHDVLQJQXPEHURIWKHHOGHUO\ SRSXODWLRQ >@ >@ >@ >@  :KHQ FRQVLGHULQJ PRELOH GHYLFHVHVSHFLDOO\VPDUWSKRQHGHYLFHVLWLVUHDVRQDEOHWR VXJJHVWWKDW PRGHUQ LQIRUPDWLRQ WHFKQRORJ\ SURGXFWV DUH FKDQJLQJ SHRSOH OLIHVW\OHV 0RELOH SKRQHV PDNH OLYLQJ HDVLHU EHFDXVH PRELOH SKRQHV DUH QRW RQO\ XVHG IRU FRPPXQLFDWLRQ WKH\ DOVR DOORZ SHRSOH WR PRYH DQ\ZKHUHVWD\ FRQQHFWHG DOOWKHWLPH DQGSURYLGH PDQ\ RWKHUIXQFWLRQDOLWLHV 6HYHUDO ZRUNV LQGLFDWHG WKDW HQFRXUDJLQJ WKH XVH RI PRELOH WHFKQRORJ\ LV RQH RI WKH ZD\V WR LPSURYH WKH


TXDOLW\ RI OLIH IRU HOGHUO\ SHRSOH 0RELOH SKRQHV FDQ LPSURYH WKHLU KHDOWK FRPPXQLFDWLRQ OHLVXUH DQG HQYLURQPHQW RI HOGHUO\ SHRSOH >@ >@ >@ >@ +RZHYHU RQO\  RI SHRSOH DJHG  DQG DERYH RZQ D PRELOH SKRQH >@ EHFDXVH PDQ\ RI WKH PRELOH DSSOLFDWLRQV GR QRWFDWHUWRWKHQHHGVRIROGHUXVHUV7KHHOGHUO\XVHUVDUH IDFLQJVHYHUDOSUREOHPV ZLWKFXUUHQW PRELOHSKRQHV VXFK DV WRR PDQ\ FRPSOH[ IXQFWLRQV WLQ\ EXWWRQV VPDOO GLVSOD\VDQGGLIILFXOWNH\SDGV 0RUHRYHUWKHLU FRJQLWLYH FRPSOH[LWLHV VORZ GRZQ WKH UHVSRQVH WLPH RI WKH HOGHUO\ EHFDXVH RI WKH GHFOLQH LQ ORQJ WHUP PHPRU\ ZLWK DJH >@  $OVR WKH HOGHUO\ FDQ EH H[SHFWHG WR KDYH GLIIHUHQW UHTXLUHPHQWV IRU PRELOH SKRQH DSSOLFDWLRQV WKDQ WKH \RXQJHUSHRSOH 7KXV LQVWHDG RI IRFXVLQJ RQ VSHFLDO PRELOH GHYLFHV WKLV SDSHU WULHV WR WDNH WKH SHUVSHFWLYH RI LGHQWLI\LQJ KRZWRGHYHORSPRELOHSKRQHDSSOLFDWLRQVIRUWKHSRSXODU VPDUWSKRQHSODWIRUPVWRPHHWWKHQHHGRIWKHHOGHUO\  ,9

USABILITY THEORETICAL BACKGROUND

 7KLV VHFWLRQ SUHVHQWV WKH GHILQLWLRQV RI XVDELOLW\ DV

SURYLGHGE\ VHYHUDODXWKRUV 6KDFNHOZDV RQHRI WKH ILUVW UHVHDUFKHUV ZKR VWXGLHG WKH ILHOG RI XVDELOLW\ KH FRQVLGHUHGSURGXFWDFFHSWDQFHDVWKHKLJKHVWFRQFHSWDQG XVDELOLW\ DV D SURSHUW\ RI WKH V\VWHP WKDW ZDV UHODWHG WR WKH XVHUV DQG FRXOG DOVR EH PHDVXUHG 7KLV LV KLV GHILQLWLRQ RI XVDELOLW\ ³WKH FDSDELOLW\ LQ KXPDQ IXQFWLRQDO WHUPV WR EH XVHG HDVLO\ DQG HIIHFWLYHO\ E\ WKH VSHFLILHGUDQJHRIXVHUVJLYHQVSHFLILHGWUDLQLQJDQGXVHU VXSSRUWWR IXOILOO WKH VSHFLILHGUDQJH RIWDVNV ZLWKLQ WKH VSHFLILHGUDQJHRIHQYLURQPHQWDOVFHQDULRV´>@ 6KDFNHO SURSRVHG DQ DSSURDFK WR GHILQH XVDELOLW\ E\ IRFXVLQJ RQ WKH SHUFHSWLRQ RI WKH SURGXFW DQG UHJDUGLQJ RQ WKH SURGXFW DFFHSWDQFH DV WKH KLJKHVW OHYHO RI WKH XVDELOLW\ FRQFHSW +RZHYHU 6KDFNHO DFNQRZOHGJHG WKDW WKLV GHILQLWLRQ ZDV VWLOO DPELJXRXV DQG ZHQW RQ WR SURYLGHDVHWRIXVDELOLW\FULWHULD7KHVHDUH (IIHFWLYHQHVV/HYHO RILQWHUDFWLRQ LQ WHUPVRI VSHHG DQGHUURUV /HDUQDELOLW\/HYHORIOHDUQLQJQHHGHGWRDFFRPSOLVK DWDVN )OH[LELOLW\/HYHORIDGDSWDWLRQWRYDULRXVWDVNV $WWLWXGH/HYHORIXVHUVDWLVIDFWLRQZLWKWKHV\VWHP  6KQHLGHUPDQKDVDOVRGHVFULEHGWKHXVDELOLW\HYDOXDWLRQ DWWULEXWHV>@7KHVHILYHPHDVXUDEOHKXPDQIDFWRUVDUH  7LPH WR OHDUQ KRZ ORQJ GRHV LW WDNH IRU W\SLFDO PHPEHUV RI WKH XVHU FRPPXQLW\ WR OHDUQ KRZ WR XVH WKH FRPPDQGVUHOHYDQWWRDVHWRIWDVNV" 6SHHGRISHUIRUPDQFHKRZORQJGRHVLWWDNHWRFDUU\ RXWWKHEHQFKPDUNVHWRIWDVNV" 5DWHRIHUURUVE\ XVHUV+RZPDQ\DQGZKDWNLQGRI HUURUV DUH PDGH LQ FDUU\LQJ RXW WKH EHQFKPDUN VHW RI WDVNV"$OWKRXJKWLPHWRPDNHDQGFRUUHFWHUURUVPLJKWEH LQFRUSRUDWHG LQWRWKHVSHHG RISHUIRUPDQFHHUURU PDNLQJ LVVXFKDFULWLFDOFRPSRQHQWRIV\VWHPXVDJH  5HWHQWLRQ RYHU WLPH KRZ ZHOO GR XVHUV PDLQWDLQ WKHLU NQRZOHGJH DIWHU DQ KRXU D GD\ RU D ZHHN"


5HWHQWLRQ PD\ EH OLQNHG FORVHO\ WR WLPH WR OHDUQ DQG IUHTXHQF\RIXVHSOD\VDQLPSRUWDQWUROH  6XEMHFWLYH VDWLVIDFWLRQ KRZ PXFK GLG XVHUV OLNH XVLQJ YDULRXV DVSHFWV RI WKH V\VWHP" 7KH DQVZHU FDQ EH DVFHUWDLQHG E\ LQWHUYLHZ RU E\ ZULWWHQ VXUYH\V WKDW LQFOXGH VDWLVIDFWLRQ VFDOHV DQG VSDFH IRU IUHHIURP FRPPHQWV  1LHOVHQ GHVFULEHG LQ >@ XVDELOLW\ DV D TXDOLW\ DWWULEXWHWKDW DVVHVVHVKRZHDV\ XVHU LQWHUIDFHVDUHWR XVH DQG DOVR UHIHUUHG WR PHWKRGV IRU LPSURYLQJ HDVHRIXVH GXULQJ WKH GHVLJQ SURFHVV 0HDVXUHPHQW RI XVDELOLW\ LV HYDOXDWLQJ KRZ SHRSOH DFKLHYH WKHLU DVVLJQPHQWV DQG SXUSRVHV ZLWK YHU\ OLWWOH UHVSRQVH WLPH DQG HDVH RI XVH ZKHQXWLOL]LQJDSURGXFW RU V\VWHP,W LV DSSOLHGWR HYHU\ SDUW DVSHFW DQG IHDWXUH RI WKH SURGXFW ZLWK ZKLFK WKH XVHUV LQWHUDFW )RU H[DPSOH VRIWZDUH LFRQV PHQXV KDUGZDUH HWF +H DOVR FRQVLGHUV IDFWRUV ZKLFK PD\ LQIOXHQFH SURGXFW DFFHSWDQFH 1LHOVHQ GRHV QRW SURYLGH DQ\ GHVFULSWLYH GHILQLWLRQ RI XVDELOLW\ KRZHYHU KH SURYLGHVRSHUDWLRQDOFULWHULDWR FOHDUO\GHILQHWKH FRQFHSW RIXVDELOLW\  /HDUQDELOLW\ $ELOLW\ WR UHDFK D UHDVRQDEOH OHYHO RI SHUIRUPDQFH  0HPRUDELOLW\ $ELOLW\ WR UHPHPEHU KRZ WR XVH D SURGXFW (IILFLHQF\7UDLQHGXVHUV¶OHYHORISHUIRUPDQFH 6DWLVIDFWLRQ 6XEMHFWLYH DVVHVVPHQW RI KRZ SOHDVXUDEOHLWLVWRXVH  (UURUV 1XPEHU RI HUURUV DELOLW\ WR UHFRYHU IURP HUURUVH[LVWHQFHRIVHULRXVHUURUV  ,62    LV DQ LQWHUQDWLRQDO VWDQGDUG IRU WKH HUJRQRPLF UHTXLUHPHQWV RI RIILFH ZRUN ZLWK YLVXDO GLVSOD\ WHUPLQDOV 7KLV VWDQGDUG GHILQHV XVDELOLW\ DV ³WKH H[WHQWWR ZKLFKD SURGXFWFDQ EHXVHG E\VSHFLILHG XVHUV WR DFKLHYH VSHFLILHG JRDOV ZLWK HIIHFWLYHQHVV HIILFLHQF\ DQG VDWLVIDFWLRQ LQ D VSHFLILHG FRQWH[W RI XVH´ >@ $GGLWLRQDOO\ ,62  FODVVLILHV WKH GLPHQVLRQV RI XVDELOLW\WRDFFRXQWIRUWKHGHILQLWLRQ  (IIHFWLYHQHVV WKH DFFXUDF\ DQG FRPSOHWHQHVV ZLWK ZKLFKXVHUVDFKLHYHJRDOV  (IILFLHQF\ WKH UHVRXUFHV H[SHQGHG LQ UHODWLRQ WR WKH DFFXUDF\DQGFRPSOHWHQHVV 6DWLVIDFWLRQWKHFRPIRUWDQGDFFHSWDELOLW\RIXVH  $V ZH PHQWLRQHG DERYH WKHUH DUH PDQ\ XVDELOLW\ VWDQGDUGV 0RUHRYHU WKHUH DUH RWKHU XVDELOLW\ VWDQGDUGV DQG XVDELOLW\ JXLGHOLQHV ZKLFK DUH QRW GHVFULEHG LQ WKLV SDSHU )URP WKH PDQ\ ZRUNV WKDW ZH VWXGLHG LW KDV EHFRPH REYLRXV PDQ\ XVDELOLW\ IDFWRUV KDYH EHHQ GXSOLFDWHG  8VDELOLW\ IDFWRUV FDQ EH FDWHJRUL]HG LQWR VL[ DWWULEXWHVZKLFKDUHUHSUHVHQWHGLQ7DEOH




WKH GHYHORSHU RXJKW WR FRQVLGHU ZKHWKHU WKH FKDQJHV DUH UDWLRQDORUQRW

7$%/(, 86$%,/,7<&5,7(5,$&$7(*25,(6  6KDFNHO  



x 

1LHOVHQ   

x

x



x

x



x

x



x

x



x

x

(/'(5/<&+$1*(6 

 7KLV VHFWLRQ SUHVHQWV WKH UHODWLRQ EHWZHHQ WKH HOGHUO\ FKDQJHVDQGXVDELOLW\IDFWRUV

)OH[LELOLW\

x

x



ZKLFK LV PDQLIHVWHG LQ WKH 6KDFNHO DQG ,62   VWDQGDUGV 6RIWZDUH TXDOLW\ DVVXUDQFH DFFXUDF\ DQG FRPSOHWHQHVV RI VRIWZDUH IXQFWLRQV FDQ LQGLFDWH HIIHFWLYHQHVV EXW FDSDFLW\ RI VXSSRUWLQJ WDVN DFKLHYHPHQW ZLWKWKHOHDVW SRVVLEOHUHVSRQVH WLPHFDQ EH XVHGIRUHIILFLHQF\PHDVXUHPHQW ,Q RWKHU ZRUGV HIILFLHQF\ FDQ EH PHDVXUHG E\ WKH DPRXQW RI WLPH WKDW XVHUV WDNH WR DFKLHYH D JRDO  7KH VSHHG RI WLPH FDQ DOVR EH FDWHJRUL]HG DV HIILFLHQF\ >@ >@ >@ 7KH WKLUG IDFWRU LV OHDUQDELOLW\ DQ LPSRUWDQW IDFWRU EHFDXVH LW PHDQV WKH IXQFWLRQDOLW\ RI WKH PRELOH SKRQHLVHDV\WROHDUQ$XWKRUVWKDWGHVFULEHGOHDUQDELOWL\ DVZHOO DV RWKHU DWWULEXWHVDUH 6KDFNHO 6KQHLGHUPDQ DQG 1LHOVHQ ,Q WKLV ZRUN ZH FDQ FDWHJRUL]HG OHDUQDELOLW\ WLPH WR OHDUQ UHWHQWLRQ RYHU WLPH PHPRUDELOLW\ LQ WKH VDPH FDWHJRU\ 7KH QH[W IDFWRU ZKLFK LV FDWHJRUL]HG LQ 7DEOH LV WKH QXPEHU RI HUURUV 7KH DXWKRUV WKDW GHVFULEHG WKLV IDFWRU DUH 6KQHLGHUPDQ ZKR ZDV FRQFHUQHG ZLWK WKH LPSOLFDWLRQ LQ QXPEHU RI HUURUV DQG 1LHOVHQ DOVR GHVFULEHG WKH DWWULEXWH DV WKH UDWH RI HUURUV E\XVHUV7KHODVWIDFWRUWKDWLVFDWHJRUL]HGLQWKLVZRUNLV LQYROYHGZLWK WKH XVDJH LV VXEMHFWLYH RSLQLRQ 7KH XVDJH VXEMHFWLYH RSLQLRQ FDQ EH PHDVXUHG LQWR WKH VDWLVIDFWLRQ VFRUH 7KLV IDFWRU LV DQ LPSRUWDQW XVDELOLW\ IDFWRU 6KDFNHO 6KQHLGHUPDQ 1LHOVHQ DQG WKH ,62   VWDQGDUG DOVR LQFOXGHG WKLV DWWULEXWH LQ WKHLU XVDELOLW\VWDQGDUGV )OH[LELOLW\ FDQQRW PHUJH LQWR RWKHU XVDELOLW\ IDFWRUV EHFDXVH EDVHG RQ SUHYLRXV ZRUN WKH PHDVXUHPHQW RI IOH[LELOLW\ZDVQRWLGHQWLILHG &RQVHTXHQWO\ WKH DLP RI WKLV SDSHU LV WR WDNH WKH SHUVSHFWLYH RI PRELOH DSSOLFDWLRQ GHYHORSPHQW IRU ZHOO NQRZQVPDUWSKRQHSODWIRUPVLQUHVSRQGLQJWR WKH HOGHUV¶ QHHGV UDWKHU WKDQ SD\LQJ DWWHQWLRQ WR VSHFLDO PRELOH GHYLFHV:KHQ GHVLJQLQJ VRPHWKLQJ QHZ IRU WKH GHYLFHV

( OGHUO\&KDQJHV  3K\VLFDO +DQGIXQFWLRQSHUIRUPDQFH GHFOLQHG  3HUFHSWLRQ 9LVLRQGHFOLQHG $XGLWRU\GHFOLQHG  &RJQLWLYH  &RJQLW LYH GHFOLQHG  

6DW L VID F W L R Q 

x

5D W H  RI  H UUR UV 

 $WWLWXGH6XEMHFWLYH VDWLVIDFWLRQ 6DWLVIDFWLRQ 

/H D UQ D E L O L W \ 



7$%/(,, 86$%,/,7<)$&7255(/$7('72(/'(5/<&+$1*(6 8 VDELOLW\IDFWRUV



(I IL F L H Q F \ 

x

 )URP 7DEOH ZH FDQ VXPPDUL]H WKH HIIHFWLYHQHVV


9 86$%,/,7<)$&72565(/$7('72

,62   



 /HDUQDELOLW\7LPHWR OHDUQ5HWHQWLRQRYHU WLPH0HPRUDELOLW\  1XPEHURIHUURUV 5DWHRIHUURUVE\ XVHUV

6KQHLGHUPDQ  



(I IH F W L Y H Q HVV 

 8VDELOLW\)DFWRU   (IIHFWLYHQHVV  (IILFLHQF\6SHHGRI SHUIRUPDQFH








































 

 

























 7DEOH  GLVSOD\V WKH UHODWLRQVKLS EHWZHHQ HOGHUO\ FKDQJHV DQG XVDELOLW\ IDFWRUV :KHQ GHVFULELQJ WKH XVDELOLW\IDFWRURI WKHHIIHFWLYHQHVVDQGHIILFLHQF\ DVSHFW PRVW RI XVDELOLW\ VWDQGDUGV PHQWLRQHG WKH HIIHFWLYHQHVV DVWKH DFFXUDF\ WRSHUIRUP WKH WDVNDQG WKH HIILFLHQF\ LV WKHWLPHWKDWWKHXVHUWDNHVWRFRPSOHWHWKHWDVN  (IIHFWLYHQHVV FDQ EH UHODWHG WR KDQG IXQFWLRQ SHUIRUPDQFH YLVLRQ GHFOLQH DXGLWRU\ GHFOLQH DQG FRJQLWLYHSHUIRUPDQFHGHFOLQH  (IILFLHQF\ FDQ EH UHODWHG WR KDQG IXQFWLRQ SHUIRUPDQFH YLVLRQ GHFOLQH DXGLWRU\ GHFOLQH DQG FRJQLWLYHSHUIRUPDQFHGHFOLQH  /HDUQDELOLW\ FDQ EH GLUHFWO\ UHODWHG WR FRJQLWLYH GHFOLQH EHFDXVH ZKHQ WKH HOGHUO\ XVH FRPSOH[ PRELOH SKRQHDSSOLFDWLRQVWKH\ZLOOXVHDJUHDWHU FRJQLWLYH ORDG WRFRPSOHWHWKHWDVN  5DWH RI HUURUV FDQ EH UHODWHG WR KDQG IXQFWLRQ SHUIRUPDQFH YLVLRQ GHFOLQH DXGLWRU\ GHFOLQH DQG FRJQLWLYHSHUIRUPDQFHGHFOLQH 6DWLVIDFWLRQLQYROYHGDOOHOGHUO\FKDQJHV VDWLVIDFWLRQ UHIHUVWRWKHVXEMHFWLYHUHVSRQVHIURPWKHXVHUDERXWWKHLU IHHOLQJVZKHQWKH\DUHXVLQJWKHVRIWZDUH  7R UHGXFH WKH FRJQLWLYH ORDGLQJ UHTXLUHV UHGXFLQJ WKH PHQX IXQFWLRQV FRPSOH[LW\ SURYLGLQJ D ODUJHU IRQW VL]H DUHUHOHYDQWIDFWRUVWKDWLQFUHDVHHIIHFWLYHQHVVHIILFLHQF\ VDWLVIDFWLRQ DQG DOVR GHFUHDVH WKH QXPEHU RI HUURUV LQ PRELOH SKRQH XVH :KHQ WKH HOGHUO\ FDQ VHH WKH FRPPDQGWH[W RQ WKH PRELOH SKRQH DSSOLFDWLRQ WKH\ FDQ


UHDG DQG FKHFN WKH FRUUHFW LQVWUXFWLRQV RI PHQX FRPPDQGVDQGRSHUDWHWKHPRELOHSKRQHHDVLO\  9, 7+( $335235,$7( 02%,/( 3+21($33/,&$7,21  )25(/'(5/< 7KH SUHYLRXV VHFWLRQ LQWURGXFHG HOGHUO\ FKDQJHV DQG XVDELOLW\ *URZLQJ ROG RIIHU PDQ\ SOHDVXUHV DQG DOO WKH EHQHILWV RI H[SHULHQFH EXW DJLQJ FDQ DOVR KDYH QHJDWLYH SK\VLFDO FRJQLWLYH DQG VRFLDO FRQVHTXHQFHV )RU PDNLQJ PRELOHSKRQHV DFFHSWDEOHWR WKHHOGHUO\ LWLV HVVHQWLDO WR GHYHORS D PRELOH SKRQH LQWHUIDFH DEOH WR DVVLVW LQ DGGUHVVLQJSK\VLFDOVHQVRU\DQGFRJQLWLYHFKDQJHV 3K\VLFDO GHFOLQH LQ WKH HOGHUO\ UHGXFHV WKHLU KDQG IXQFWLRQ VKLIWLQJ FDSDELOLWLHV $Q LQWHUHVWLQJ IHDWXUH WKDW GHYHORSHUVZLOO QHHG WRDGGUHVVWR VXSSRUWWKHVH SK\VLFDO FKDQJHVUHTXLUHPLQLPL]HDFWLRQRI XVHUHUURUVVXFKDVD NH\SDG DXWR ORFN H[WUD FRQILUPDWLRQ GLDORJ DQG QRWLFHDEOHUHPLQGHUVDUHGHVLUDEOH 9LVXDOGHFOLQHLQHOGHUO\XVHUVGHFUHDVHVWKHLUFDSDFLW\ WR IRFXV RQ D VPDOO PRELOH VFUHHQ 7KLV UHVXOWV LQ WKH HOGHUO\ DYRLGLQJ PRELOH SKRQHV ZLWK VPDOO GLVSOD\V ZKLFK DUH GLIILFXOW WR VHH )RU VXSSRUWLQJ YLVLRQ GHFOLQH LQ WKH HOGHUO\ WKH PRELOH DSSOLFDWLRQ GHYHORSHU VKRXOG H[DPLQH GHSOR\LQJ ODUJHU IRQW VL]HV 7KH IRQW VL]H UHFRPPHQGHG IRU WKH HOGHUO\ LV EHWZHHQ SW WR  SW >@  $Q DSSOLFDWLRQ VKRXOG LQFOXGH YLVXDO DLGV WKDW SURYLGH ODUJH WH[W NH\SDG OLJKW VFUHHQ OLJKW EROG FRORU VFKHPH DQG ELJ EXWWRQV ZKLFK DUH UDWHG DV KLJK SULRULW\ IHDWXUHV >@ $XGLWRU\ GHFOLQH LQ WKH HOGHUO\ ZHDNHQV WKHLUKHDULQJFDSDELOLWLHV$OWHUQDWLYHVDSSOLFDWLRQVZKLFK VXSSRUWKHDULQJORVVPXVWEHGHYHORSHG 7KH DSSURSULDWH DSSOLFDWLRQ VXLWDEOH DV DQ DOWHUQDWLYH RXJKWWR VXSSRUW DGPLQLVWUDWLRQRIRSWLRQDO WH[WLQSXW IRU DQ\QRQWH[WFRQWHQWLQRUGHUWKDWLWLVDEOHWREHFKDQJHG LQWRRWKHUIRUPVWKDWWKHXVHUVUHTXLUHQDPHO\ODUJHSULQW VSHHFKV\PEROVRUVLPSOHUODQJXDJH 0RUHRYHU YLVXDO GHFOLQH DOVR LPSDFWV FRJQLWLYH GHFOLQH DV WKH UHVSRQVH WR LQWHUDFW ZLWK PRELOH SKRQH VORZV GRZQ EHFDXVH WKH SHUVRQ FDQQRW IRFXV RQ WKH REMHFWV LQ WKH PRELOH SKRQH DSSOLFDWLRQ DV WKH\ FDQQRW UHDGDQGLQWHUSUHWPRELOHSKRQHPHQXLQVWUXFWLRQV &RJQLWLYH GHFOLQH LQWKHHOGHUO\GLPLQLVKHV WKH KXPDQ DELOLW\WRSURFHVVLQIRUPDWLRQ7KLVUHVXOWVLQ WKHHOGHUO\ DYRLGLQJ PRELOH SKRQHV ZLWK FRPSOH[ PHQX IXQFWLRQV DQG DPELJXRXV LQVWUXFWLRQV &RQVHTXHQWO\ WKH UHGXFWLRQ PHPRU\ DELOLW\ EHFRPHV D IDFWRU WR FRQVLGHU $Q DSSOLFDWLRQZKLFKFRQVLGHUVWKHFRJQLWLYHDVSHFWLQFOXGHV PHPRU\ DLGV DQG WKH GHYHORSHU VKRXOG EH FRQFHUQHG DERXW FRQVLVWHQF\ RI PHQX IXQFWLRQ :LWKRXW GRXEW LQVWUXFWLRQVIRUV\VWHPXWLOL]DWLRQVKRXOGEHQRWLFHDEOHWR XVHUV LQ RUGHU WR DYRLG FRQIXVLRQ RI LQWHUSUHWDWLRQ )XUWKHUPRUH WKH LQVWUXFWLRQV RXJKW WR EH UDWLRQDO )RU H[DPSOH LW ZRXOG EH EHWWHU LI WKH V\VWHP FDQ VSHDN WKH XVHUV¶ODQJXDJHVLQVWHDGRIILQGLQJV\VWHPRULHQWHGWHUPV ZKLFKDUHVRPHWLPHVGLIILFXOWWRXQGHUVWDQG $Q HVVHQWLDO PRELOH SKRQH DSSOLFDWLRQ GHYHORSPHQW LVVXH WKDW GHYHORSHUV VKRXOG EH FRQFHUQHG ZLWK LV UHGXFLQJFRPSOH[LW\ RIPHQXIXQFWLRQVRQPRELOHSKRQH 5HGXFLQJ WKH FRPSOH[LW\ KDV D GLUHFW HIIHFW RQ WKH


FRJQLWLYH SHUIRUPDQFH UHODWHG WR WKH XVDELOLW\ IDFWRU LQ WKH HIILFLHQF\ DVSHFW :KHQ GHVFULELQJ WKH XVDELOLW\ IDFWRU LQ WKH HIILFLHQF\ DVSHFW PRVW RI WKH XVDELOLW\ VWDQGDUGV PHQWLRQHG WKDW HIILFLHQF\ LV WKH WLPH UHTXLUHG WR FRPSOHWH WKH WDVN 7KHUHIRUH  E\ UHGXFLQJ PHQX IXQFWLRQ FRPSOH[LW\ WKH WLPH WR FRPSOHWH WKH WDVN ZLOO GHFUHDVH WRR 0RUHRYHU LI ZH SURGXFH PRELOH DSSOLFDWLRQV ZKLFK DGGUHVV FRJQLWLYH SHUIRUPDQFH GHFOLQHWKHHIIHFWLYHQHVVHIILFLHQF\DQGVDWLVIDFWLRQZLOO LQFUHDVH DQG WKH QXPEHU RI HUURUV IURP WKH FRQIXVLRQ RI WKH PHQX IXQFWLRQ FRPSOH[LW\ ZLOO GHFUHDVH 7KLV DOVR PDNHVLW HDVLHU IRUWKH HOGHUO\ WRPHPRUL]H WKH KLHUDUFK\ RIWKHPHQXIXQFWLRQVTXLFNO\ORZHULQJWKHWLPHUHTXLUHG WROHDUQKRZWRXVHDQDSSOLFDWLRQ  9,,

CONCLUSION AND FUTURE WORK

,Q RUGHU WR LPSURYH WKH TXDOLW\ RI OLIH IRU VHQLRU FLWL]HQV GHYHORSHU PXVW DGGUHVV DJH UHODWHG IDFWRUV *URZLQJ ROG RIIHUV HOGHUO\ RIIHUV PDQ\ SOHDVXUHV FRPELQHG ZLWK WKH EHQHILWV RI H[SHULHQFH EXW DJLQJ FDQ DOVR KDYH QHJDWLYH SK\VLFDO FRJQLWLYH DQG VRFLDO FRQVHTXHQFHV :LWK UHVSHFW WR HOGHUO\ XVHUV PRELOH SKRQH LQWHUIDFH GHYHORSHU QHHG WR WDNH DJHUHODWHG GHFOLQH LQ SK\VLFDO VHQVRU\ DQG FRJQLWLYH DELOLWLHV LQWR DFFRXQW 7KLV SDSHULQYHVWLJDWHGXVDELOLW\IDFWRUVDQGLGHQWLILHG HOGHUO\ OLPLWDWLRQV ZKLFK DUH SK\VLFDO VHQVRU\ DQG FRJQLWLYH FKDQJHV LQ RUGHU WR KLJKOLJKWLVVXHV WR PRELOH DSSOLFDWLRQGHYHORSHUV 7KHPRGHOZKLFKZH SURSRVHLVD FRQFHSWXDOXVDELOLW\ PRGHOWRVXSSRUWWKHHOGHUO\OLPLWDWLRQVLQXVLQJDPRELOH SKRQHDSSOLFDWLRQDQGOHDGVWRDEHWWHUSHUIRUPLQJPRELOH DSSOLFDWLRQGHYHORSPHQWSURFHVV ,QWKHIXWXUHH[WHQVLYHZRUNLVQHHGHGWRHQKDQFHWKH XVDELOLW\ WHVWLQJ IRU HOGHUO\ SHRSOH 7KHUH DUH VHYHUDO UHVHDUFKHV ZKLFK H[SODLQHG XVDELOLW\ PHDVXUHPHQW DQG ZKLOH JHQHUDO XVDELOLW\ WHVWLQJ ZDV GHVLJQHG IRU WKH JHQHUDO XVHU HOGHUO\ FKDQJHV IDFWRU ZHUH QRW LQFOXGHG 0DQ\XVDELOLW\HYDOXDWLRQVGRQRWDGGUHVVWKH UHDOQHHGV RI WKH HOGHUO\ WKHUHIRUH XVDELOLW\ IDFWRUV DQG JDWKHULQJ UHTXLUHPHQWV EHFRPHV DQ LPSRUWDQW IDFWRU LQ VHWWLQJ XS HOGHUO\XVDELOLW\WHVWLQJ 5()(5(1&(6 >@ 0DJQLWXGH DQG 6SHHG RI 3RSXODWLRQ $JHLQJ :RUOG 3RSXODWLRQ $JHLQJ3RSXODWLRQ'LYLVLRQ'(6$8QLWHG1DWLRQV  >@ 3RSXODWLRQ &KDOOHQJHV DQG 'HYHORSPHQW *RDOV 'HSDUWPHQW RI (FRQRPLFDQG6RFLDO$IIDLU8QLWHG1DWLRQV1HZ@ -,&$   $JLQJ 3RSXODWLRQ LQ $VLD ([SHULHQFH RI -DSDQ 7KDLODQGDQG&KLQD6HPLQDU5HSRUWRQWKH3DUDOOHO6HVVLRQDWWKH WK $QQXDO *OREDO 'HYHORSPHQW &RQIHUHQFH 5HVHDUFK *URXS ,QVWLWXWHIRU,QWHUQDWLRQDO&RRSHUDWLRQ-DQXDU\ %HLMLQJ &KLQD  >@ 0RUUHOO5:0D\KRUQ&% %HQQHWW-  $VXUYH\RI :RUOG :LGH :HE XVH LQ 0LGGOHDJHG DQG ROGHU DGXOWV +XPDQ )DFWRUV 


>@ 0 &RQFL ) 3LDQHVL 0 =DQFDQDUR 8VHIXO 6RFLDO DQG (QMR\DEOH 0RELOH 3KRQH $GRSWLRQ E\ 2OGHU 3HRSOH )%.LUVW 7UHQWR,WDO\  >@ :LOVRQ 3 DQG /HVVHQH 9  +HDOWK DQG KHDOWK SROLFLHV 6\QHUJLHVIRUEHWWHUKHDOWKLQD(XURSHRI5HJLRQ+LJKOHYHO FRQIHUHQFH H[KLELWLRQ DQG DVVRFLDWHG HYHQWV 0DODJD 6SDLQ 0DUFK  >[email protected] $YDLODEOH KWWSZZZVSROHF]HQVWZRLQIRUPDF\MQHSOXSORDGVDUWLFOHVBILOHV SGI  >@ +HOOVWURP < DQG +DOOEHUJ ,5 3HUVSHFWLYHV RI HOGHUO\ SHRSOH UHFHLYLQJ KRPH KHOS RQ KHDOWK FDUH DQG TXDOLW\ RI OLIH 'HSDUWPHQW RI 6FLHQFH DQG +HDOWK %OHNLQJH ,QVWLWXWH RI 7HFKQRORJ\6(.DUOVNURQD6ZHGHQ  >@ 7UHQW$-XG\.DQG$DURQ47DEOHWRS6KDULQJRI'LJLWDO 3KRWRJUDSKV IRU WKH (OGHUO\ 6,*&+, &RQIHUHQFH RQ +XPDQ )DFWRUV LQ &RPSXWLQJ 6\VWHPV &+,  $SULO   0RQWUpDO4XpEHF&DQDGD   >@ ,MVVHOVWHLMQ:'LJLWDOJDPHGHVLJQIRUHOGHUO\XVHUV3URFHHGLQJV RIWKHFRQIHUHQFHRQ)XWXUH3ODWSS1HZ@ ,EUDKLP . DQG 0 +DOLG . )URP (JRYHUQPHQW WR 0 JRYHUQPHQW )DFLQJ WKH ,QHYLWDEOH 0RELOH *RYHUQPHQW /DE P*RY/DE 0D\  >@ .XUQLDZDQ 6 0RELOH 3KRQH 'HVLJQ IRU 2GHU 3HUVRQ ,Q GHVLJQLQJ IRU 6HQLRUV ,QQRYDWLRQV IRU *UD\ WLPHV      SS  >@ 3DWWLVRQ0DQG6WHGPRQ$,QFOXVLYHGHVLJQDQGKXPDQIDFWRUV GHVLJQLQJ PRELOH SKRQHV IRU ROGHU XVHUV 3V\FK1RORJ\ -RXUQDO 9ROXPH1XPEHUSS  >@ $ +RO]LQJHU * 6HDUOH DQG $ 1LVFKHOZLW]HU 2Q VRPH $VSHFWV RI,PSURYLQJ0RELOH$SSOLFDWLRQVIRUWKH(OGHUO\,Q6WHSKDQLGLV & HG  +&,  /1&6 YRO  SS ± 6SULQJHU +HLGHOEHUJ    >@ 0 =LHIOH DQG 6 %D\ +RZ ROGHU DGXOWV PHHW FRPSOH[LW\ $JLQJ HIIHFWVRQ WKH XVDELOLW\RI GLIIHUHQWPRELOHSKRQHV'HSDUWPHQWRI 3V\FKRORJ\ 5:7+ $DFKHQ 8QLYHUVLW\ *HUPDQ\ 2QOLQH SXEOLFDWLRQ'DWH6HSWHPEHU  >@ .XUQLDZDQ 6 1XJURKR < 0DKPXG 0 $ 6WXG\ RI WKH 8VH RI 0RELOH 3KRQHV E\ 2OGHU 3HUVRQV &+,  $SULO ±  0RQWUHDO4XHEHF&DQDGD$&0  >@ 6KDFNHO % 8VDELOLW\  &RQWH[W )UDPHZRUN 'HILQLWLRQ 'HVLJQ DQG (YDOXDWLRQ ,Q % 6KDFNHO DQG 6 5LFKDUGVRQ HGV  +XPDQ )DFWRUV IRU ,QIRUPDWLFV 8VDELOLW\ &DPEULGJH &DPEULGJH 8QLYHUVLW\3UHVV   SS  >@ 6KQHLGHUPDQ % 'HVLJQLQJ WKH 8VHU ,QWHUIDFH 6WUDWHJLHV IRU (IIHFWLYH +XPDQ&RPSXWHU ,QWHUDFWLRQ VW HGLWLRQ $GGLVRQ :HVOH\ >@ 1LHOVHQ - 8VDELOLW\ (QJLQHHULQJ &DPEULGJH $FDGHPLF 3UHVV   >@ (1 ,62  (UJRQRPLF UHTXLUHPHQWV IRU RIILFH ZRUN ZLWK YLVXDO GLVSOD\ WHUPLQDOV 3DUW  *XLGDQFH RQ XVDELOLW\ %HXWK %HUOLQ*HUPDQ\  


>@ 1LDPK & -RKQ * DQG 1LFROD 3 $ 5HYLHZ RI 0HPRU\ $LG 'HYLFHV IRU DQ $JHLQJ 3RSXODWLRQ 3V\FK1RORJ\ -RXUQDO 63(&,$/,668( 'HVLJQLQJ7HFKQRORJ\WRPHHWWKHQHHGVRI WKH 2OGHU8VHU9ROXPH1XPEHUSS  >@ +HLWLQJ * - +RZ [email protected] $YDLODEOH KWWSZZZDOODERXWYLVLRQFRPRYHUYLVLRQ FKDQJHVKWP>6HSWHPEHU@  >@ 6D[RQ96(WWHQ -0 DQG3HUNLQV$(3K\VLFDOFKDQJH  DJLQJ D JXLGH IRU WKH KHOSLQJ SURIHVVLRQV 6SULQJHU 3XEOLVKLQJ &RPSDQ\  >@ KWWSZZZXQRUJ$FFHVVHG2FWREHU  >@ /RUHQ] $ DQG 2SSHUPDQQ 5  0RELOH KHDOWK PRQLWRULQJ IRUWKHHOGHUO\'HVLJQLQJIRU GLYHUVLW\3HUYDVLYHDQG0RELOH&RPSXWLQJSS  >@ 1LHOVHQ - 8VDELOLW\ IRU 6HQLRU &LWL]HQV -DFRE 1LHOVHQ¶V $OHUWER[ $SULO    >[email protected] $YDLODEOH KWWSZZZXVHLWFRPDOHUWER[KWPO>0DUFK@  >@ /HZLV - 5   ,%0 FRPSXWHU XVDELOLW\ VDWLVIDFWLRQ TXHVWLRQQDLUHV 3V\FKRPHWULF HYDOXDWLRQ DQG LQVWUXFWLRQV IRU XVH 7HFK 5HSRUW   %RFD 5DWRQ )/ ,%0 &RUS KWWSGUMLPFDWFKFRPXVDETWUSGI 

 >@ 6PLWK 6 DQG *RYH ( -   3K\VLFDO &KDQJHV RI $JLQJ ,QVWLWXWHRI)RRGDQG$JULFXOWXUDO6FLHQFHV8QLYHUVLW\RI)ORULGD >[email protected]$YDLODEOHKWWSHGLVLIDVXIOHGXKH  >@ 177  ,&7VHUYLFHGHVLJQIRUVHQLRUFLWL]HQVEDVHGRQDJLQJ FKDUDFWHULVWLFV 177 7HFKQLFDO 5HYLHZ 9ROXPH 1R 6HSWHPEHU 


Test on Interface Design and Usability of Indonesia Official Tourism Website

Anindito Yoga Pratama 1), Dea Adlina 2), Nadia Rahmah Al Mukarrohmah 3), Puji Sularsih 4), Dewi Agushinta R. 5)
1), 2), 3) Informatics Engineering Department, Faculty of Industrial Engineering
4), 5) Information System Department, Faculty of Computer Science and Information Technology
Gunadarma University, Margonda Raya No. 100, Pondok Cina, Depok, West Java, 16424
1), 2), 3) {aga.ti_92, dea.adlina_ti, n4dh1_ch4n}@student.gunadarma.ac.id, 4), 5) {puji, dewiar}@staff.gunadarma.ac.id

Abstract— Indonesia tourism and travel information should reach people from all levels of society around the world. To do this, the Ministry of Tourism and Creative Economy of the Republic of Indonesia must meet the various needs of a wide audience. The case study in this article is the website "Visit Indonesia - Indonesia Official Website for Tourism and Travel Information", which can be accessed at http://indonesia.travel/. We chose this case study to examine the website's interface design and usability and, in addition, to help popularize Indonesia's tourism and travel highlights. The purpose of this article is to examine the website's usability, to evaluate the website's interface design, and to give some suggestions for a better interface design that will increase the website's usability.

Keywords - Interface, Usability, Website

I. INTRODUCTION

Websites are used daily for reading news, finding job vacancies, shopping, finding telephone information, ordering food, planning a trip, selling products and even supporting a company's business processes. Customer-facing services such as customer care in a company, internet banking, online reservation, product promotion and marketing, project management or even e-learning are examples of the many services built as web-based applications.

Interface development for a tourism website must actively involve users from planning through evaluation. If a user feels uncomfortable using an application, product or service, it can be assumed that it is difficult to use and has the potential to be a failure. If a website is a failure, there will be losses: the money spent on the development of the website, the missed market success, user disappointment, cancelled trip plans, a bad image and disturbed business processes. Usability is a term used to indicate that people can employ a particular tool with ease in order to achieve certain goals. Usability can also refer to the methods used to measure usability and to the study of the neatness or efficiency of an object.

II. LITERATURE REVIEW

An advanced interface design has the following characteristics [1]:
a. Standardization: the uniformity of the properties of user interfaces across different applications.
b. Integration: the integration of packaged applications and software tools.
c. Consistency: uniformity within the application program.
d. Portability: the possibility to convert data across a variety of hardware and software.

There are several things that cause reduced levels of usability of an interface design system:
a. Text whose wording is not clear and precise, which leaves readers in doubt, forces them to re-read, and allows them to interpret it wrongly.
b. Graphics that are not exact, so that important elements are hidden.
c. Titles that are not representative, which creates confusion and hinders the ability to see the relationships that exist.
d. Requests for information that is unimportant or irrelevant, or that require rethinking previous answers, which confuse users and in turn lead to errors.
e. A layout that is not structured and directed, which allows errors to occur.
f. Poor presentation quality that is difficult to read, which reduces the users' ability and causes further errors.

There are many usability methods and principles that exist such as usability inspections methods and discount usability methods [2], formative and summative usability evaluations [3]. These methods usually may also accompany think-a-loud protocols and competitive analysis. In any usability evaluation, there are always discussions regarding how many users are enough for a test. A study by Nielsen [4] further suggests that five users are enough. Research by Faulkner [5] suggests that as many as 85% of usability problems but that as few as 55% could be found as well with using only five users. With increasing the number of users to 15, the range of problems found can be 90-97%. Usability dimensions which are classified Whitney Quesenbery [6], as follows: Effective Effective is the first E. If a user cannot actually do something he or she set out to do. It probably doesn’t matter whether the experience was short or long, easy or hard. In the end, they have failed to complete their tasks or meet their goals. If we want to measure


effectiveness, we have to define success or usefulness, whether this is relatively straight forward or more subtle. Efficient Efficiency may be something that is carefully defined, for example in a call center where operators are measured on the number of calls they can handle in a day. It may be a subjective judgment when a task is taking “too long” or “too many clicks.” Engaging “Engaging” replaces “satisfaction,” looking for a word that suggests the ways that the interface can draw someone into a site or a task. It also looks at the quality of the interaction, or how well the user can connect with the way the product is presented and organized. Error tolerant It would be lovely to say “error free” or “prevents errors” but mistakes, accidents and misunderstandings will happen. The cat nudges the mouse as you click. You misread a link and need to find your way back, or enter a number with a type. The real test is how helpful the interface is when an error does occur. Easy to learn A product may be used just once, once in a while, or on a daily basis. It may support a task that is easy or complex; and the user may be an expert or a novice in this task. But every time it is used the interface must be remembered or relearned, and new areas of the product may be explored over time. Usability is defined by 5 quality components [7]: Learnability How easy is it for users to accomplish basic tasks the first time they encounter the design? Efficiency Once users have learned the design, how quickly can they perform tasks? Memorability When users return to the design after not using it for a certain period, how easily can they reestablish proficiency? Errors How many errors do users make, how severe are these errors, and how easily can they recover from the errors? Satisfaction

How pleasant is it to use the design? III. METHODOLOGY

The broad information on tourism triggers users to seek information on a website. Therefore in this research, testing was conducted by collecting the required data from users. Data is collected by using survey methods and testing the formal usability to users. The respondents are selected by considering their background, activity, knowledge, skills, and frequency of Internet use. Formal usability testing questionnaire is given to 30 respondents with 3 different types of respondents which are college student, lecturer, and general public. The three respondents comprised the skilled and unskilled in using computers and the Internet. The survey was conducted to obtain early feedback, each function that is considered a problem for users and to the user general perception of the application. Analyses were performed according to the criteria of usability tests mentioned before.


From the analysis done by looking at the criteria specified for testing, combined with the results of the questionnaire, we will give advice in accordance with the results that we have processed from data gathering and testing. The criteria used for the usability tests are: easy to use, easy to learn (terms used, system speed, time, and consistency), system fault, and the language that should be used on Indonesia Official Tourism Website [8].

IV. RESULT AND DISCUSSION

This section discusses the results of the data collection that was carried out, the usability levels of Indonesia Official Tourism Website, and the problems encountered on Indonesia Official Tourism Website.

A. Usability Tests
Measurements of the usability of Indonesia Official Tourism Website can be seen from the questionnaire filled out by respondents, in which each question represents one of the criteria used for the usability tests. The 10 questions given in the questionnaire are as follows:
1. Is there any latest news on Indonesia Official Tourism Website?
2. Are the destination search results satisfactory?
3. Are the activities search results satisfactory?
4. Is the applied language appropriate?
5. Are the images in the moments captured section considered good?
6. Are the locations shown on the Map of Indonesia correct?
7. Is the information obtained in the event list complete?
8. Is the information obtained in the FAQs complete?
9. Is there any difficulty in filling in the register form on the Register menu?
10. Is the Contact Us menu easy to use?
The 10 questions on the questionnaire are answered with "Yes" or "No"; if the answer is "No", there are entries for reasons and suggestions. The results of the analysis and of the questions asked of respondents are shown in Table I.

RESULT OF ANALYSIS AND QUESTION

Usability Levels

Result (%)

Easy To Use

83.33

Easy To Learn

85.83

System Fault

45.56

Language Used

100
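The percentages in Table I aggregate the yes/no answers of the 30 respondents over the questions assigned to each criterion. The sketch below shows one way such an aggregation can be computed; the mapping of questionnaire items to criteria and the answer counts are hypothetical placeholders, since the paper does not list them explicitly.

```python
# Sketch: aggregate yes/no questionnaire answers into per-criterion percentages.
# The question-to-criterion mapping is a hypothetical example, not the authors' mapping.
criteria = {
    "Easy To Use": [9, 10],
    "Easy To Learn": [1, 4, 7, 8],
    "System Fault": [2, 3, 6],
    "Language Used": [5],
}

respondents = 30
# answers[q] = number of "Yes" answers out of 30 respondents (dummy values).
answers = {q: 25 for q in range(1, 11)}

for name, questions in criteria.items():
    yes = sum(answers[q] for q in questions)
    pct = 100.0 * yes / (respondents * len(questions))
    print(f"{name}: {pct:.2f}%")
```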

The following is an explanation of the analysis and of the respondents' answers, organised according to the usability test criteria:

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

1) Easy To Use
Easy to use here means that the menus and elements contained in the Indonesia Official Tourism Website can be operated easily.
2) Easy To Learn
The ease of learning the menus and elements of the Indonesia Official Tourism Website is measured through the following aspects: terms used, system speed, time, and consistency. A detailed explanation of each aspect follows.
Terms Used. Among all the terms used on the Indonesia Official Tourism Website, such as menu names, submenus, page headers, field labels and captions, there are two terms that were difficult for the respondents and the authors to understand: MICE and Ecotourism.
System Speed. From the analysis, the authors conclude that the Indonesia Official Tourism Website is quite fast in responding to any command given. However, the response speed also depends on the speed of the Internet connection used to access the website.
Time. It does not take a long time to access each menu. There were no complaints from the respondents concerning the time needed to access any of the menus listed in the questions.
Consistency. The authors found the consistency of the Indonesia Official Tourism Website to be quite high; almost no inconsistency was found in the menus clicked. The inconsistencies found were the difference in interface when changing the website language and the sidebars that change or disappear on different pages.
3) System Fault
During testing, some respondents noticed mistakes made by the system (the Indonesia Official Tourism Website) when performing a search. Almost all respondents made the same comment: they complained that destinations and activities are sorted alphabetically rather than by popularity, which means that the items at the top of the list are not the ones most recommended by the Indonesia Official Tourism Website, as seen in Figures 1 and 2.

Figure 2. Search Activities

4) Language Used
The Indonesia Official Tourism Website is available in eight languages. All respondents commented that the language used on the website is appropriate and grammatically correct. The languages they saw were Indonesian and English.
B. Problems Encountered
During testing, respondents encountered some problems on the Indonesia Official Tourism Website.
Terms. The main problem encountered by the respondents and the authors relates to terminology. Two terms are rarely heard or found in everyday use, namely MICE and Ecotourism.
Dissatisfaction with a specific menu. Most respondents answered the destination and activity search questions with "No", indicating a certain degree of dissatisfaction with the search for destinations and activities on the Indonesia Official Tourism Website, since the results are not sorted by how strongly they are recommended.
Difficulty using a specific menu. Another problem is that some respondents had trouble using particular menus. Some found it difficult to use the Contact Us menu and to fill in the form on the Register menu because of the reCAPTCHA function.

V. CONCLUSION AND RECOMMENDATION

Usability testing of the Indonesia Official Tourism Website interface was carried out according to the tested usability criteria, which include easy to use, easy to learn (terms used, system speed, time, and consistency), system fault, and the language that should be used on the website. The analysis and questionnaire results show scores above 50% for the easy to use, easy to learn, and language used criteria, while the system fault score is below 50%; together these indicate that the website is satisfactory. A good screen design, interactive interfaces, and a high level of usability play an important role in the advancement of a website. Similarly, the terms and syntax follow standard language, making the website easy to understand. Other issues and suggestions from users can also be anticipated through this kind of test. User-oriented system development should therefore be carried out by the manager of the website to obtain optimal results.

Figure 1. Search Destinations

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

23

ISSN: 2088-6578

Yogyakarta, 12 July 2012

Below are the suggestions given by the respondents and the authors for the improvement of the Indonesia Official Tourism Website:
a. Remove the reCAPTCHA from the Contact Us menu and the Register form, because not everyone understands how to use it.
b. Sort the search results of destinations and activities by popularity or by how strongly they are recommended.
c. Fix the inconsistency in page layout between the different language versions.

REFERENCES
[1] Scott W. Ambler, "User Interface Design Tips, Techniques, and Principles", Second Edition, McGraw-Hill Book Co., Singapore, 2001.
[2] J. Nielsen, "Heuristic evaluation", in J. Nielsen and R.L. Mack (Eds.), "Usability Inspection Methods", John Wiley & Sons, New York, NY, 1994.
[3] D. Hix and H.R. Hartson, "Developing user interfaces: Ensuring usability through product and process", John Wiley and Sons, New York, 1993.
[4] J. Nielsen, "Usability Engineering", Morgan Kaufmann Publishers, 1993.
[5] L. Faulkner, "Beyond the five-user assumption: Benefits of increased sample sizes in usability testing", Behavior Research Methods, Instruments, and Computers, vol. 35, no. 3, pp. 379-383, 2003.
[6] W. Quesenbery, "Dimensions of Usability: Defining the Conversation, Driving the Process", Proceedings of the UPA 2003 Conference, June 23-27, 2003.
[7] J. Nielsen, "Usability 101: Introduction to Usability", http://www.useit.com/alertbox/20030825.html, 25 August 2003.
[8] Ministry of Tourism and Creative Economy of The Republic of Indonesia, "Indonesia Official Tourism Web Site", http://indonesia.travel/, 2008.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter


ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Denial of Service Prediction with Fuzzy Logic and Intention Specification Hariandi Maulid Department of Informatics Engineering, Politeknik Pos Indonesia Jl. Sariasih 54, Bandung 40151, Indonesia

e-mail address: [email protected]

Abstract—Most existing models focus on DoS attack detection and response. Prediction models for DoS attacks, on the contrary, have not been sufficiently addressed in the literature. In this paper, we propose an approach to predict the occurrence of denial of service against the availability of servers or target machines, in order to prevent them from being disrupted in providing services. Since the prediction process involves uncertainty, there is no exact mathematical solution to the problem. Fuzzy logic provides a powerful decision-making tool for handling imprecise data. In addition to fuzzy logic, the prediction model was developed and evaluated based on the NSL-KDD 99 dataset for the overall design. This dataset comprises a large amount of normal and attack data, each record containing 41 features. We have taken only the three most relevant features from the dataset by combining two results from previous feature selection work that used Support Vector Machines (SVMs) and Information Gain. The model was implemented using the fuzzy toolbox provided in MATLAB, while its evaluation was done using performance metrics for predictive models to observe the true classification rate and the overall accuracy the model can provide.

economic crime and cyber-warfare, and can be even more harmful. Moreover, Yankee Group has stated that, in 2011, a denial of service attack took a 4G network down; the bill for this outage is expected to be a minimum of approximately US$10 million [5].

Keywords: Denial of Service, attack, prediction, fuzzy logic, intention specification, availability.

The rest of the paper is organized as follows. Section 2 discusses existing research from previous work relevant to this project. Section 3 details the design process of the proposed system, starting from the overall procedure, feature selection, and the technical design of the fuzzy model; before that, a brief overview of Denial of Service attacks and the notion of fuzzy logic is given. Next, the results of the experiments conducted, along with their analysis, are described in Section 4. In the final section, conclusions and some possible future improvements in both design and implementation are provided.

I. INTRODUCTION

Denial of Service (DoS) attacks have been known since the early 1980s. However, the first known DoS attack, called SYN Flood, emerged in 1996 and took the New York City Internet Service Provider Panix offline for a week. In the following years, DoS attacks increased in both quantity and quality. A well-known attack was experienced by Yahoo, eBay, Amazon, Datex, Buy, CNN, Etrade, ZDNet and Dell in February 2000. The attack was launched by a 15-year-old Canadian who called himself 'Mafiaboy'. A year later, a computer worm called 'Code Red' took control of over 250,000 computers in 9 hours to launch a DoS attack against the official website of the White House. In 2003, another worm with a DoS payload caused about US$860,000 of losses in the South Korean stock market [1]. According to the CSI Computer Crime and Security Survey, in 2004, 39% of respondents reported experiencing a DoS attack incident. The trend then fluctuated up and down, until it reached 29.2% in 2009, as detailed in [2] and [3]. Recent attacks, such as the DoS attacks against Estonia's and Georgia's Internet infrastructure [1] and the Internet-based attacks on critical systems including gas, power and water [4], in 2008, 2009, and 2010 respectively, indicate that the attacks have moved towards

In order to mitigate the devastating effect of DoS attacks, several defense systems have been proposed. Much of this research focuses on attack prevention, detection and response. Attack prevention usually aims to preserve the integrity of hosts and to detect abnormal activity [6]. Attack detection is performed based on signature matching and anomaly detection [7], while attack response uses packet filtering and attack traceback to identify the attack path [8]. The majority of these schemes concentrate on identifying the attack source and intermediate network units [9]. It is true that locating and identifying attack sources is important to counter attacks; however, efficiently predicting that a DoS attack is about to happen at the target machine offers much greater advantages, reducing its damaging effect and even preventing the attack by taking precautions based on the prediction result.

II. RELATED WORK

Fuzzy logic has proved to be a useful decision-making tool for handling and manipulating imprecise data. The notion central to fuzzy systems is that membership values (in fuzzy sets) are indicated by a value in the range [0.0, 1.0]. Accordingly, many systems, ranging from general problems to computer and network problems, use fuzzy logic as a tool to solve the problems they face. In the field of intrusion detection systems (IDS), the Fuzzy Intrusion Recognition Engine (FIRE), an anomaly-based intrusion detection system that uses a fuzzy system, is used in [10], [11], [12] to detect malicious activity against computer networks. Several fuzzy techniques for DoS detection have also been proposed. Tuncer and Tatar, in

CITEE 2012

Yogyakarta, 12 July 2012

[13], proposed a fuzzy logic based system for detecting SYN flooding attacks. Their proposed system includes two blocks: packet classification and a fuzzy logic system. In [14], He, Nan, and Liu used an Adaptive Neuro-Fuzzy Inference System together with the Fuzzy C-Means clustering algorithm to detect DoS attacks; they tested the method by performing experiments on the DARPA/KDD99 dataset. A detection process that uses cross-correlation functions between incoming and outgoing traffic as inputs to a fuzzy classifier is another fuzzy-based technique, proposed by Wei, Dong, Lu and Jin. They discuss the resulting trade-off between the accuracy of the detector and the increase in computational demands when opting for a higher dimension in the fuzzy classification [15]. Fuzzy logic is also used to implement an intelligent method for real-time detection of DDoS attacks [16]. Haslum, Abraham, and Knapskog proposed a Distributed Intrusion Prediction and Prevention System (DIPPS) to detect, prevent, and predict possible intrusions in a distributed network [17], [18]. They work on the basis of an improved Distributed Intrusion Detection System (DIDS) with real-time traffic monitoring and online risk assessment. In [19], the authors discuss a model to predict the service rate availability of the server to prevent it from being overwhelmed by illegitimate traffic. The prediction result can then be used as a reference for taking precautions against the attack. The study of DDoS attackers' intention is presented in [20]. The author uses fuzzy set theory and the concept of entropy to develop an abstract theoretical model that describes availability through the expectation of DDoS attackers. A fuzzy function for defining availability is written as:

F = {(t, µ(t)) | t ∈ R}

(1)

where t is the response time of a process or transaction, and µ(t) is the degree of membership of that time in the expectation of the subjects who require that information. In the legitimate user's expectation, the shorter the time required for the information content to be fully retrieved, the more acceptable the availability of the system.

III. PROPOSED SYSTEM

A. DoS, DDoS, and Fuzzy Logic Overview
Denial of Service (DoS) attacks are becoming the major annoyance of today's Internet. The attacks try to consume the resources of a remote host or network to prevent legitimate users of a service from using the desired resources [21]. They are conducted intentionally, are easy to perpetrate, but are very difficult to cope with. The impact of these attacks can vary from disrupting legitimate users of the affected server to damaging the reputation of the server's operator. To achieve their goal, the attackers send messages to the target that interfere with its operation and make it hang, crash or do something useless. There are two main approaches to interfering with a legitimate operation: vulnerability attacks and flooding attacks. Vulnerability attacks work by sending a few

ISSN: 2088-6578

messages crafted specifically for the remote host or application to exploit vulnerabilities present at the victim. An existing software bug in an implementation, or a bug in the default configuration of a given service, are examples of such vulnerabilities. The process is called exploiting a vulnerability, while the messages are called exploits [22]. Ping of Death (PoD), one of the most famous DoS attacks, exploits existing software defects to crash remote servers or degrade their performance. Regular fixes of faulty software or the filtering of suspicious packets can be performed to deter these attacks; however, it can be difficult to find new exploits [21]. The second approach, the flooding attack, overwhelms the target by sending a large number of messages whose processing consumes key resources at the target machine such as CPU time, memory, bandwidth, and network resources. Once the important resource is tied up by the attack, legitimate users cannot access the desired services. Since the attackers can send large numbers of different packets, it can be hard to distinguish them from legitimate packets, which greatly hinders defense [22]. A distributed denial of service (DDoS) attack is a coordinated attack using multiple compromised machines to disrupt the availability of a target machine or network [23]. Recall the flooding approach to overloading the target's resources: this attack can only be successful provided that the attacker's machines can generate more traffic than the target machine can handle. For example, let us assume that an attacker, using a machine with standard resources, would like to disrupt the operation of site X, which has abundant resources. It is then impossible for him or her to generate a large enough number of messages from a single machine to overload those resources. What the attacker does is gain control of a large number of machines and engage them to generate messages to site X simultaneously, forming a formidable attack network that is able to overload the target. When we experience the world in which we live and use our ability to create order in the mass of information, we find that human knowledge becomes increasingly important. Only a small portion of the knowledge (information) for a typical problem can be regarded as certain or deterministic, containing no ignorance, no vagueness, no imprecision and no element of chance. The information we have for a particular problem virtually always contains uncertainty. Uncertain information can take many different forms. There is uncertainty that arises because of complexity, uncertainty that arises from ignorance, from various classes of randomness, from the inability to perform adequate measurements, from lack of knowledge, or from vagueness, like the fuzziness inherent in our natural language [24]. The mathematical modelling of fuzzy concepts, a mathematical tool to deal with uncertainty, was presented by Zadeh in 1965. The idea is that set membership is the most important key to making a suitable decision when uncertainty occurs. There are various methods to represent the knowledge of fuzzy systems. The most familiar method to

33

ISSN: 2088-6578

Yogyakarta, 12 July 2012

express human knowledge in these systems is to cast the knowledge into natural language expressions of the form: IF premise, THEN conclusion

(2)

This expression typically encodes an inference such that if a fact (the antecedent) is known, then we can infer another fact called the conclusion (the consequent). This representation is suitable for a linguistic context, as it expresses human empirical and heuristic knowledge in our language of communication [25]. Moreover, in a fuzzy classification system, a case or object can be classified by applying a set of fuzzy rules based on linguistic values of its attributes. Each rule has a weight, which is a number between 0 and 1. Classification involves two phases. The first phase is antecedent evaluation, which includes input fuzzification and the application of fuzzy operators. Fuzzification is the process of making a crisp quantity fuzzy, or simply the process of calculating the input membership levels; conversely, the process of constructing the crisp output is called defuzzification. The second phase applies the result to the consequent, a step known as inference. The most difficult part of building a fuzzy classification system is discovering a set of fuzzy rules relating to the specific classification problem [26].
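To make the two phases concrete, the sketch below fuzzifies two crisp inputs with simple triangular membership functions, evaluates the AND of a rule antecedent with the min operator, and defuzzifies the clipped consequent with a centroid. It is a generic, minimal illustration of fuzzy rule evaluation, not the authors' MATLAB implementation; the variables and membership parameters are arbitrary assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Phase 1: fuzzification of two crisp inputs (illustrative universes and parameters).
temp, load = 7.0, 3.0
mu_temp_high = tri(temp, 5, 10, 15)   # degree to which "temp is High"
mu_load_low = tri(load, 0, 2, 6)      # degree to which "load is Low"

# The AND in the antecedent is evaluated with the min fuzzy operator.
firing_strength = min(mu_temp_high, mu_load_low)

# Phase 2: inference and defuzzification. The consequent "risk is High" is
# clipped at the firing strength and the aggregate is defuzzified by centroid.
risk_universe = np.linspace(0, 10, 101)
mu_risk_high = tri(risk_universe, 4, 7, 10)
clipped = np.minimum(mu_risk_high, firing_strength)
centroid = (risk_universe * clipped).sum() / clipped.sum()
print(firing_strength, centroid)
```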

CITEE 2012

is also important that the selected features increase the contrast between normal class and attack class concerning denial of service attack. More importantly, the selected features (metrics) should be in a suitable form in order to be easily used to construct the basis for the fuzzy inputs as well as fuzzy rules generation.

Figure 1. Proposed System Procedure

B. Overall Design Procedure
The focus of the modelling is to set up relevant fuzzy rules, a task also called fuzzy rule extraction. Accordingly, it starts by identifying the sources of information from which the fuzzy rules of the prediction model are extracted. The NSL-KDD dataset is used as the initial data and is divided into two subsets, a training dataset and a testing dataset [27]. The dataset contains one normal class and four attack classes. The first class is Denial of Service (DoS) with six attack types: back, land, neptune, ping of death (pod), smurf, and teardrop. User to Root (U2R), Remote to User (R2U), and Probing attacks are the remaining classes. Each class has many records with 41 features determining whether or not the record belongs to the labeled class (normal or DoS attack). Prior to any data analysis, the features relevant to the normal or DoS class must be defined. To do this, ranking methods and information gain are utilized to identify and select the most significant features of the denial of service (DoS) attack class. While the selected features of the training dataset are used as information to extract fuzzy rules into the knowledge repository, the selected features of the testing data are used as input to the proposed system to evaluate whether the previously extracted rules and fuzzy sets predict the data accurately as normal or DoS attack. Fig. 1 depicts the overall proposed system procedure.
C. Features Selection
The first step of any prediction and detection methodology is to identify the data feeds, or sources, of information for the model. What can be done here is to look for features and relationships in the data. Since the NSL-KDD dataset contains five classes with 41 features, selecting the best data elements is extremely critical to the overall accuracy of the prediction system. This is because the feature selection process must filter out the metrics that are not significant in the raw data, and it must be done carefully to help identify whether the data belong to the normal class or to a denial of service attack. Thereby, it

Support Vector Machines (SVMs) are a family of related supervised learning methods that analyze data for classification or regression. In [28], Mukkamala and Sung utilized SVMs as a ranking method to identify the most significant features of the KDD dataset. Three main criteria were determined: overall accuracy over the 5 classes (1 normal and 4 attack types), training time and testing time. Each of the 41 features is ranked as "Important", "Secondary", or "Insignificant". Using this method, the most significant features for the denial of service attack class among the 41 features of the dataset are determined to be the following: duration, source bytes, destination bytes, count, same service rate, connections with SYN errors, rerror_rate, destination host count, destination host same source port rate, destination host SYN error rate, and destination host same service error rate. Kayacik and Heywood in [29] analyzed the KDD training dataset to obtain a feature relevance measure for the normal and attack classes using information gain. They performed a binary classification, as typically used in decision trees, and reported the relevant features for the normal class and for several attack classes including denial of service. According to their findings, source bytes and destination bytes are the most relevant features for the denial of service attack class, while duration is the most relevant feature for the normal class. Based on the results of the two different methods discussed above, the features most predictive of the target classes (i.e. normal and denial of service attacks) can be extracted. The two results are then combined to identify which features appear in both selections. If a feature appears in the first result and at the same time is included in the second result, it can be assumed to be one of the most important features. On the other hand, features that appear less often can be considered insignificant. As the final result, three features,

CITEE 2012

Yogyakarta, 12 July 2012

duration, source bytes, and destination bytes, have been taken from the dataset as the input source for the proposed system.
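A minimal sketch of this combination step: the features reported as significant for the DoS class by the SVM-based ranking [28] are intersected with the features reported as most relevant by the information-gain analysis [29]. The two lists below paraphrase the results quoted in the text using common KDD-style feature names; treat them as illustrative rather than exact.

```python
# Features reported as significant for the DoS class by the SVM ranking [28]
# (names paraphrased into short identifiers for illustration).
svm_significant = {
    "duration", "source_bytes", "destination_bytes", "count",
    "same_srv_rate", "serror_rate", "rerror_rate", "dst_host_count",
    "dst_host_same_src_port_rate", "dst_host_serror_rate",
    "dst_host_srv_serror_rate",
}

# Features reported as most relevant by the information-gain analysis [29]:
# source/destination bytes for the DoS class, duration for the normal class.
info_gain_relevant = {"source_bytes", "destination_bytes", "duration"}

# Features appearing in both results are kept as inputs to the fuzzy model.
selected = sorted(svm_significant & info_gain_relevant)
print(selected)  # ['destination_bytes', 'duration', 'source_bytes']
```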

ISSN: 2088-6578

source bytes and destination bytes have 6 and 4 categories respectively, so the total number of possible fuzzy rules is 2 × 6 × 4 = 48.

D. Fuzzy Model for DoS Attack Prediction
In the previous section, 3 out of 41 features were selected from the training dataset as input parameters to the fuzzy model, which predicts the output variable, the denial of service attack rate. Any change in any parameter leads to a different value of the denial of service attack rate. Once these three parameters are included in the fuzzy system, one can define their membership functions, a step called the fuzzification process. Four parameters are used to develop this model: duration, source bytes, and destination bytes are used as input variables, while DoS is the output variable. A trapezoidal distribution is used to assign the membership functions of the input variables, while a triangular distribution is used for the output variable.

IF duration is Short AND source byte is Large AND destination byte is Large THEN DoS is High.

Duration represents the length of the connection expressed in seconds. The value of the duration feature in the training dataset varies between 0 and 30000 and is divided into two fuzzy categories, {Short, Long}. In vector form this fuzzy set can be expressed as Dur = {Dur1, Dur2}.
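A small sketch of trapezoidal membership functions for the two duration categories is shown below; the breakpoints are illustrative assumptions, since the paper does not list the exact membership parameters.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical breakpoints over the 0..30000-second universe of the duration feature.
def duration_short(x):
    return trapmf(x, -1, 0, 50, 300)

def duration_long(x):
    return trapmf(x, 50, 300, 30000, 30001)

print(duration_short(10), duration_long(10))      # a 10 s connection is Short
print(duration_short(5000), duration_long(5000))  # a 5000 s connection is Long
```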

As explained before, 48 rules could be established given the number of fuzzy sets. However, after conducting some experiments, the number of fuzzy rules can be simplified to only 6 rules using a combination of the AND and OR logical operators. The complete rules are listed in Table 1.

The source bytes feature corresponds to the number of bytes transferred from the source host to the destination host. Not only does the huge range of its values, from 0 to 8,000,000, make this feature much more difficult to arrange, but its values also vary widely between the normal and attack classes. As a result, this feature has more categories than the previous one. It is divided into six categories, {Very Small, Small, Medium Small, Medium Large, Large, Very Large}, expressed in vector form as S = {S1, S2, S3, S4, S5, S6}. The destination bytes feature represents the amount of data transferred from the destination host to the source host. Its range of 0 to 5,500,000 is divided into four categories, {Small, Medium, Large, Very Large}, or Des = {Des1, Des2, Des3, Des4} in vector form. The output of the model is the prediction of whether the denial of service attack possibility is "low" or "high" according to the values of the input set. When the output value is "low", the input parameters are predicted as the normal class, while when it is "high", they are predicted as the denial of service attack class. Accordingly, this variable is divided into only two categories, {Low, High}, over its range of 0 to 10. The vector form expression is DoS = {DoS1, DoS2}. After determining the fuzzy sets and membership functions, the set of fuzzy rules can be established. However, the relationship between input and output parameters should be explored experimentally before generating the final set of rules to be used in the fuzzy system. This experiment produces the maximum possible number of fuzzy rules. To determine how many rules can be generated, a simple multiplication of the sizes of the fuzzy sets of the input parameters is used: the more categories the input parameters have, the more fuzzy rules there will be. As detailed previously, the duration feature has 2 categories, whereas

Now, let us define the general expression of the fuzzy rules. Rules are expressed as a logic implication a → b, where a represents the antecedent of the rule and b its consequent. The fuzzy rules inferred from the input parameters of the dataset can be expressed as:

F(Dur_k, S_l, Des_m) → DoS_x, with k ∈ [1, 2], l ∈ [1, 6], m ∈ [1, 4], x ∈ [1, 2]; k, l, m, x ∈ Z (3)

An example rule in logical operation can be expressed as min{µ_Dur1(x), µ_S5(x), µ_Des3(x)}

(4)

which can be read as an if-then statement of the form defined in (2):

Table 1. Fuzzy rules for DoS attack prediction

Rule No   Duration   Source Byte     Dest. Byte    DoS    Operator
1         Long       Small           Medium        Low    OR
2         -          Medium Large    Very Large    Low    OR
3         -          Very Large      -             Low    OR
4         Short      Very Small      Small         High   AND
5         Short      Medium Small    Small         High   AND
6         Short      Large           Large         High   AND
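The sketch below shows how rules of this form can be evaluated with Mamdani-style min/max logic: each AND rule fires at the minimum of its antecedent memberships, each OR rule at the maximum, and the output class is taken from the stronger side. The rule list mirrors the structure of Table 1 as reconstructed above, and the membership degrees are placeholder values standing in for the trapezoidal memberships discussed earlier.

```python
# Placeholder fuzzified inputs for one connection record (degrees in [0, 1]).
mu = {
    ("duration", "Short"): 0.9, ("duration", "Long"): 0.1,
    ("src", "Very Small"): 0.0, ("src", "Small"): 0.1, ("src", "Medium Small"): 0.0,
    ("src", "Medium Large"): 0.0, ("src", "Large"): 0.8, ("src", "Very Large"): 0.0,
    ("dst", "Small"): 0.2, ("dst", "Medium"): 0.1, ("dst", "Large"): 0.7,
    ("dst", "Very Large"): 0.0,
}

# (antecedents, operator, consequent) triples following the layout of Table 1.
rules = [
    ([("duration", "Long"), ("src", "Small"), ("dst", "Medium")], "OR", "Low"),
    ([("src", "Medium Large"), ("dst", "Very Large")], "OR", "Low"),
    ([("src", "Very Large")], "OR", "Low"),
    ([("duration", "Short"), ("src", "Very Small"), ("dst", "Small")], "AND", "High"),
    ([("duration", "Short"), ("src", "Medium Small"), ("dst", "Small")], "AND", "High"),
    ([("duration", "Short"), ("src", "Large"), ("dst", "Large")], "AND", "High"),
]

def firing(antecedents, op):
    degrees = [mu[a] for a in antecedents]
    return min(degrees) if op == "AND" else max(degrees)

low = max(firing(a, op) for a, op, out in rules if out == "Low")
high = max(firing(a, op) for a, op, out in rules if out == "High")
print("DoS is", "High" if high > low else "Low", (low, high))
```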

E. Fuzzy Intention Specification
Basically, it is difficult to pinpoint the motivation behind the launch of a denial of service attack. In the early days, denial of service attacks were launched with almost no motivation at all. Most attackers did it just for fun or simply as a proof that something could be done; taking down a popular website or obtaining recognition in the community are examples of this motivation. The fight for supremacy was also considered a motivation behind such launches: using Internet Relay Chat (IRC), attackers organize multiple attacking machines and trade and exchange illegal code and information with others. On the other hand, recent attacks, such as taking down Estonia's, Georgia's and Russia's Internet infrastructure, as well as attacks on China, Burma, the Kazakhstan Government, Kyrgyzstan, etc., have turned denial of service attacks into politically motivated ones [2, 30]. Since the majority of victims refuse to admit that they have experienced a denial of service attack, the authorities, such as the police, are unable to track down and catch the suspects. Hence,

ISSN: 2088-6578

Yogyakarta, 12 July 2012

it remains hard to determine precisely what the attacker's intention is without knowing who is behind the attacks. Unlike other attacks, denial of service attackers do not intend to steal data, gain unauthorized access, or modify critical settings of the victim machines; all they want to do is stop the target machines from offering services to their legitimate users. When an authorized user cannot access information from a source to which he or she is entitled, the ultimate goal of the attackers has been reached. Conversely, from the users' perspective, the more available the server is, the more easily the information can be obtained. This contradictory relationship was discussed in [20] as a fuzzy model for availability under the users' perspective, combined with the amount of information content. Using the complement operation, the model can indirectly be inverted to the attackers' perspective. The model is expressed as follows:

µ(t) × A (under the user perspective) (3)
(1 − µ(t)) × (1 − A) (under the attacker perspective) (4)

where t is the transaction duration and A is the availability factor.

Now, let us extend and simulate the above models. Suppose there are two input parameters, Information Content Fetched and Latency, and one output parameter called the attack state. When the information content can be fetched completely with a short latency, the users' expectation has been fulfilled; in that case the ultimate intention of the attackers is not reached, which means that the attack state is unsuccessful. Based on this contradictory relationship, the attacker's intention can be easily defined.
F. Hierarchical Fuzzy Inference System for Availability
The previous sections have discussed two fuzzy models: one concerning DoS attack prediction based on the dataset, and the other modeling the attack state in accordance with the intention of the attackers. Using a hierarchical fuzzy inference system, the rate of server availability can be evaluated based on the outputs of both the DoS attack prediction and the intention specification model. Fig. 2 illustrates the architecture of the hierarchical fuzzy inference system for server availability.
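A sketch of how the two models could be chained is given below: the first stage produces a DoS-attack degree and an attack-state degree (here from placeholder functions), and a second, hierarchical stage combines them into a server availability estimate. The expectation membership and the combination rule (availability falls as either first-stage output rises) are illustrative assumptions; the actual rule base behind Fig. 2 is not listed in the paper.

```python
def dos_rate(duration, src_bytes, dst_bytes):
    """Stage 1a placeholder: degree in [0, 1] that the traffic is a DoS attack.
    In the full system this would come from the rule base of Table 1."""
    return 0.8

def attack_state(content_fetched, latency):
    """Stage 1b placeholder: degree in [0, 1] that the attacker's intention is reached.
    Low fetched content and high latency mean the user expectation mu(t) is not met."""
    mu_t = max(0.0, 1.0 - latency / 10.0)          # assumed expectation membership
    return (1.0 - mu_t) * (1.0 - content_fetched)  # attacker-perspective complement

def availability(dos, state):
    """Stage 2: hierarchical combination into a server-availability degree.
    Assumption: availability is the complement of the stronger of the two evidences."""
    return 1.0 - max(dos, state)

d = dos_rate(2, 5_000_000, 100)
s = attack_state(content_fetched=0.3, latency=8.0)
print(round(availability(d, s), 3))
```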

CITEE 2012

IV. EXPERIMENT RESULT

The implementation of the prediction model detailed in the previous section is carried out using the fuzzy toolbox provided in MATLAB 7.12.0 (R2011a). For this purpose, a Mamdani fuzzy inference system with centroid (centre of gravity, CoG) defuzzification was used. A fuzzy inference system comprises a fuzzification interface, a set of rules, a decision-making unit, and a defuzzification process. The Mamdani fuzzy inference system expects the output membership functions to be fuzzy sets; the aggregation process results in a fuzzy set for each output variable, which is then defuzzified to obtain the real final output. Fig. 3 illustrates an example of performing the fuzzy inference system calculation. First, the fuzzy inference system file is loaded into the workspace; in this case the file is DosPrediction.fis. Next, the training or testing dataset is loaded into the workspace to be used as input for the previously loaded fuzzy system. Here, the dataset file is Testing-back.mat, which contains the back attack class from the testing dataset. Lastly, DosPrediction.fis is evaluated using the Testing-back.mat data, producing output numbers in the range 0 to 10. If the output number is greater than 5, the data is considered to be an attack (DoS is High), which means that the prediction is accurate, since back is one of the denial of service attack types. Otherwise, the prediction is incorrect.

Figure 3. Fuzzy Inference System calculation process

To evaluate the performance of the proposed system, a performance metric for predictive modeling called a coincidence matrix is used. Usually, when the matrix consists of only two classes, this method is used to calculate the true positive (TP) rate, true negative (TN) rate, false positive (FP) rate, accuracy, and precision. However, when the classification problem is not binary (i.e. there are more than two class labels), the performance metrics become limited to the per-class accuracy rates and the overall classifier accuracy [59]. The latter method is used in this project, as the coincidence matrix consists of six classes: one normal class and five classes of denial of service attack (back, Neptune, ping of death (PoD), Smurf, and teardrop). Table 2 presents the data that are used to train and test the proposed model. There are in total 15,445 records taken from the training

Figure 2. Fuzzy Inference System for Server Availability

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

dataset. Table 3 summarizes the true classification rate (TCR) of each class both for training and testing phase.



• We have combined the two feature selection results proposed by Mukkamala [28] and Kayacik [29] to identify the most relevant features to be used in the proposed model.
• We have analyzed the patterns of normal and attack data to assign the membership functions of each selected feature.
• We have carefully defined a set of six simple rules covering 48 possible rules for the prediction model.
• We have constructed an extension of the fuzzy approach for the intention of DoS attackers, as well as a hierarchical fuzzy inference system for server availability.

Table 2. Total number of records used for the training and testing phases

              Dataset
Class Label   Training   Testing
Normal        9711       13449
Back          359        197
Neptune       4657       8282
PoD           41         38
Smurf         665        529
Teardrop      12         188
Total         15445      22683
(38,128 records in total)

The proposed prediction model is by its nature far from perfect and, at the same time, it can potentially be improved in both the design and the implementation phase. The following paragraphs describe some ideas that may be worth investigating for future improvement.

Table 3. True classification rate for each class

           True Classification Rate
Class      Training       Testing
Normal     0.9697251      0.9349394
Back       0.9582173      0.9695431
Neptune    0.9997853      1
PoD        1              1
Smurf      0.8947368      1
Teardrop   0              0
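Given the record counts in Table 2 and the true classification rates in Table 3, the overall classifier accuracy discussed in the next paragraph can be reproduced by summing the correctly classified records of each class and dividing by the total number of cases. A minimal sketch of this calculation for the testing phase, using the values from the two tables:

```python
# Testing-phase record counts (Table 2) and true classification rates (Table 3).
testing = {
    "Normal":   (13449, 0.9349394),
    "Back":     (197,   0.9695431),
    "Neptune":  (8282,  1.0),
    "PoD":      (38,    1.0),
    "Smurf":    (529,   1.0),
    "Teardrop": (188,   0.0),
}

correct = sum(round(n * tcr) for n, tcr in testing.values())
total = sum(n for n, _ in testing.values())
print(correct, total, correct / total)  # 21614 22683 ~0.9529
```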

The evaluation of the proposed system in both the training and testing phases provides a true classification rate for each class in the range of 89% to 100%. Now, based on the coincidence matrix and the true classification rates, the overall accuracy of the proposed system can easily be calculated. Precisely, the overall accuracy is obtained as the sum of the correctly classified records of each class divided by the total number of cases. Accordingly, the following expressions give the overall accuracy of the system for the training and testing data:

Training: overall classifier accuracy = 15053/15445 = 0.97462
Testing: overall classifier accuracy = 21614/22683 = 0.952872

It can be inferred that the overall accuracy of the system in the training phase is 97%, which decreases to 95% in the testing phase. Using a simple average, the overall accuracy of the denial of service attack prediction model is 96%, which means that the proposed method is effective in predicting denial of service attacks.

ISSN: 2088-6578

It would be interesting to conduct several experiments allowing more features from the dataset to be included in the prediction model. For instance, one experiment could be performed using 10 features while others involve 20, 30, or all 41 features; the overall accuracies resulting from each experiment could then be compared with each other. It is also suggested to use a Sugeno-type inference system to enhance the efficiency of the defuzzification process. This inference system provides a computational simplification compared with the more general Mamdani type. The results could then be compared with the previous method to observe whether or not the new system is better than the old one. It would also be useful to use real-world network data to support the implementation of the fuzzy intention specification and of the fuzzy system that predicts server availability. A network data collector or a network intrusion detection system such as Snort could be utilized. Thus, the model's performance could be simulated as well as evaluated explicitly. Last but not least, turning the model into an application implemented in a programming language could make a more tangible contribution to society in terms of denial of service attack prediction.

ACKNOWLEDGMENT

The author would like to acknowledge the significant inspiration from Dr. Theo Tryfonas and Richard Craig of the University of Bristol. We would also like to acknowledge the influence of Iwan Syarif and Gunawan Ariyanto of the University of Southampton in providing access to references on the NSL-KDD dataset, performance analysis and fuzzy implementation.

REFERENCES

V. CONCLUSION AND FURTHER WORK

The main accomplishments of this project are described as follows:

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

[1] G. Loukas, G. Oke, "Protection against Denial of Service attacks: A Survey", The Computer Journal, vol. 53, issue 7, pp. 1020-1037, 2010, first published online August 19, 2009, doi:10.1093/comjnl/bxp078.
[2] R. Richardson, "2008 CSI Computer Crime and Security Survey", accessed online on August 8th 2011, available at: http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf.
[3] S. Peter, "14th Annual CSI Computer Crime and Security Survey, Executive Summary", accessed online on August 8th 2011, available at: http://www.personal.utulsa.edu/~jameschildress/cs5493/CSISurvey/CSISurvey2009.pdf.
[4] BBC News, "Internet-based attacks on critical systems rise", News Technology, BBC, http://www.bbc.co.uk/news/technology13122339, 19 April 2011, last visited 15 May 2011.
[5] J. Armitage, "Yankee Group's 2011 Predictions: 4G fuels the decade of disruption", Yankee Group Research, Inc., December 2010, accessed online on August 9th 2011, available at: http://web.yankeegroup.com/rs/yankeegroup/images/2011Predictions_Dec2010.pdf.
[6] R. Mahajan, S.M. Bellovin, S. Floyd, J. Ionnidis, V. Paxson, and S. Shenker, "Controlling high bandwidth aggregates in the networks", ACM SIGCOMM Computer Communication Review, vol. 32, issue 3, pp. 62-73, 2002.
[7] H. Wang, D. Zhang, and K.G. Shin, "Change-point monitoring for the detection of DoS attacks", IEEE Transactions on Dependable and Secure Computing, vol. 1, issue 4, pp. 193-208, 2004.
[8] A.C. Snoeren, C. Partridge, L.A. Sanchez, C.E. Jones, F. Tchakountio, S. Kent, and W.T. Straver, "Hash-based IP Traceback", in Proceedings of the 2001 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, ACM SIGCOMM Computer Communication Review, vol. 31, issue 4, October 2001.
[9] Y. Jing, P. Tu, X. Wang, G. Zhang, "Distributed-log-based scheme for IP Traceback", IEEE 5th International Conference on Computer and Information Technology (CIT '05), pp. 711-715, 2005.
[10] J.E. Dickerson and J.A. Dickerson, "Fuzzy network profiling for intrusion detection", in Proceedings of NAFIPS 19th International Conference of the North American Fuzzy Information Processing Society, Atlanta, pp. 301-306, 2000.
[11] J.E. Dickerson, J. Juslin, O. Koukousoula, and J.A. Dickerson, "Fuzzy Intrusion Detection", in Proceedings of the IFSA World Congress and 20th North American Fuzzy Information Processing Society (NAFIPS) International Conference, Vancouver, British Columbia, vol. 3, pp. 1506-1510, 2001.
[12] J. Xin, J.E. Dickerson, and J.A. Dickerson, "Fuzzy feature extraction and visualization for intrusion detection", in the 12th IEEE International Conference on Fuzzy Systems (FUZZ'03), volume 2, St. Louis, MO, USA, IEEE Press, pp. 1249-1254, May 2003.
[13] T. Tuncer and Y. Tatar, "Detection SYN Flooding attacks using fuzzy logic", in Proceedings of the International Conference on Information Security and Assurance (ISA'08), Washington, DC, USA, IEEE Computer Society, pp. 321-325, April 2008.
[14] H. He, X. Nan, and B. Liu, "Detecting anomalous network traffic with combined fuzzy-based approaches", Advances in Intelligent Computing, Lecture Notes in Computer Science, Volume 3645, pp. 433-442, 2005.
[15] W. Wei, Y. Dong, D. Lu, and G. Jin, "Combining cross-correlation and fuzzy classification to detect Distributed Denial of Service attacks", Computational Science – ICCS 2006, Lecture Notes in Computer Science, Volume 3994, pp. 57-64, 2006.

CITEE 2012

[16] W. Jiangtao, and Y. Geng, “An intelligent method for real-time detection of DDoS bttack based on fuzzy logic”, Journal of Electronics (China), Volume 25, Issue 4, 2008. [17] K. Haslum, A. Abraham, and S. Knapskog, “Dips: A framework for distributed intrusion prediction and prevention using Hidden Markov models and online fuzzy risk assessment”, In 3rd International Symposium on Information Assurance and Security, IEEE Computer Society press, Vol: 1, pp.183–188, 2007. [18] K. Haslum, A. Abraham, and S. Knapskog, “Fuzzy online risk assessment for distributed Intrusion prediction and prevention system”, in Proceedings of the 10th International Conference on Computer Modelling and Simulation UKSIM ’08, IEEE Computer Society Washington, DC, USA, pp. 216-223, 2008. [19] G. Zhang, S. Jiang, G.Wei, Q. Guan, “A prediction-based detection algorithm against Distributed Denial of Service attack”, in Proceedings of the 2009 International Convference on Wireless Communication and Mobile Computing: Connecting the World Wirelessly (IWCMC ’09), ACM New York, NY, USA, pp. 106110, 2009. [20] T. Tryfonas, “An alternative model for information availability: Specifying the intentions of DoS attackers”, in Proceedings of the Conference Security and Protection of Information 2007, Idet Brno, Czech Republic, pp. 121-128, May 2007. [21] D. Moore, G. Voelker, and S. Savage, "Inferring Internet Denialof-Service activity," Proceedings of the 10th USENIX Security Symposium, August 2001, pp. 9–22. [22] J. Mirkovic, “Internet Denial of Service: Attacks and Defense Mechanisms”, Prentice Hall PTR, December 2004, ISBN: 0-13147573-8. [23] R. B. Lee, “Distributed Denial of Service: Taxonomies of attacks, Tools and Countermeasures”, Princeton University, Technical Report, 2004. [24] T. J. Ross, “Fuzzy Logic with Engineering Applications”, Second edition, John Wiley & Sons, Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO 19 8 SQ, England, pp. 652. 2000, ISBN: 0-470-86074-X. [25] 25-S.Savage, D. Wetherall, A. Karlin, and T. Anderson, “Practical network support for IP Traceback”, In Proceedings of 2000 ACM SIGCOMM Conference, August 2000. [26] A. Abraham, R. Jain, J. Thomas, and S. Y. Han, “D-SCIDS: Distributed Soft computing Intrusion Detection System”, Journal of Network and Computer Applications, Volume 30, Issue 1, pp. 81- 98, January 2007. [27] M. Tavalee, E. Bagheri, W. Lu, A.A. Ghorbani, “A detailed analysis of the KDD CUP 99 dataset”, in Proceedings of the 2009 IEEE Symposium on Computational Intelligence Insecurity and Defense Applications (CISDA 2009), Canada, pp. 334-340, December 2009. [28] Mukkamala, and A. H. Sung, “Identifying significant features for network forensic analysis using artificial intelligent techniques”, International Journal of Digital Evidence, Volume 1, Issue 4, Winter 2003. [29] H.G. Kayacik, A.N. Zincir-Heywood, and M. I. Heywood, “Selecting feature for intrusion detection: A feature relevance analysis on KDD 99 intrusion detection datasets”, in Third Annual Conference on Privacy, Security and Trust, St. Andrew, New Brunswick, Canada, October 2005. [30] 30-J. Nazario, “Politically motivated Denial of Service attacks”, in Czosseck, C. & Geers, K. (Eds.) The Virtual Battlefield: Perspectives on Cyber Warfare. Amsterdam: IOS Press, pp. 163181, 2009.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Integrating Feature-Based Document Summarization as Feature Reduction in Document Clustering Catur Supriyanto, Abu Salam, Abdul Syukur Dept. of Postgraduate Computer Science Dian Nuswantoro University Semarang, Indonesia [email protected], [email protected], [email protected]

Abstract—Document clustering uses a term-document matrix to represent the collection of documents. A large number of documents leads to high dimensionality of the term-document matrix. This paper proposes document summarization as a feature reduction technique to reduce the dimensionality of the term-document matrix. We evaluate document summarization in document clustering, compared to feature selection and feature transformation as feature reduction. By comparing document summarization with the other feature reduction techniques, it was found that document summarization improves the accuracy and reduces the computation time of document clustering.

Keywords-component; feature based document summarization; document clustering; feature reduction

I. INTRODUCTION

Automatic document clustering is the task of grouping text documents into several different clusters. Automatic document clustering uses the vector space model (VSM) to represent the collection of documents. Unfortunately, the problem of the VSM is the high dimensionality of the term-document matrix [1]. This problem reduces the performance of automatic document clustering. The high dimensionality of the term-document matrix can be reduced by stopword removal and stemming. Sremathy and Balamurugan [2] stated that using stopword removal and stemming can improve the accuracy of a classifier. Another approach to the problem of the term-document matrix is feature reduction. Basically, feature reduction can be classified into feature selection and feature transformation. Feature selection selects the important terms to be used in clustering, and feature transformation transforms the high-dimensional matrix into a small-dimensional matrix. The purpose of both is to reduce the high dimensionality of the term-document matrix. Implementing feature selection can speed up the computation of document clustering [3]. In order to overcome the high dimensionality of the term-document matrix, this paper proposes feature-based automatic document summarization. This approach reduces the dimensionality of the VSM by selecting the important sentences of a document before the collection of documents is preprocessed. The outline of this paper is as follows: Section 2 describes the related work. Section 3 describes the method-

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

ology of the research. Section 4 describes the dataset and shows the performance analysis of the proposed approach. Section 5 presents the conclusion and future work.

II. RELATED WORK

The aim of feature reduction is to speed up the processing time of a system without decreasing the accuracy. Liu et al. [4] have compared document frequency (DF), term contribution (TC), term variance (TV) and term variance quality (TVQ) as unsupervised feature selection methods for document clustering. The absence of predefined labels in document clustering is the reason for using unsupervised feature selection. The experiment shows that unsupervised feature selection can improve the accuracy of document clustering. Meng and Lin [5] used feature selection and Latent Semantic Indexing (LSI) via Singular Value Decomposition (SVD) for text categorization. SVD, as the second stage of feature reduction, is expected to discover the semantic relationships among texts in the collection of documents. Xiao-Yu et al. [6] have applied automatic document summarization to document classification. Automatic document summarization is used to reduce the dimensionality of the vector space model and the complexity of categorization. Experiments were carried out on several news datasets. The results of the experiments show the advantage of automatic document summarization as feature reduction in document classification.

III. METHODOLOGY

In this section we describe our approach to clustering the documents; the overall flow is sketched below. First, we use feature-based document summarization to select the important sentences that represent the topic. Next, the selected sentences are preprocessed using tokenization, stopword removal and stemming to construct the term-document matrix. Feature selection and feature transformation are also used to reduce the dimensionality of the term-document matrix. Fig. 1 shows our proposed document clustering process. Finally, the clustering algorithm is performed. A detailed description of each stage is given in the next subsections.
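The sketch below makes the stage order of the proposed pipeline explicit (summarise, preprocess, weight, reduce, cluster). The function bodies are stubs for illustration only, not the authors' implementation; names and return values are assumptions.

```python
def summarize(doc):           # feature-based extractive summarization (Section III-A)
    return doc

def preprocess(doc):          # tokenization, stopword removal, Porter stemming (III-B)
    return doc.lower().split()

def tfidf_matrix(token_docs):  # term weighting (III-C)
    return [[1.0] * 3 for _ in token_docs]

def reduce_features(matrix):   # TC feature selection or LSI/SVD (III-D, III-E)
    return matrix

def kmeans(matrix, k):         # clustering (III-F); stub assigns clusters round-robin
    return [i % k for i in range(len(matrix))]

docs = ["first news article ...", "second news article ...", "third one ..."]
summaries = [summarize(d) for d in docs]
tokens = [preprocess(s) for s in summaries]
clusters = kmeans(reduce_features(tfidf_matrix(tokens)), k=2)
print(clusters)
```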


Fig. 1. Clustering Process

A. Feature-Based Document Summarization
Document summarization can be classified into extractive and abstractive summarization [7]. Extractive summarization gives a score to each sentence in a document and selects the important sentences that have a high score. Abstractive summarization attempts to understand the concept of each sentence and uses natural language processing to rewrite each sentence. We focus on feature-based (extractive) document summarization. This paper uses eight features, namely title feature, sentence length, term weight, sentence position, sentence-to-sentence similarity, proper noun, thematic word and numerical data. A detailed explanation of these features is given in Suanmali et al. [8].

B. Preprocessing
The text preprocessing used in this paper consists of tokenization, stopword removal and stemming. We stemmed the terms using the Porter stemming algorithm.

C. Term Weighting
Term weighting is performed to transform a text document into a numeric representation. Term Frequency-Inverse Document Frequency (TFIDF) is used to measure the importance of a term in the document collection [9]. TFIDF is computed as in (1) and (2):

TFIDF = TF × IDF    (1)
IDF = log(N / DF)    (2)

where TF is the frequency of term i in a document, N is the number of documents, DF is the number of documents containing term i, and IDF is the Inverse Document Frequency.

D. Feature Selection
The aim of feature selection is to select the important features or terms that can be used for document clustering. By applying an appropriate feature selection method, using a small set of features can improve the performance of document clustering, especially its time performance. There are two types of feature selection, namely supervised and unsupervised feature selection. This paper focuses on unsupervised feature selection, since there are no predefined labels in document clustering. Unsupervised feature selection methods include Document Frequency (DF), Term Contribution (TC), Term Variance (TV) and Term Variance Quality (TVQ). We used TC to select features, since TC performs better than the others [4]. TC is computed using (3) and (4):

TC(t_k) = Σ_{i,j; i≠j} f(t_k, D_i) × f(t_k, D_j)    (3)
f(t_k, D_j) = TF_j × log(N / DF_j)    (4)

where TF_j is the term frequency of t_k in document D_j, N is the number of documents, and DF_j is the number of documents containing the term.

E. Latent Semantic Indexing
Latent Semantic Indexing (LSI) via Singular Value Decomposition (SVD) is used to transform the high-dimensional term-document matrix into a lower-dimensional term-document matrix. SVD attempts to find the semantic correspondence between terms and documents in the document collection [5]. Let A be the term-document matrix of size m × n, where m is the number of terms and n is the number of documents. The singular value decomposition of the term-document matrix A is defined as in (5):

A = U Σ V^T    (5)

where U is the term vector matrix, Σ is the diagonal matrix of singular values and V^T is the document vector matrix. The matrix V^T is then used to cluster the document collection.

F. Clustering Algorithm
K-means is the clustering algorithm used to cluster the document collection. K-means attempts to partition the documents into k clusters [10]. K-means randomly picks k documents as initial centroids, then computes the similarity between each document and each centroid. Each document is placed in the cluster of the centroid to which it is most similar. New centroids are determined once all documents have been assigned to their nearest cluster. The process of determining the centroids and assigning documents to clusters is repeated until the centroids converge. Table I shows the pseudocode of the K-means algorithm [11]. The cosine similarity of two documents is defined using (6):

Cosine(d_A, d_B) = Σ(w_A × w_B) / ( √(Σ w_A²) × √(Σ w_B²) )    (6)

where the sums run over the words of documents A and B, and w_A and w_B are the TFIDF values of each term in document A and document B.
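To make the weighting scheme in (1)-(2) concrete, the following is a minimal Java sketch that builds TFIDF vectors for a small, already preprocessed corpus. The data structures and method names are illustrative assumptions; the paper itself relies on Lucene for this step.

import java.util.*;

public class TfIdfSketch {

    // Each document is a list of preprocessed terms; returns one TFIDF vector per document.
    static double[][] tfidf(List<List<String>> docs) {
        Map<String, Integer> vocab = new LinkedHashMap<>();
        for (List<String> d : docs)
            for (String t : d) vocab.putIfAbsent(t, vocab.size());

        int n = docs.size(), v = vocab.size();
        double[][] w = new double[n][v];

        // Raw term frequencies (TF).
        for (int i = 0; i < n; i++)
            for (String t : docs.get(i)) w[i][vocab.get(t)] += 1.0;

        // Document frequencies (DF) and the IDF = log(N / DF) factor from Eq. (2).
        for (int j = 0; j < v; j++) {
            int df = 0;
            for (int i = 0; i < n; i++) if (w[i][j] > 0) df++;
            double idf = Math.log((double) n / df);
            for (int i = 0; i < n; i++) w[i][j] *= idf;   // TFIDF = TF * IDF, Eq. (1)
        }
        return w;
    }
}

The cosine similarity of (6) is shown together with the clustering loop in the sketch that follows Table I.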


TABLE I. K-MEANS ALGORITHM
Input: Document collection D = {d1, d2, d3, ..., dn}; number of clusters k
Output: k clusters
Process:
1. Choose k documents randomly as the initial centroids (cluster centers).
2. Calculate the distance of each document to each centroid using cosine similarity; each document is placed in the cluster of its closest centroid.
3. Determine the new centroids.
4. Return to step 2 if the new centroids differ from the previous centroids.
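The steps of Table I can be pictured with the minimal Java sketch below, assuming the TFIDF document vectors are already available. It is only an illustration of the loop structure, not the authors' implementation; the random seed and convergence test are assumptions.

import java.util.Arrays;
import java.util.Random;

public class KMeansSketch {

    // Cosine similarity between two TFIDF vectors, as in Eq. (6).
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // docs[i] is the TFIDF vector of document i; returns a cluster label per document.
    static int[] kMeans(double[][] docs, int k, int maxIter) {
        Random rnd = new Random(42);
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++) {                 // step 1: random initial centroids
            centroids[c] = docs[rnd.nextInt(docs.length)].clone();
        }
        int[] labels = new int[docs.length];
        for (int iter = 0; iter < maxIter; iter++) {
            // step 2: assign each document to the most similar centroid
            for (int i = 0; i < docs.length; i++) {
                int best = 0;
                double bestSim = -1;
                for (int c = 0; c < k; c++) {
                    double sim = cosine(docs[i], centroids[c]);
                    if (sim > bestSim) { bestSim = sim; best = c; }
                }
                labels[i] = best;
            }
            // step 3: recompute each centroid as the mean of its assigned documents
            double[][] next = new double[k][docs[0].length];
            int[] counts = new int[k];
            for (int i = 0; i < docs.length; i++) {
                counts[labels[i]]++;
                for (int t = 0; t < docs[i].length; t++) next[labels[i]][t] += docs[i][t];
            }
            boolean changed = false;
            for (int c = 0; c < k; c++) {
                if (counts[c] == 0) continue;         // keep the old centroid for empty clusters
                for (int t = 0; t < next[c].length; t++) next[c][t] /= counts[c];
                if (!Arrays.equals(next[c], centroids[c])) changed = true;
                centroids[c] = next[c];
            }
            if (!changed) break;                      // step 4: stop when centroids converge
        }
        return labels;
    }
}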

IV. EXPERIMENTS

Fig. 2. Accuracy of different feature reduction

This paper uses Lucene as the Java library to implement the methodology. Lucene provides standard text preprocessing such as tokenization, stopword removal and stemming.

A. Dataset
We used a total of 300 documents collected from Yahoo news. The document collection belongs to five different classes (sport, economy, politics, entertainment and business). Each class contains 30 documents. These documents were clustered using the K-means clustering algorithm. Random selection is used to choose the initial centroids of the clustering algorithm. Each configuration was executed 5 times to obtain the average performance.

B. Evaluation Measure
To evaluate the quality of document clustering, we employ the F-measure, a standard evaluation measure widely used in document clustering. The F-measure combines recall and precision. Recall, precision and F-measure are defined as in (7), (8) and (9), respectively:

Recall(i, j) = N_{i,j} / N_i    (7)
Precision(i, j) = N_{i,j} / N_j    (8)
F-measure = (2 × Recall × Precision) / (Recall + Precision)    (9)

where N_{i,j} is the number of documents of class i in cluster j, N_i is the number of documents of class i and N_j is the number of documents in cluster j.
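The evaluation of (7)-(9) can be sketched in a few lines of Java. The counts matrix layout is an assumption used only for illustration.

public class ClusterEvaluation {

    // counts[i][j] = number of documents of class i that ended up in cluster j (N_ij).
    // Returns the F-measure of class i with respect to cluster j, following Eqs. (7)-(9).
    static double fMeasure(int[][] counts, int i, int j) {
        int nij = counts[i][j];
        int ni = 0, nj = 0;
        for (int c = 0; c < counts[i].length; c++) ni += counts[i][c];   // N_i: size of class i
        for (int r = 0; r < counts.length; r++)    nj += counts[r][j];   // N_j: size of cluster j
        if (nij == 0) return 0;
        double recall = (double) nij / ni;
        double precision = (double) nij / nj;
        return 2 * recall * precision / (recall + precision);
    }
}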

C. Experiment Result
Our experiment evaluates the influence of feature selection (FS), feature transformation (LSI) and document summarization as feature reduction in document clustering. In our experiment, we summarized documents to 30%, 50% and 80% of their original length. For feature selection, we used 20% and 40% of the number of terms. The minimum of the number of documents and the number of terms is used to set the parameter k in LSI via


Fig. 3. Time taken of different feature reduction

SVD. The influence of each feature reduction method is compared to the original k-means, which uses no feature reduction. Fig. 2 presents the F-measure of the different feature reduction configurations in document clustering. As shown in this evaluation, feature-based document summarization improved the accuracy of the original K-means when we summarized the documents to 30% combined with FS 20% and LSI. In contrast, the accuracy of document clustering decreased when we summarized documents to 80% with FS 20% and LSI. The results presented in Fig. 3 demonstrate that using document summarization as feature reduction can reduce the computation time of document clustering. Using 30% document summarization, FS 20% and LSI, the computation time of the original k-means clustering was reduced from 12,300 milliseconds to 6,500 milliseconds, a reduction of about 47%.

V. CONCLUSION

This paper studied and evaluated the influence of feature-based document summarization as feature reduction in document clustering. The experimental results show the effectiveness of document summarization as feature reduction. Compared to the original k-means, the accuracy of document


clustering can be increased and the computation time can be decreased by implementing feature-based document summarization in document clustering. As future work, we plan to evaluate and compare other document summarization approaches as feature reduction in document clustering.

REFERENCES
[1] M. Thangamani and P. Thangaraj, "Integrated clustering and feature selection scheme for text documents," Journal of Computer Science, vol. 6, no. 5, pp. 536-541, 2010.
[2] J. Sreemathy and P. S. Balamurugan, "An efficient text classification using knn and naive bayesian," International Journal on Computer Science and Engineering, vol. 4, no. 3, pp. 392-396, 2012.
[3] M. K. Mugunthadevi, M. S. Punitha, and D. Punithavalli, "Survey on feature selection in document clustering," International Journal on Computer Science and Engineering, vol. 3, no. 3, pp. 1240-1244, 2011.
[4] L. Liu, J. Kang, J. Yu, and Z. Wang, "Comparative study on unsupervised feature selection methods for text clustering," in Proceedings of NLP-KE '05, pp. 597-601, 2005.
[5] J. Meng and H. Lin, "A two-stage feature selection method for text categorization," in Proceedings of the Seventh International Conference on Fuzzy Systems and Knowledge Discovery, pp. 1492-1496, 2010.
[6] J. Xiao-Yu, F. Xiao-Zhong, W. Zhi-Fei, and J. Ke-Liang, "Improving the performance of text categorization using automatic summarization," in Proceedings of the International Conference on Computer Modeling and Simulation, pp. 347-351, 2009.
[7] V. Gupta and G. S. Lehal, "A survey of text summarization extractive techniques," Journal of Emerging Technologies in Web Intelligence, vol. 2, no. 3, pp. 258-268, 2010.
[8] L. Suanmali, N. Salim, and M. S. Binwahlan, "Fuzzy logic based method for improving text summarization," International Journal of Computer Science and Information Security, vol. 2, no. 1, 2009.
[9] S. K. and N. N., "Semantically enhanced document clustering based on pso algorithm," European Journal of Scientific Research, vol. 57, no. 3, pp. 485-493, 2011.
[10] M.-U.-S. Shameem and R. Ferdous, "An efficient k-means algorithm integrated with jaccard distance measure for document clustering," in Proceedings of the First Asian Himalayas International Conference on Internet, 2009.
[11] M. H. Dunham, Data Mining Introductory and Advanced Concepts. Pearson Education, 2006.


A GPGPU Approach to Accelerate Ant Swarm Optimization Rough Reducts (ASORR) Algorithm Erika Devi Udayanti1, Yun-Huoy Choo2, Azah Kamilah Muda3, Fajar Agung Nugroho4 Faculty of Computer Science - Dian Nuswantoro University1,4 Jl Nakula I No. 5-11 Semarang 50131, Indonesia Faculty of Information and Communication Technology2,3 Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Melaka, Malaysia [email protected], [email protected], [email protected], [email protected]

Abstract— Reducts can be used to discern all discernible objects in the original information system. In order to find reducts, applications of rough set theory use a discernibility matrix. The Ant Swarm Optimization Rough Reducts (ASORR) algorithm is used in the rough reducts calculation to identify an optimal set of significant attributes. For a complex matrix calculated on a single CPU, building the discernibility matrix takes a long computing time, whereas the execution time of an algorithm needs to be considered. This paper proposes a parallel approach to accelerate the execution time of the ASORR algorithm by utilizing the GPGPU, which supports high-speed parallel computing; it is implemented with CUDA from NVIDIA. The experiment results indicate that parallel ASORR achieves acceleration on the GPGPU. Keywords- rough reducts, ant swarm optimization, parallel approach, GPGPU, CUDA

I.

INTRODUCTION

Nowadays, data mining has emerged as an intelligent solution for extracting information from large-scale databases, and one of its purposes is knowledge discovery [1]. Knowledge Discovery in Databases (KDD) refers to the overall process of discovering useful knowledge from large datasets, and data mining is a particular element of this discovery process [2],[3]. According to Tripathy et al., the data stored in a database cannot be separated from possible human errors made during collection or for other reasons [3]. Such data are therefore full of uncertainty, and Rough Set Theory deals with this uncertainty [3],[4],[5]. To discern all discernible objects in the original information system, reducts are used. As mentioned by Pratiwi et al., "reducts is the process of reducing an information system such that the set of attributes of the reduced information system is independent and no attribute can be eliminated further without losing some information from the system" [5]. In such applications of rough set theory, there is a finite set of objects described by a finite set of attributes. In order to find reducts, these applications use a discernibility matrix. An information table represents object values on attributes, with objects represented by rows and attributes represented by columns [6].


Skowron developed a distinct theoretical approach to reducts based on a discernibility matrix [7]. The discernibility matrix defined by Skowron is "a matrix representation for storing the sets of attributes that discern pairs of objects": if two objects have different values on at least one attribute, the two objects are discernible. Both the rows and the columns of the matrix correspond to the objects. An element of the matrix is the set of all attributes that distinguish the corresponding pair of objects, that is, the set of all attributes on which the two objects have distinct values. The ant swarm optimization for rough reducts (ASORR) algorithm, proposed by Pratiwi et al. [5], can effectively enhance the optimization performance of the rough reducts calculation. This algorithm is a hybrid of two population-based algorithms, namely particle swarm optimization and ant colony optimization [8]. However, for a complex matrix, a single CPU takes a long computing time to build the discernibility matrix, whereas the execution time of an algorithm needs to be considered; in the use of rough reducts, time complexity analysis of the algorithms is often lacking [3]. To speed up the computation of reducts, a parallel approach is proposed in this study. Currently, the Graphics Processing Unit (GPU) has transformed into the GPGPU, shifting from graphics-centric to non-graphics-centric computation [9],[10]. A lot of research has explored the capabilities of the personal computer's graphics card to achieve acceleration by utilizing the parallel execution of the GPU. Computational tasks that were originally performed by the CPU can now be executed by the GPU. Unlike the CPU, which performs largely sequential execution on a small number of cores, the GPU can execute many operations simultaneously on its many-core processors. Previous research has been conducted on speeding up the execution time of algorithms in this way. Zhou et al. used the GPGPU to parallelize PSO, and its running time is greatly shortened compared to standard PSO on the CPU [11]. You used ant colony optimization (ACO), which has a parallel structure, for the travelling salesman problem (TSP) and implemented it in parallel using NVIDIA's CUDA [12]. Fang et al. parallelized frequent itemset mining (FIM) on the GPU; a bitmap data structure is used to exploit the parallelism of the GPU in order to accelerate the frequency counting operation [13].
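As a concrete illustration of the discernibility matrix described above, the following is a minimal Java sketch that builds, for every pair of objects in a small information table, the set of attributes on which the two objects differ. The data layout is an illustrative assumption, not the ASORR implementation; note that each pair can be computed independently, which is exactly what the GPU parallelization later exploits.

import java.util.*;

public class DiscernibilityMatrix {

    // objects[i][a] is the value of attribute a for object i (a simple information table).
    // Only i < j is stored since the matrix is symmetric.
    static List<Set<Integer>> build(int[][] objects) {
        List<Set<Integer>> entries = new ArrayList<>();
        int n = objects.length;
        int numAttrs = objects[0].length;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                Set<Integer> discerning = new HashSet<>();
                for (int a = 0; a < numAttrs; a++) {
                    if (objects[i][a] != objects[j][a]) discerning.add(a);
                }
                entries.add(discerning);   // an empty set means i and j are indiscernible
            }
        }
        return entries;
    }
}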


The aim of this paper is to accelerate the execution time of the ASORR algorithm with a parallel approach, implemented on GPGPU technology with CUDA from NVIDIA as the software platform. This paper is organized as follows. Section 1 summarizes the motivation. The next section outlines the preliminary concepts of ant swarm optimization for rough reducts. Section 3 introduces GPGPU technology. Section 4 explains the CUDA environment. The proposed framework of the parallel approach to accelerate the ASORR algorithm is described in Section 5. The experimental results are discussed in Section 6. The final section concludes the paper.

II.

ANT SWARM OPTIMIZATION FOR ROUGH REDUCTS (ASORR) ALGORITHM

In order to optimize rough reducts, a hybrid algorithm, the Ant Swarm Optimization Rough Reducts (ASORR) algorithm, has been proposed by Pratiwi et al. [5]. The algorithm is a hybridization of two swarm algorithms, Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO). In the first stage, PSO is applied to perform the global optimization and to update the positions of the particles, while ACO is used to reach the feasible solution space.


According to Globa et al. [14], "Parallel computing is the process of data processing, in which several machine operations can be performed simultaneously". In other words, it is the simultaneous use of multiple computational operations to solve a computational problem. The many-core design paradigm is focused on optimizing the execution of parallel code; many-core processors are characterized by a processing unit comprising a large number of "light-weight" cores. Initially, the graphics card was designed to handle image and graphics processing on the personal computer, which requires compute-intensive and highly parallel computation. In recent years, the graphics processing unit has moved toward non-graphics, general-purpose computing applications. The evolution of the GPU is described in [15]: hardware programmability and software development evolved together to transform the graphics card into the GPGPU. There are some advantages of the GPGPU compared to the CPU in terms of performance: (1) the GPU devotes more transistors to data processing rather than to data caching and flow control, which enables it to perform many more floating-point operations per second than the CPU; (2) it is especially well suited to problems that can be expressed as data-parallel computations with high arithmetic intensity, i.e., a high ratio of arithmetic operations to memory operations.

According to Shelokar et al. [8], in the Ant Swarm approach a simple pheromone-guided search mechanism of the ant colony is implemented, which acts locally to synchronize the positions of the PSO particles so that they quickly attain the feasible domain of the objective function [5]. Consider a large feature space of feature subsets, where each feature subset can be seen as a point or position in that space. If there are N total features, then there are 2^N possible subsets, differing from each other in length and in the features they contain. The optimal position is the subset with the smallest length and the highest classification quality. A particle swarm is placed into this feature space, with each particle occupying one position. The particles fly through this space with the goal of reaching the best position. The positions change over time, the particles communicate with each other, and they search around the local best and global best positions. Eventually, they should converge on good, possibly optimal, positions. It is this exploration ability of particle swarms that should equip them to perform feature selection and discover optimal subsets [8]. The flow of the whole algorithm is shown in Fig. 1.
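The encoding of a feature subset as a particle position can be pictured with the short Java sketch below. The fitness weighting is a hypothetical illustration only and is not the ASORR objective function; the quality value would come from a rough-set or classifier-based measure.

import java.util.Random;

public class FeatureSubsetParticle {

    // A particle position: a bit mask over the N features (true = feature selected).
    boolean[] position;

    FeatureSubsetParticle(int numFeatures, Random rnd) {
        position = new boolean[numFeatures];
        for (int i = 0; i < numFeatures; i++) position[i] = rnd.nextBoolean();
    }

    int subsetLength() {
        int len = 0;
        for (boolean b : position) if (b) len++;
        return len;
    }

    // Hypothetical fitness: prefer higher classification quality and shorter subsets.
    // The 0.9/0.1 weights are illustrative assumptions.
    double fitness(double quality) {
        double lengthPenalty = (double) subsetLength() / position.length;
        return 0.9 * quality + 0.1 * (1.0 - lengthPenalty);
    }
}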

III.

GPGPU TECHNOLOGY

Traditionally, software has been written for serial computation: it runs on a single computer with a single CPU, the problem is decomposed into a discrete series of instructions, and the instructions are executed sequentially. Only after an instruction is completed is the next one executed, so only one instruction executes at a time. The SIMD concept in the GPU, by contrast, runs the same instruction at the same time on each processor of a multiprocessor.


Fig. 1. Ant swarm optimization for rough reducts algorithm

Figure 2 shows the different architecture of CPU and GPU. GPU is designed for highly data-parallel computations with high arithmetic intensity, and CPU


devotes more transistors to data caching and flow control. GPU devices are organized as an array of Streaming Multiprocessors (SMs). These SMs comprise a number of Stream Processors (SPs), which are capable of executing many hundreds of threads simultaneously. Because the SPs are grouped into massively threaded SMs, the threads executing on the SPs of a single SM are able to cooperate and share the instruction cache and control logic, as well as a relatively small amount of on-chip, low-latency memory.


As mentioned above, the GPU's structure is different from the CPU's. The CPU has a limited number of processing cores, while the GPU can have hundreds of small processing units, making it able to run suitable parallel applications very efficiently. Although the evolution of GPUs has made them capable of performing computations that used to be performed by the CPU, a particular application still requires execution on both the CPU and the GPU: the CPU executes the sequential part, and the GPU performs the numerical processing. Therefore, a mediator is needed to be able to access both of them.

Fig. 2. The different architecture of CPU and GPU

IV. CUDA TECHNOLOGY

To be able to use and take advantage of the GPU for general-purpose computation, several kinds of interfaces are available, including the Open Computing Language (OpenCL), DirectCompute, and CUDA. These interfaces share the same purpose although they expose different programming interfaces. OpenCL, introduced by the Khronos Group, is a parallel programming standard used on CPUs, GPUs, Digital Signal Processors (DSPs), and some other processor types [16]. DirectCompute is an API that serves as the standard GPU computing platform for Windows, namely Windows 7 and Windows Vista; this means the API cannot be used on other platforms and has limited libraries, examples, and bindings [16],[17]. CUDA, introduced by NVIDIA, is a parallel programming model, namely a general-purpose parallel computing architecture for NVIDIA graphics cards or GPUs [18],[19],[20]. According to Jayshree et al., CUDA is better than OpenCL and DirectCompute based on several considerations, including flexibility across various platforms, the availability of documentation and examples, full support from NVIDIA, the availability of built-in libraries, debugging with advanced tools, and support for the existing constructs of C/C++ [21]. Moreover, the CUDA programming model supports mixing the execution of operations on the CPU and the GPU. CUDA consists of both a hardware and a software model allowing the execution of computations on modern NVIDIA GPUs in a data-parallel fashion.

Fig. 3. CUDA Architecture


"NVIDIA CUDA is a SDK (Software Development Kit) released by graphics hardware manufacturer NVIDIA with the purpose of making it possible for programmers to accelerate general-purpose computations by using the computational resources available to modern GPUs" [22]. CUDA programming is an interface for accessing the GPU's parallel computing capabilities by writing code that runs directly on the device. In CUDA terms, the GPU is called the device, whereas the CPU is called the host. The CUDA architecture, shown in Figure 3, comprises several parts [23]. A number of optimized libraries for CUDA are provided by NVIDIA, such as FFT, BLAS, math.h, and so on. The core of the CUDA architecture is the NVIDIA C compiler (NVCC). As mentioned earlier, a CUDA program is mixed GPU and CPU code, and the NVCC compiler separates the GPU code from the CPU code. The CPU compiler compiles the CPU code, while the GPU code is compiled into the GPU's computing assembly code, PTX. The GPU code that runs is supported by the CUDA driver. CUDA emerged as a programming model that extends the C language, written especially for NVIDIA graphics cards. In the GPGPU environment, the basic programming languages used were C or C++ based [23]. On the other hand, object-oriented programming models are widely used, so it is natural to explore how CUDA-like capabilities can be made accessible to those programmers as well [24]. Yan et al. proposed a programming interface called "JCuda" that Java programmers can use to invoke CUDA kernels [24]. The Java Native Interface (JNI) is used as a bridge to CUDA through C in the Java programming environment [24]. The development process for accessing CUDA via JNI involves writing the Java code and the JNI stub code in C to be executed on the CPU host, as well as the CUDA code to be executed

45

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

on the GPU device. Besides invoking the CUDA kernel, the memory allocation process, memory freeing, and data transfer between host and device are also required.

V.

THE FRAMEWORK OF PARALLEL ASORR WITH JCUDA BINDINGS

The proposed framework of this study refers to the CUDA framework proposed by Zhao et al. [25]. The framework is divided into three stages: a CPU part that initializes the sequential ASORR algorithm, a GPU part that performs the parallel construction of the discernibility function of the ASORR algorithm, and a final CPU part that performs the PSO/ACO optimization. The GPU processes the data from the CPU, turning the sequential looping process into a parallel process. A set of combination vectors is used to represent the dataset; it is transferred from the CPU to the GPU in order to compute the discernibility function in parallel, and the result of the calculation done on the GPU is transferred back to the CPU to be stored and displayed. Figure 4 shows, step by step, the whole flow of the parallel approach for the ASORR algorithm. The development flow presented here starts by isolating the discernibility function computation within the ASORR process. Generally, on the CPU there is a set of matrices in sequential form processed inside a loop. The CPU host initializes the existing matrix pattern formed in the array vector computation, allocates the array that will be executed on the GPU, and copies the allocated array from the host to the device. In the matrix allocation process, the size of the matrix used in the sequential execution is allocated first, and this allocation is then transformed into a set of matrices to be computed on the GPU. Blocks and threads have a two-dimensional structure, and this study converts the two-dimensional matrix into a one-dimensional matrix to declare the device variables, because the JCuda programming structure used here only supports one-dimensional arrays. To perform the computations, the host invokes the GPU kernel function, which is then executed by a number of threads at the same time. Thereafter, the computation results generated by the GPU are transferred back to the host and displayed. The relevant code to call the kernel function is as follows:

// Call the kernel function.
int blockSizeX = 512;
int gridSizeX = (int) Math.ceil((double) numThreads / blockSizeX);
cuLaunchKernel(function,
    gridSizeX, 1, 1,      // Grid dimension
    blockSizeX, 1, 1,     // Block dimension
    0, null,              // Shared memory size and stream
    kernelParams, null    // Kernel- and extra parameters
);
cuCtxSynchronize();

Fig. 4. Development flow of parallel approach

After the initialization part and the kernel invocation are in place, the parallel code can be executed. The code is compiled into a form that the GPU can execute, namely from a .cu file. In the original C-based CUDA, the code can be compiled directly and processed by the GPU. In this study, the kernel code used from JCuda is first compiled by nvcc into a PTX file so that the GPU can process the computation. The execution flow for compiling the kernel code into a PTX file with nvcc and loading it is as follows:

// Create the PTX file by calling the NVCC
String ptxFileName = preparePtxFile("JCudaIMPosKernel1D.cu");

// Initialize the driver and create a context for the first device.
cuInit(0);
CUcontext pctx = new CUcontext();
CUdevice dev = new CUdevice();
cuDeviceGet(dev, 0);
cuCtxCreate(pctx, 0, dev);

// Load the ptx file.
CUmodule module = new CUmodule();
cuModuleLoad(module, ptxFileName);
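The paper's listings show the kernel launch and the module loading; the host-side allocation and transfer steps described in the text could look roughly like the JCuda sketch below. The kernel name, array contents and parameter layout are illustrative assumptions, not the study's actual code.

import static jcuda.driver.JCudaDriver.*;
import jcuda.Pointer;
import jcuda.Sizeof;
import jcuda.driver.*;

public class JCudaHostSketch {

    // Assumes cuInit/context creation and cuModuleLoad(module, ptxFileName) were already
    // done as in the listing above; "discernibilityKernel" is a hypothetical kernel name.
    static int[] runKernel(CUmodule module, int[] hostIn, int numThreads) {
        CUfunction function = new CUfunction();
        cuModuleGetFunction(function, module, "discernibilityKernel");

        // Allocate device memory and copy the flattened (1-D) input array to the device.
        CUdeviceptr dIn = new CUdeviceptr();
        cuMemAlloc(dIn, hostIn.length * Sizeof.INT);
        cuMemcpyHtoD(dIn, Pointer.to(hostIn), hostIn.length * Sizeof.INT);

        CUdeviceptr dOut = new CUdeviceptr();
        cuMemAlloc(dOut, numThreads * Sizeof.INT);

        // Kernel parameters: element count, input array, output array.
        Pointer kernelParams = Pointer.to(
            Pointer.to(new int[]{hostIn.length}),
            Pointer.to(dIn),
            Pointer.to(dOut)
        );

        int blockSizeX = 512;
        int gridSizeX = (int) Math.ceil((double) numThreads / blockSizeX);
        cuLaunchKernel(function,
            gridSizeX, 1, 1, blockSizeX, 1, 1,
            0, null, kernelParams, null);
        cuCtxSynchronize();

        // Copy the result back to the host and release device memory.
        int[] hostOut = new int[numThreads];
        cuMemcpyDtoH(Pointer.to(hostOut), dOut, numThreads * Sizeof.INT);
        cuMemFree(dIn);
        cuMemFree(dOut);
        return hostOut;
    }
}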

VI.

EXPERIMENTAL RESULT

In the experiments, parallel ASORR is applied to several datasets. The experiment is implemented on an Intel Quad Core CPU with an NVIDIA GeForce 560 display card and 1 GB of main memory. The average performance is reported in this paper. The experiment results indicate that parallel ASORR achieves acceleration. The execution time taken in


this experiment is measured from the processing of the discernibility function until the reducts are obtained. Table I shows the execution time of the parallel ASORR algorithm for each dataset run on the GPU, as well as of the sequential ASORR algorithm run on the CPU. The running time of the ASORR algorithm is affected more by the size of the data than by the problem dimension (number of features) in the PSO part, and most of the time is spent on the basic rough set computations, i.e., computing equivalence classes [26]. The results show that the parallel ASORR algorithm spends relatively less execution time than the sequential ASORR algorithm on the larger datasets.

TABLE I. EXECUTION TIME OF ASORR-CPU AND PARALLEL ASORR-GPU

No | Dataset       | Features | Instances | Execution Time (seconds)
   |               |          |           | ASORR-CPU | PASORR-GPU
1  | Corral        | 6        | 64        | 0.5016    | 0.557
2  | Lung          | 56       | 32        | 0.8044    | 0.4658
3  | Soybean-small | 35       | 47        | 0.8702    | 0.5655
4  | Zoo           | 16       | 101       | 1.0867    | 1.1617
5  | Lymphography  | 18       | 148       | 9.7274    | 2.7392
   | Average       |          |           | 2.6       | 1.1

The most significant reduction in execution time appears in the Lymphography dataset, followed by the Lung and Soybean-small datasets. The average decrease in execution time reaches 2.5436 seconds. For the Corral and Zoo datasets, the execution time of parallel ASORR increases, since these two datasets need relatively more time for the initial compilation of the sequential code into a PTX file to invoke the kernel; the average time added by compiling the PTX file is 0.28 seconds. This leads to an overall GPU execution time that is higher than the CPU execution time for these datasets.

VII. CONCLUSIONS

In this paper, the concept of reducts in rough set theory and the hybrid optimization algorithm were discussed. The proposed framework for the parallel ASORR algorithm has been implemented with GPGPU technology, on a GPU card provided by NVIDIA, using the CUDA platform as the programming model and JCuda bindings over JNI. The results show that this framework is able to accelerate the execution time by parallelizing the discernibility function computation.

REFERENCES
[1] J. Han and M. Kamber, Data Mining: Concepts and Techniques, vol. 54. Morgan Kaufmann, 2006, p. 258.
[2] U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth, "From Data Mining to Knowledge Discovery in Databases," AI Magazine, vol. 17, no. 3, pp. 37-54, 1996.
[3] H. K. Tripathy, B. K. Tripathy, and P. K. Das, "An Intelligent Approach of Rough Set in Knowledge Discovery Databases," Engineering and Technology, pp. 215-218, 2007.
[4] N. Shan, W. Ziarko, H. J. Hamilton, C. Science, and R. Regina, "Using Rough Sets as Tools for Knowledge Discovery," pp. 263-268, 1995.
[5] L. Pratiwi and Y. Choo, "An Empirical Study of Density and Distribution Functions for Ant Swarm Optimized Rough Reducts," Software Engineering and Computer, pp. 590-604, 2011.
[6] Y. Yao and Y. Zhao, "Discernibility Matrix Simplification for Constructing Attribute Reducts," vol. 179, no. 5, pp. 867-882, 2009.
[7] C. R. A. Skowron, The discernibility matrices and functions in information systems, In: R. Slo. 1992.
[8] P. Shelokar, P. Siarry, V. Jayaraman, and B. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, May 2007.
[9] J. Nickolls and W. J. Dally, "The GPU Computing Era," IEEE Micro, pp. 56-70, 2010.
[10] D. Patterson, In Praise of Programming Massively Parallel Processors: A Hands-on Approach. 2010.
[11] Y. Zhou and Y. Tan, "GPU-based parallel particle swarm optimization," 2009 IEEE Congress on Evolutionary Computation, no. 1, pp. 1493-1500, May 2009.
[12] Y. You, "Parallel Ant System for Traveling Salesman Problem on GPUs," Eleventh Annual Conference on Genetic and Evolutionary Computation, pp. 1-2, 2009.
[13] W. Fang, M. Lu, X. Xiao, B. He, and Q. Luo, "Frequent Itemset Mining on Graphics Processors," Data Management, 2009.
[14] L. S. Globa, K. Iermakova, and T. Kot, "Parallel Computing Process Algorithm," Computing, vol. 1, pp. 467-469, 2008.
[15] K. S. Perumalla, "Discrete-event execution alternatives on general purpose graphical processing units (GPGPUs)," Work, 2006.
[16] N. G. Dickson and F. Hamze, "A Performance Comparison of CUDA and OpenCL," Read, no. 1, 2011.
[17] C. San Jose, "Direct Compute: Bring GPU Computing to the Mainstream," GPU Technology Conference, 2009. [Online]. Available: http://www.nvidia.com/content/GTC/documents/1015_GTC09.pdf
[18] J. Nickolls, "NVIDIA CUDA software and GPU parallel computing architecture," Microprocessor Forum, May 2011.
[19] G. Ruetsch and B. Oster, "Getting Started with CUDA: What is CUDA?," 2008.
[20] J. Sanders and E. Kandrot, "CUDA by Example: An Introduction to General Purpose Graphical Processing Unit," 2010.
[21] J. G. J. P. M. K. A. Bawaskar, "GPGPU Processing In CUDA Architecture," International Journal, vol. 3, no. 1, pp. 105-120, 2012.
[22] R. M. Weiss, "GPU-Accelerated Data Mining with Swarm Intelligence," Intelligence, no. May, 2010.
[23] L. Wenna, "A CUDA-based Multi-Channel Particle Swarm Algorithm," Electrical Engineering, 2011.
[24] Y. Yan, M. Grossman, and V. Sarkar, "JCUDA: A Programmer-Friendly Interface for Accelerating Java Programs with CUDA," pp. 887-899, 2009.
[25] Y. Zhao, Z. Cheng, H. Dong, J. Fang, and L. Li, "Fast Map Projection on CUDA," pp. 4066-4069, 2011.
[26] X. Wang, J. Yang, X. Teng, W. Xia, and R. Jensen, "Feature selection based on rough sets and particle swarm optimization," Pattern Recognition Letters, vol. 28, no. 4, pp. 459-471, Mar. 2007.


Loose-Coupled Push Synchronization Framework to Improve Data Availability in Mobile Database

Fahri Firdausillah, Faculty of Computer Science, Dian Nuswantoro University, Semarang, Central Java, Indonesia, Email: [email protected]
Norhaziah Md. Salleh, Faculty of Information and Technology, Universiti Teknikal Malaysia Melaka, Durian Tunggal, Malacca, Malaysia, Email: [email protected]

Abstract—Mobile databases have to synchronize with a server-side database to keep their local data up-to-date. The synchronization can be performed in two ways: (a) synchronization initiated periodically by the client, called pull synchronization, and (b) synchronization initiated by the server whenever new data is available, called push synchronization. Pull synchronization is easier to implement but has several drawbacks, such as wasting resources, and the client will not be aware of new data in the server-side database until the next scheduled synchronization. Push synchronization enables the mobile application to use the newest data in near real time, since the server initiates the synchronization process whenever new data arrive at the server. In many cases, adopting push synchronization in a currently used system forces the developer to change the existing application, which is impractical and error prone. This paper presents a synchronization framework that implements push synchronization in an existing system without rewriting a single line of code in the application. The framework employs open technologies both for the synchronization technique (SyncML) and for the push communication protocol (the Jabber/XMPP protocol). As a result of using open technologies, the presented framework can be implemented in any programming language and is compatible with any mobile device platform. Keywords-component; Synchronization, Mobile Database, SyncML, XMPP, Push Technology

I.

INTRODUCTION

Mobile devices are now widely used to replace the computer when the user is away from his/her desktop/laptop computer [1]. Many desktop applications are available on modern mobile devices such as the iPhone and Android-compatible smartphones. Currently, many mobile devices have processors faster than 1 GHz and storage of more than 8 GB. This enables mobile devices to run complicated software such as mobile customer relationship management [2]. However, mobile devices rely on wireless access, which has a high chance of disconnection [3], whether intentional (e.g., when on a plane) or not (e.g., out of coverage, running out of credit, etc.). This connection problem disrupts the user's work that needs the Internet connection. To overcome that problem, a mobile database is used to cache the required server-side data on the mobile device, so that the mobile application can still be used even when there is no connection to the server-side database. To ensure the mobile applications are using the latest data, the mobile database has to be synchronized to the server-


side database. That synchronization event can be performed either pull-based or push-based. In pull synchronization, the mobile application initiates the event by sending a synchronization request at certain scheduled times, whereas push synchronization is initiated by the server application whenever new data is available in the server-side database. Pull-based synchronization is easier to implement since it follows the same concept as an HTTP request-response connection, but there is an issue in terms of data availability, since the client will not be aware of new data until the next synchronization event. Several studies have already implemented push synchronization to overcome the mentioned problems of pull synchronization, such as [4], [5], and [6]. As a result, the mobile applications automatically update their local data whenever new data arrive in the server-side database. Unfortunately, the implementations of push synchronization in those studies demand changing or even replacing the current system. On the other hand, the management of companies and organizations will hesitate to change or replace their current system with a new one if the new system is not yet proven. The background of this paper is the problem of tightly coupled push synchronization: known implementations of push synchronization are not compatible with other systems, therefore adopting such a push synchronization feature in an existing system requires much work and many program changes. The framework proposed in this paper utilizes several open technologies and techniques to ensure their loosely coupled nature. As a result, by using this framework, the developer can easily implement the push synchronization feature in an existing system without rewriting a single line of code. II.

SYNCHRONIZATION IN MOBILE ENVIRONMENT

A mobile database is another form of distributed database; thus, a synchronization process to keep the local data identical to the server data is inevitable. Synchronization also ensures that the data retrieved from the local database are the latest data taken from the server-side database [7]. In every situation, data synchronization is vital for interactive networked applications because it is about exchanging data and resolving conflicts as they occur. Furthermore, Shah [8] explains the objectives of using synchronization in a distributed computing environment as below:


1. Keep the data up-to-date on both the client and server side: Synchronization ensures that the data held in the server-side and client-side databases are identical. If there are any changes on one side, synchronization is responsible for notifying and changing the data on the other side.
2. Reduce network traffic: The local/client-side application needs only to access its local database to do its work. Accessing the server-side database is only done during the synchronization process, which saves network bandwidth.
3. Faster response time: It is obvious that accessing local data is faster than accessing networked data, especially for mobile devices that have limitations in network communication.
4. Reliable data: Since the data are kept up-to-date by synchronization, the mobile application can more confidently deliver the latest data as a result of the last successful synchronization process.
5. Managing time conflicts between nodes: In a distributed system environment, managing time across several nodes is important to avoid conflicts arising from time differences. Synchronization can check the differing timestamps between client and server and unify them to resolve time conflicts.
6. Ensuring Quality of Service (QoS) in the application: With all the previous advantages, synchronization ensures the QoS of the delivered client application and makes sure that the results of working with the mobile application and with other applications will be the same.

The mobile environment has unique characteristics compared to desktop devices, such as a higher chance of disconnection, smaller bandwidth, and smaller storage size. For that reason it is important to adopt different synchronization techniques for replicated mobile databases [9]. Some considerations in designing synchronization for mobile databases are described in [8] as follows:
1. Robust protocol: Mobile devices use a wireless network to communicate with the server-side database. As commonly known, wireless networks have high network latency, which slows down packet transfer. Having a robust synchronization protocol is very important to cope with that latency problem.
2. Support for packet compression: One of the limitations of mobile devices is bandwidth. Keeping transferred packets small, even for large pieces of information, is preferable. One of the encoding techniques for saving transferred packet size is WBXML (WAP Binary XML) [10].
3. Fewer request-response rounds: Packet cost in the mobile environment is also relatively high compared to the desktop environment. An optimal protocol would generate a single request-response pair of messages that nevertheless contains all updates to the local data.
4. Survive low-reliability connectivity: The connection in a mobile environment is likely to be unpredictable. A good synchronization technique has to ensure that the device and the networked data repository stay consistent and continue the synchronization process when


the connection is reestablished.
5. In terms of the variety of mobile platforms, a mobile synchronization protocol should be able to adapt to any platform, on both the server side and the client side. The protocol should be able to synchronize any mobile platform with any networked data source, so the developer does not need to find another synchronization technique for a different mobile device or operating system [8]. III.

OPEN SYNCHRONIZATION USING SYNCML

SyncML is a common synchronization protocol, meaning it does not depend on any particular platform or programming language. The SyncML standard protocol is defined by the Open Mobile Alliance to solve the synchronization problem between different devices and application systems. The most basic synchronization processes in SyncML are shown in Figure 1. The synchronization events in SyncML are performed in two directions: (a) the mobile device transmits a synchronization message to the server, including its most recent modifications; (b) the server synchronizes the server-side data using the data transmitted from the client and responds with a SyncML message, including its own modifications.

Figure 1. Two synchronization ways in SyncML [11]

There are three main synchronization processes in SyncML: Sending Anchor, ID Mapping, and Conflict Detection. The Sending Anchor process is used to check whether there has been a successful synchronization between a particular client and the server database; it is important for tracing what data have already been synchronized to a mobile device. SyncML uses two anchors to navigate the synchronization history, namely the "last anchor" and the "next anchor". The last anchor points to the timestamp of the last successful synchronization, whereas the next anchor points to the current synchronization. The anchors are sent at the start of the synchronization process, and the server determines whether the synchronization will use slow synchronization, i.e., a full synchronization, or fast synchronization, in which only recent changes are transferred. If the last anchor differs between the client side and the server side, the server assumes that either no previous successful synchronization exists or the previous synchronization process was not successfully committed, which potentially harms the consistency of the existing data. Another main synchronization process is ID Mapping. It is important because every node generates its own IDs in its local database, as does the server-side database. ID mapping is required to ensure that both client and server point to the same data. SyncML uses a LUID (local unique ID) to identify the ID used in the local database and a GUID


(global unique ID) to point to the exact data in the server-side database. A sample of ID mapping during a synchronization event is depicted in Figure 2. The last main synchronization process, and the most critical phase of synchronization, is conflict detection and resolution. This is the phase in which SyncML ensures that the data will be the same on both the client and the server side. In a mobile environment, conflicts are inevitable since the mobile wireless connection has a high chance of disconnection. For that reason, the importance of conflict reconciliation, which includes conflict detection and resolution, is also higher in mobile applications.
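The anchor comparison and LUID-GUID mapping described above might be organized on the server side roughly as in the following Java sketch. The class and field names are illustrative only and are not part of the SyncML specification or the authors' implementation.

import java.util.HashMap;
import java.util.Map;

public class SyncSessionSketch {

    // LUID (client-local ID) -> GUID (server-side ID) for one mobile client.
    private final Map<String, String> idMap = new HashMap<>();

    // Last anchor stored on the server for this client after the previous successful sync.
    private String storedLastAnchor;

    // Decide between fast sync (only recent changes) and slow sync (full resync),
    // based on the last anchor sent by the client at the start of the session.
    boolean needsSlowSync(String clientLastAnchor) {
        return storedLastAnchor == null || !storedLastAnchor.equals(clientLastAnchor);
    }

    // After a successful session, remember the client's "next anchor" as the new last anchor.
    void commit(String clientNextAnchor) {
        storedLastAnchor = clientNextAnchor;
    }

    // Record the mapping reported by the client for a newly created record.
    void map(String luid, String guid) {
        idMap.put(luid, guid);
    }

    String toGuid(String luid) {
        return idMap.get(luid);
    }
}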

Figure 2. Mapping ID Example [11]

The open nature of the SyncML framework is one of the important keys to ensuring loosely coupled synchronization in a mobile environment. SyncML utilizes XML as the container for synchronization data, and, as commonly known, XML can be parsed by many programming languages. IV.

XMPP FOR PUSH COMMUNICATION

XMPP (Extensible Messaging and Presence Protocol) is originally a common messaging protocol used in instant messaging, such as GoogleTalk and Facebook Chat. Because of its openness, XMPP can also be used for real-time interactive systems and complex cloud computing systems [12]. XMPP has several advantages compared to other chat protocols. Saint-Andre [13] explains the advantages of using XMPP as a messaging protocol as below:
1. Proven: XMPP has been used and tested for over 10 years. It is mature enough to be used in real environments, from small-scale systems to big enterprise systems.
2. Secure: several aspects of secure communication and data transfer are provided in XMPP, for example channel encryption, strong authentication,


and inherent resistance to malware.
3. Decentralized: XMPP uses a client-server architecture with an unlimited number of clients. An XMPP server can be installed and connected using standard Internet infrastructure.
4. Extensible: the messages/information sent over XMPP are in XML format, and the usage is not restricted to instant messaging only, but also covers alert notification, syndication, VoIP, and many other areas as described by Hornsby and Walsh in [12].
5. Scalable: HTTP-based polling applications have scalability problems as the number of user requests for a service increases. XMPP, on the other hand, uses a push-based connection, which solves that problem.
6. Standard: XMPP's core aspects have been approved and adopted by the Internet Engineering Task Force (IETF). The protocols that provide the basic functionality of XMPP are described in Request for Comments (RFC) 3920 through 3923. This also means that XMPP is open to anyone who wants to add or implement extensions and license their products as open source or proprietary.
7. Community: XMPP is open for everyone to improve. "Everyone" here means end users who submit problems, developers who fix bugs, and service providers that support its development. This last aspect enables XMPP to be continuously improved with required features.

XMPP uses XML to carry data and information about the communication. Once a client connects to an XMPP server, they exchange three basic XML snippets called XMPP stanzas [14], namely <message/>, <presence/>, and <iq/>. When an XMPP client connects to an XMPP server, they create a socket connection to exchange XML streams. The client can then send an unlimited number of these stanzas to the server to retrieve personal data, get offline messages, or send messages to other clients. This basic XMPP messaging enables one client to push data to the others. Thus, from the XMPP server's point of view, the server-side synchronization engine is treated as just another XMPP client that can push synchronization data to the other clients.
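As an illustration of how the middleware layer's XMPP client might push a notification through the server, the following is a rough sketch using the Smack 3.x client library named in Section V. The server address, credentials, JID and message body are placeholders, and the exact API may differ between Smack versions.

import org.jivesoftware.smack.Chat;
import org.jivesoftware.smack.MessageListener;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smack.packet.Message;

public class SyncNotifierSketch {

    public static void main(String[] args) throws XMPPException {
        // Connect and log in to the XMPP server (placeholder host and account).
        XMPPConnection connection = new XMPPConnection("xmpp.example.org");
        connection.connect();
        connection.login("sync-engine", "secret");

        // Open a chat with a mobile client's JID and push a "new data available" notice.
        Chat chat = connection.getChatManager().createChat(
                "[email protected]",
                new MessageListener() {
                    public void processMessage(Chat c, Message m) {
                        // Replies from the mobile client (e.g., SyncML payloads) arrive here.
                        System.out.println("Received: " + m.getBody());
                    }
                });
        chat.sendMessage("sync-notify");   // the body format is an illustrative assumption

        connection.disconnect();
    }
}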

V. FRAMEWORK COMPONENTS

There are four major components used in this synchronization framework: the middleware layer, the XMPP server, the synchronization tables in the database server, and the mobile application. The upper layer of the synchronization components shown in Figure 3 is the current server-side application that already exists. The communication component consists of the XMPP server and the XMPP client libraries. OpenFire is used as the XMPP server implementation to manage the publish-subscribe messaging service, whereas SMACK is the XMPP client library implemented in the middleware layer and JXA is the J2ME-based XMPP client utilized in the mobile application.


Figure 3. Component Diagram of the Framework

To store every change made in the database and at the same time keep track of the synchronization, synchronization tables are needed on both the server and the client. Figure 4 displays the three additional tables utilized for synchronization. The sync table holds the logs or histories of changes that happen in the database. The sync anchor table keeps the synchronization anchor for each mobile client, and the map id table stores the local ID and global ID of each record in the tables. Only two tables (i.e., the sync table and the sync anchor table) are used in the server-side database, whereas in the mobile database all three tables are needed. Triggers are employed to automatically insert data into these tables, since a trigger is automatically fired on a row-level change and, in fact, all mature database management systems (DBMS) provide trigger features.

Figure 4. Synchronization Table's Schema

Communication between the mobile client-side application and the server-side components is established through the XMPP protocol; thus, an XMPP server is needed here. The XMPP server manages communication between one client and the others. The XMPP server is responsible for establishing the XML stream communication, administering the communication, and managing users and rosters. There are three elements in the middleware layer, which


are the XMPP client, the message translator, and the file watcher. The XMPP client has the central role in communication with the XMPP server, which also mediates the communication with all mobile client applications. Once the XMPP client establishes the stream socket communication with the XMPP server, the SMACK client library provides a message listener and a roster listener. The message listener is invoked whenever a new message arrives (in this case, the message is then passed to the message translator), while the roster listener is invoked when a roster item (a contact) changes its status (to online or offline, for example). The message translator is responsible for parsing the message passed from the XMPP client, checking the validity of the anchor, and eventually submitting a query to be processed in the database. Although it submits queries to the server-side database, the message translator only reads query results from the sync table, since its role is only for synchronization purposes. As a result, the message translator produces a SyncML synchronization message that is sent back to the client via the XMPP server. To continuously detect changes in the synchronization table, the middleware layer employs a file watcher. This component runs an infinite loop to check for changes in a certain file, which indicates the existence of new data in the server-side database. The same functionality could also be achieved by endlessly sending select queries to the sync table, but that approach is impractical since it can overload the database server communication. Therefore, the file watcher is designed to be independent of the other middleware elements to avoid exhausting network resources. As shown in Figure 3, the framework is built externally to the current system except for the triggers. However, the tasks of the triggers are only to write the changes of the respective tables into the synchronization table, so the possibility of interfering with the database logic is quite small. The design of the framework allows the developer to implement it in any existing system without changing that system.
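The file watcher described above can be pictured with a minimal Java sketch such as the following; the marker file name and polling interval are illustrative assumptions.

import java.io.File;

public class FileWatcherSketch implements Runnable {

    private final File watched = new File("sync-changes.flag");   // hypothetical marker file
    private long lastModified = 0;

    public void run() {
        // Infinite polling loop: when the marker file changes, new data exist in the
        // server-side database and a synchronization push should be triggered.
        while (true) {
            long current = watched.lastModified();   // 0 if the file does not exist
            if (current != lastModified) {
                lastModified = current;
                onChange();
            }
            try {
                Thread.sleep(1000);                  // illustrative 1-second polling interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private void onChange() {
        // In the framework this would notify the message translator / XMPP client.
        System.out.println("Change detected; trigger push synchronization.");
    }
}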


VI.

CONCLUSION AND FUTURE WORKS

This paper presents one way to implement push synchronization, namely by utilizing an XMPP server as the transport medium between server and client. The XMPP server is placed in the middle between client and server; both the client and the middleware layer use an XMPP client library to communicate with the XMPP server. From the XMPP server's point of view, the middleware layer is also an XMPP client that can send messages to the other clients, and thus the middleware layer can push new-data notifications to the mobile application through the XMPP connection. This paper also shows that, by utilizing the middleware layer as the synchronization engine and using triggers to automatically record changes in the server-side database, the synchronization feature can be attached to an existing system without rewriting the current application. The middleware layer and the additional tables are external components that extend the functionality of the existing application without directly affecting the system. Possible future work includes local database encryption and message compression, which are intended to overcome the limitations of the mobile environment in terms of security and network capability. Local database encryption is required to prevent an unauthorized person from seeing the local data even by directly accessing the database file, whereas message compression such as GZIP and HPack is important in mobile environment communication, which has network limitations, so that as much bandwidth as possible can be saved.

REFERENCES
[1] M.-Y. Choi, E.-A. Cho, D.-H. Park, C.-J. Moon and D.-K. Baik, "A database synchronization algorithm for mobile devices," IEEE Transactions on Consumer Electronics, pp. 392-398, 2010.
[2] P. Alahuhta, H. Helaakoski and A. Smirnov, "Adoption of Mobile Services in Business: Case study of Mobile CRM," in IEEE International Conference on e-Business Engineering, Washington, DC, USA, 2005.
[3] A. Trifonova and M. Ronchetti, "A General Architecture to Support Mobility in Learning," in IEEE International Conference on Advanced Learning Technologies, Washington, DC, USA, 2004.
[4] D. Gomes, "XMPP based Context Management Architecture," in Social Networks, 2010.
[5] M.-f. Horng and Y.-t. Chen, "A New Approach based on XMPP and OSGi Technology to Home Automation on Web," in Information Systems, 2010.
[6] M. Pohja, "Server push with instant messaging," in 2009 ACM Symposium on Applied Computing - SAC '09, 2009.
[7] W. Shun-yan, Z. Luo and Yong-liang, "A Data Synchronization Mechanism for Cache on Mobile Client," in Database, 2006.
[8] C. Shah, "Enhancing Sync4j: An Open Source SyncML Project," Leeds Metropolitan University, 2003.
[9] M. Jin, X. Zhou, J. Zhou, X. Gao and C. Gong, "Strategy of Conflict Preprocessing and Reconciliation for Mobile Databases," pp. 1-6, 2008.
[10] B. Martin and B. Jano, "WAP Binary XML Content Format," [Online]. Available: http://www.w3.org/TR/wbxml/. [Accessed 27 December 2011].
[11] S. Fornari, Funambol Mobile Open Source, Packt Publishing, 2009.
[12] A. Hornsby and R. Walsh, "From Instant Message to Cloud Computing, an XMPP review," in International Symposium on Consumer Electronics, 2010.
[13] P. Saint-Andre, K. Smith and R. Troncon, XMPP: The Definitive Guide, United States of America: O'Reilly Media, Inc., 2009.
[14] O. Ozturk, "Introduction to XMPP protocol and developing online collaboration applications using open source software and libraries," in 2010 International Symposium on Collaborative Technologies and Systems, 2010.


Feature Extraction on Offline Handwritten Signature using PCA and LDA for Verification System

Fajrian Nur Adnan 1; Erwin Hidayat 1; Ika Novita Dewi 1; Azah Kamilah Muda 2

1 Faculty of Computer Science, Universitas Dian Nuswantoro, Semarang, Central Java, Indonesia
[email protected] research.dinus.ac.id; [email protected]; [email protected]

2 Center for Advanced Computing Technology, Faculty of Information and Communication Technology, Universiti Teknikal Malaysia Melaka, Malacca, Malaysia
[email protected]

Abstract—A handwritten signature verification system is one biometric alternative to conventional, vulnerable verification systems. Biometric verification is more secure because forgery and theft are difficult. The handwritten signature is one of the biometric identifiers most frequently used in applications because it is convenient to use. Nevertheless, the handwritten signature is a behavioral biometric that can be forged by others relatively easily. Each handwritten signature has its own characteristics based on behavior, keystroke, and personal psychology. This research aims to apply PCA and PCA+LDA to extract features from offline handwritten signatures and to use the extracted features in a verification system. This research also compares the false accept rate (FAR) and the false reject rate (FRR) to determine the better feature extraction technique for the verification system. The results show that extracting offline handwritten signature features with the PCA+LDA hybrid is better than with PCA alone, because PCA+LDA yields a lower false accept rate (FAR) in the verification system.

Keywords—offline handwritten signature verification; PCA; LDA; feature extraction

I. INTRODUCTION

Verification system is one of the important parts in a secure system and resource access control. Biometric verification system presents in order to provide or replace the conventional verification system. Biometric verification system becomes a solution of personal identification system that hard to remember and easy to burglarized. Many applications have applied biometric verification system by using fingerprint, voice verification system, and iris verification system. Each person has a unique biometric characteristic that differentiate each other’s. Biometric identifications are used in security sector for assessing the verifications. The advantages in using biometrics in verification system are reducing the possibility to be forgotten, left, and forged. The used of biometric identification is more secure and efficient compared to the used of personal identification that hard to remember and possible to a forgery and the use of a card identification that possible to be left. Some of the biometric identifications are fingerprint, face, iris, voice, and handwritten signature; however handwritten signature is frequently used in a document verification system. Handwritten signature is behavioral biometric that used non biological feature as an input on a verification system [1].


Handwritten signature verification systems have been widely used because of their simplicity and ease of development. A handwritten signature represents the legality of a document. Along with the development of security systems that use the handwritten signature to control access, criminal actions such as handwritten signature forgery have appeared, aimed at gaining permission to access a system. This is the disadvantage of using a behavioral biometric compared to a physiological biometric. A physiological biometric is hard to forge since each individual has his or her own characteristics, whereas the use of a behavioral biometric makes it possible for others to commit forgery, including handwritten signature forgery. In fact, each handwritten signature is based on behavior, keystroke, and personal psychology, and these characteristics make it difficult to imitate. The popular feature extraction techniques are PCA and LDA. The success of both techniques in extracting features for recognition gives the authors the idea of applying them in a verification system that decides whether a signature was written by the original writer or has been forged. The better extracted feature is the one that produces the minimum false accept rate and false reject rate.

II. OFFLINE HANDWRITTEN SIGNATURE VERIFICATION

Each individual has its own biometric identifications that can be measured physically and based on its behavior [2]. Behavioral biometrics is able to be measured based on its behavior, such as handwritten signature, voice, gait, and attitude. Physiological biometrics is measured physically, such as face, retina and iris, fingerprint, and palm of the hand.

Figure 1. (a) Online handwritten signature: the signature is written on an electronic tablet using a stylus. (b) Offline handwritten signature: the signature is written on paper and then captured with a scanner or camera.

Based on the input type, handwritten signature biometrics is divided into online and offline recognition [3]. Online handwritten signature recognition extracts temporal information of a signature taken from a stylus and an electronic tablet connected to a computer, such as writing speed, pressure points, and keystrokes [4]. Offline handwritten signatures are taken from a signature written on paper, captured with a camera or scanner, and then processed in digitized form. Offline handwritten signature recognition is an important form of biometric identification that can be applied for various purposes [5]. Many systems already use the handwritten signature to confirm document authenticity, for example in the financial sphere and in psychological analysis. The research in [4] identified five major steps in handwritten signature recognition. The first step is data acquisition: capturing a digital image of the handwritten signature using a camera or scanner. Signature preprocessing includes normalizing and resizing the digital image to proper dimensions, removing background noise, and thinning the signature. Feature extraction uses a feature extraction algorithm to extract the useful information and discard the poor data. Enrollment and training collects several digital images of handwritten signatures on which the extracted features are trained. Performance evaluation assesses the accuracy or recognition rate of the system.

III. FEATURE EXTRACTION

Feature extraction is an important component of a pattern recognition system. In image recognition, this technique makes the classification process effective and efficient, and makes the recognition accurate and easy to implement [6]. Feature extraction is a dimensionality reduction technique that extracts a low-dimensional subset from the high-dimensional original set using a functional mapping, reducing the input dimensions without a significant loss of information [7], [8]. The mapping or transformation of the data may be a linear or nonlinear combination of the original variables [9]. Linear feature extraction is a linear mapping of data from a high- to a low-dimensional space in which class separability is approximately preserved [10]; it removes redundant information and makes the classification result more reliable in the reduced subspace. LDA and PCA are the two popular feature extraction methods; both extract features by projecting the original parameter vectors into a new feature space through a linear transformation matrix.

A. Principal Component Analysis (PCA)

PCA was first introduced by Pearson in 1901 and experienced several modifications until it was generalized by Loève in 1963. PCA is also known as Eigenspace Projection, the singular value decomposition (SVD), or the Karhunen-Loève (KL) transformation [11]. PCA is a dimensionality reduction technique that transforms the patterns into a lower-dimensional space using linear transformations and uses the lower-dimensional representation from that subspace to denote the original data. Discarding the smallest principal components does not lose much meaningful information. The output components of this transformation are orthogonal, and the mean square error is smallest when the original vector is described with these components. PCA is an unsupervised learning algorithm: it discovers the structure of the data set without pre-assigned class labels [12]. The principal components can be computed with the following algorithm [7], [13]:

1) Total scatter matrix computation, from the input data. Some papers use a different formula for the total scatter matrix; in general it is defined as in Equation 1:

    S_T = \sum_{i=1}^{n} (x_i - \mu)(x_i - \mu)^T        (1)

where

    \mu = \frac{1}{n} \sum_{i=1}^{n} x_i        (2)

S_T is the total scatter matrix, x_i is a feature vector of the data matrix X, \mu is the average feature vector of X, and n is the total number of data points in X.

2) Eigenvalue and eigenvector computation, sorting them in descending order with respect to the eigenvalues, as in Equation 3:

    S_T v_i = \lambda_i v_i,  with  \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n        (3)

where V = [v_1, ..., v_n] is the matrix of right eigenvectors (the eigenspace), each corresponding to an eigenvalue \lambda_i.

3) Dimensionality reduction, by keeping only k of the n eigenvectors (k ≤ n). Subsets of the eigenvectors of S_T are used as the basis vectors of a subspace in which gallery and novel probe images are compared. These basis vectors are the principal components (PCs) of the data.

4) Image projection: multiply the original feature space by the obtained transition matrix, which yields a lower-dimensional representation, as in the LDA projection step below.

The PCA method finds the projection matrix V_opt that maximizes the determinant of the total scatter matrix of the projected samples [14], as in Equation 4:

    V_{opt} = \arg\max_V \, |V^T S_T V|        (4)

B. Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis (LDA) is a discriminant function analysis that uses a linear transformation. Unlike PCA, LDA is a classical statistical approach for supervised dimensionality reduction in which each object in the data set comes with a pre-assigned class label [12]. LDA is also a statistical approach for classification that enhances the class-discriminatory information in the lower-dimensional space through a linear transformation. LDA has a problem with singular data, which occurs when the data is high-dimensional and the sample size is small [15]. This problem can be solved by applying an intermediate dimensionality reduction, such as PCA, to reduce the data dimensionality before LDA is applied. This algorithm is known as PCA+LDA, or subspace LDA: the t-dimensional data is first transformed by PCA, which keeps the top p eigenvalues, i.e. the most important information. After the dimension is reduced, the within-class scatter matrix is no longer singular, and LDA can be applied. The main idea of the LDA method is to find the optimal projection direction (transformation) W by maximizing the ratio of the between-class to the within-class scatter matrices of the projected samples [16], [17]: the class means of the projected directions are well separated while a small variance around these means is achieved, i.e. the between-class distance is maximized at the same time. The optimal transformation in LDA can be readily computed by applying an eigendecomposition on the scatter matrices:

    W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|}        (5)

The detailed LDA algorithm is as follows:

1) Within-class scatter matrix calculation, formulated as in Equation 6:

    S_W = \sum_{i=1}^{c} \sum_{x_k \in \text{class } i} (x_k - \mu_i)(x_k - \mu_i)^T        (6)

where

    \mu_i = \frac{1}{p_i} \sum_{x_k \in \text{class } i} x_k        (7)

2) Between-class scatter matrix calculation, computed as in Equation 8:

    S_B = \sum_{i=1}^{c} p_i (\mu_i - \mu)(\mu_i - \mu)^T        (8)

S_B is the between-class scatter matrix and S_W is the within-class scatter matrix. The notation c is the total number of classes in the whole dataset, p_i is the number of samples (population) in class i, and n is the number of images. x_k is the feature vector of a data point, and \mu_i is the mean feature vector of class i.

3) Eigenvector and eigenvalue calculation: the eigenvectors and eigenvalues of the projection matrix are calculated using Equation 9:

    S_W^{-1} S_B w_i = \lambda_i w_i        (9)

4) Reduce the dimension of the data, by reducing the eigenvectors from n dimensions to k dimensions, where k ≤ n.

5) Data projection: multiply the original feature space by the obtained transition matrix, which yields a lower-dimensional representation:

    P = W^T X        (10)

where P is the projected data with the lower, k-dimensional representation.
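The paper's experiments are implemented in Matlab; purely as an illustration of the scatter-matrix computations in Equations (1), (6) and (8) and of the eigendecomposition step, here is a small Java sketch. Apache Commons Math is assumed for the eigendecomposition, and the toy data, class split and names are invented for the example.

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.EigenDecomposition;

public class ScatterSketch {

    // Mean feature vector (Eq. 2 / Eq. 7).
    static double[] mean(double[][] x) {
        double[] mu = new double[x[0].length];
        for (double[] xi : x)
            for (int j = 0; j < mu.length; j++) mu[j] += xi[j] / x.length;
        return mu;
    }

    // Sum over samples of (x - mu)(x - mu)^T, the building block of S_T, S_W and S_B.
    static double[][] scatter(double[][] x, double[] mu) {
        int d = mu.length;
        double[][] s = new double[d][d];
        for (double[] xi : x)
            for (int a = 0; a < d; a++)
                for (int b = 0; b < d; b++)
                    s[a][b] += (xi[a] - mu[a]) * (xi[b] - mu[b]);
        return s;
    }

    public static void main(String[] args) {
        // Toy data: 4 samples of dimension 3 (the paper uses d = 240, n = 550).
        double[][] all = { {1, 2, 0}, {2, 1, 1}, {0, 3, 1}, {1, 1, 2} };
        double[][] classA = { all[0], all[1] };   // pretend the first two samples are one writer
        double[][] classB = { all[2], all[3] };   // and the last two another writer

        double[] mu = mean(all);
        double[][] sT = scatter(all, mu);                    // total scatter, Eq. (1)
        double[][] sWa = scatter(classA, mean(classA));      // per-class scatter terms;
        double[][] sWb = scatter(classB, mean(classB));      // summing them gives S_W, Eq. (6)

        // PCA step 2: eigenvalues/eigenvectors of S_T (Eq. 3); the k largest are kept for projection.
        EigenDecomposition eig = new EigenDecomposition(new Array2DRowRealMatrix(sT));
        double[] lambda = eig.getRealEigenvalues();
        double[] v = eig.getEigenvector(0).toArray();

        // Projection of one sample onto one component (the y = v^T x step of Eqs. 4 and 10).
        double y = 0;
        for (int j = 0; j < v.length; j++) y += v[j] * all[0][j];
        System.out.println("eigenvalues: " + java.util.Arrays.toString(lambda)
                + ", projected coordinate of sample 0: " + y);
    }
}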

IV. EXPERIMENTAL DESIGN AND RESULT

The offline handwritten signature verification system was created and tested using Matlab code. The whole experiment includes image acquisition, image preprocessing, feature extraction, threshold computation, and testing. Each process is described below.

A. Image Acquisition

Digital images of offline handwritten signatures are needed for the experiment. The images can be obtained by capturing handwritten signatures on paper with a scanner, or by using a dataset already provided by other research. In this experiment the data provided in [18] is used. The offline handwritten signatures were obtained from 55 writers, each of whom signed 24 times. The images are provided in PNG format, in various sizes and color types:

    Number of participants : 55
    Total images           : 1320
    Image format           : PNG
    Image size             : various
    Color type             : various (RGB and grayscale)

A sample of offline handwritten signatures is collected from each participant into the train dataset. In order to test the performance of PCA+LDA, 5 original samples and 5 forged samples of each signature are collected into the test dataset.

TABLE I. DETAIL CONTENT OF TEST DATASET

    Writer         Original   Forged   Total images
    1st writer     5          5        10
    2nd writer     5          5        10
    ...            ...        ...      ...
    55th writer    5          5        10
    Total          275        275      550


Figure 2. Original handwritten signature


Figure 3. Forgery handwritten signature

B. Preprocessing

Preprocessing is a necessary step that brings the input data into a form acceptable for feature extraction [19]. The preprocessing stages used in this experiment are size normalization, color normalization and reshaping.

1) Size normalization: the main objective of resizing is to equalize the size of the images to be processed in the feature extraction step. Resizing also minimizes computation and memory consumption, since the images are processed in a small dimension. This experiment uses images of size 12x20 pixels.

2) Colour normalization: the aim of color normalization is to create an image with a single scalar value for each pixel, i.e. to take a single colour channel of the image. To get a single channel from the different colours of light, the image is converted from RGB to grayscale, as in Equation 11:

    Gray = \alpha R + \beta G + \gamma B        (11)

Matlab provides a function to convert RGB into grayscale, using the coefficients \alpha = 0.2989, \beta = 0.5870 and \gamma = 0.1140, so Matlab can be used directly for this conversion. The resulting grayscale image does not look very different from the original, since the original RGB image is already in white and gray colors. What differentiates the two images is the pixel value: the original image contains three values (R, G and B), as represented in Figure 4, while the output image, still white and gray, contains only a single value, as represented in Figure 5.

Figure 4. Properties of RGB image

3) Reshaping: an image that will be extracted using PCA or LDA needs to be converted into 1D, so a reshaping process is needed to convert the images from 2D into 1D. Figure 6 illustrates the reshaping process: each 12x20 image is converted into a vector of size 12x20 = 240, and the 550 images form a matrix with 550 column vectors, i.e. a 240x550 matrix.

Figure 6. Reshaping illustration
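The preprocessing above is done in Matlab in the paper; the following Java sketch only illustrates the two operations just described, Equation (11) with the quoted coefficients and the 2D-to-1D reshaping of a 12x20 image into a 240-element vector (the class, method and variable names are invented for the example).

public class PreprocessSketch {

    // Eq. (11): gray = alpha*R + beta*G + gamma*B, with the Matlab coefficients quoted above.
    static double toGray(double r, double g, double b) {
        return 0.2989 * r + 0.5870 * g + 0.1140 * b;
    }

    // Reshape a rows x cols grayscale image into one column vector (column-major,
    // mirroring the 240x550 matrix built from the 550 images).
    static double[] reshape(double[][] img) {
        int rows = img.length, cols = img[0].length;
        double[] v = new double[rows * cols];
        int k = 0;
        for (int c = 0; c < cols; c++)
            for (int r = 0; r < rows; r++) v[k++] = img[r][c];
        return v;
    }

    public static void main(String[] args) {
        double[][] gray = new double[12][20];
        // ... fill gray[r][c] = toGray(R, G, B) for every pixel of the resized signature image ...
        double[] column = reshape(gray);
        System.out.println("feature vector length: " + column.length);   // 240
    }
}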

Figure 5. Properties of grayscale image

C. Feature Extraction

In order to extract the offline handwritten signature features, the experiment is divided according to the feature extraction technique: PCA, and the hybrid of PCA and LDA (PCA+LDA). Each technique extracts the data in the train dataset into a subspace, i.e. projected images with a lower dimension. The output projected images are vectors with 40 dimensions. Besides the projected images, feature extraction also produces extraction parameters, which are later used to extract the offline handwritten signatures in the test dataset. Feature extraction using PCA produces one extraction parameter, while feature extraction using PCA+LDA produces two extraction parameters.

D. Threshold Computation

The threshold is an important part of the verification system. It is used as a boundary to determine whether a signature was signed by the original person or not. The Euclidean distance is used to determine the threshold: the distance between the transformed image and the original image in the train dataset is used as the threshold of each signature. The Euclidean distance is formulated as:

    d(P, Q) = \| P - Q \|        (12)

where

    \| X \| = \sqrt{\sum_{i} x_i^2}        (13)

so that

    d(P, Q) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2 + \cdots + (p_k - q_k)^2}        (14)

    d(P, Q) = \sqrt{\sum_{i=1}^{k} (p_i - q_i)^2}        (15)
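A minimal sketch of the distance test that follows from Equations (12)-(15): the Euclidean distance between a projected test signature and the stored reference is compared with the per-writer threshold. The class and method names are invented; the paper's own implementation is in Matlab.

public class DistanceCheck {

    // Eqs. (12)-(15): Euclidean distance between two projected feature vectors.
    static double euclidean(double[] p, double[] q) {
        double sum = 0;
        for (int i = 0; i < p.length; i++) sum += (p[i] - q[i]) * (p[i] - q[i]);
        return Math.sqrt(sum);
    }

    // A signature is accepted as genuine when its distance to the reference
    // projection stays below the threshold determined from the train dataset.
    static boolean accept(double[] projectedTest, double[] reference, double threshold) {
        return euclidean(projectedTest, reference) < threshold;
    }
}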

E. Testing

Testing is conducted 550 times; each class of signature contains 5 original signatures written by the original writer and 5 forged signatures written by others. There are 55 classes in total, representing 55 types of signature. Testing is conducted to obtain the error rate of the offline handwritten signature verification whose features have been extracted with the techniques above. The performance measurement is divided into two parts: the false accept rate and the false reject rate. The false accept rate or false match rate (FAR or FMR) is the probability that the system incorrectly matches the input pattern to a non-matching template in the database; it measures the percentage of invalid inputs that are incorrectly accepted. A test counts toward FAR when the distance between a forged signature and the projected image being tested is lower than the threshold. The false reject rate or false non-match rate (FRR or FNMR) is the probability that the system fails to detect a match between the input pattern and a matching template in the database; it measures the percentage of valid inputs that are incorrectly rejected. A test counts toward FRR when the distance between an original signature and the projected image being tested is higher than the threshold. Conversely, a test is correct when the distance between an original signature and the projected image is lower than the threshold, or the distance between a forged signature and the projected image is higher than the threshold. The results of the experiment are presented in Table II.

TABLE II. ACCURACY RATE

    Measurement        PCA        PCA+LDA
    FAR                21.27 %    20.90 %
    FRR                0          0
    Total false rate   21.27 %    20.90 %
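The FAR/FRR bookkeeping described in the testing procedure above can be summarized by the small counter below; it is only an illustration of the definitions just given (names invented), not the paper's Matlab code.

public class ErrorRates {
    int falseAccepts, falseRejects, forgedTrials, genuineTrials;

    // genuine = the test signature was written by the original writer,
    // accepted = its distance to the reference fell below the threshold.
    void record(boolean genuine, boolean accepted) {
        if (genuine) { genuineTrials++; if (!accepted) falseRejects++; }
        else         { forgedTrials++;  if (accepted)  falseAccepts++; }
    }

    double farPercent() { return 100.0 * falseAccepts / forgedTrials; }
    double frrPercent() { return 100.0 * falseRejects / genuineTrials; }
}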

V. CONCLUSION

This paper presents feature extraction on offline handwritten signatures using PCA and LDA for a verification system. The experiment was conducted by extracting the offline handwritten signature features using PCA and PCA+LDA and using the extracted features in a verification system. The experiment was conducted successfully, and the results show that the features extracted using PCA+LDA are better than those extracted using PCA for feature extraction and threshold determination, which is reflected in the lower false accept rate and total false rate.

VI. FUTURE WORK

Further research is required to minimize the verification error rate, especially the false accept rate. Minimizing the false accept rate may be possible by adding more samples of each signature to the train dataset.

REFERENCES

[1] E. Maiorana, P. Campisi, A. Neri, and D. E. Applicata, "Bioconvolving: Cancelable Templates for a Multi-Biometrics Signature Recognition System," IEEE, 2011.
[2] D. Zhang, X. Jing, and J. Yang, Biometric Image Discrimination Technologies. Hershey: Idea Group Inc., 2006.
[3] S. Emerich, E. Lupu, and C. Rusu, "On-line Signature Recognition Approach Based on Wavelets and Support Vector Machines."
[4] H. B. Kekre and V. A. Bharadi, "Off-Line Signature Recognition Systems," International Journal of Computer Applications, vol. 1, no. 27, pp. 61-70, Feb. 2010.
[5] G. Agam and S. Suresh, "Warping-Based Offline Signature Recognition," IEEE Transactions on Information Forensics and Security, vol. 2, no. 3, pp. 430-437, Sep. 2007.
[6] V. Nguyen, M. Blumenstein, and G. Leedham, "Global Features for the Off-Line Signature Verification Problem," in International Conference on Document Analysis and Recognition, 2009.
[7] A. Tsymbal, S. Puuronen, M. Pechenizkiy, M. Baumgarten, and D. Patterson, "Eigenvector-based Feature Extraction for Classification," Artificial Intelligence, pp. 1-5, 2002.
[8] X. Zhang, R. Li, and L. Jiao, "Feature Extraction Combining PCA and Immune Clonal Selection for Hyperspectral Remote Sensing Image Classification," in 2009 International Conference on Artificial Intelligence and Computational Intelligence, 2009, no. 2.
[9] L. Zhang, Y. Zhong, B. Huang, J. Gong, and P. Li, "Dimensionality Reduction Based on Clonal Selection for Hyperspectral Imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 12, pp. 4172-4186, Dec. 2007.
[10] P.-F. Hsieh, D.-S. Wang, and C.-W. Hsu, "A linear feature extraction for multiclass classification problems based on class mean and covariance discriminant information," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 2, pp. 223-235, Feb. 2006.
[11] I. K. Fodor, "A Survey of Dimension Reduction Techniques," 2002.
[12] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. Boston: Harcourt Brace Jovanovich, 1990.
[13] M. S. Ahuja and S. Chhabra, "Effect of Distance Measure in PCA Based Face Recognition," International Journal of Enterprise Computing and Business Systems, vol. 1, no. 2, 2011.
[14] F. Ye, Z. Shi, and Z. Shi, "A Comparative Study of PCA, LDA and Kernel LDA for Image Classification," in 2009 International Symposium on Ubiquitous Virtual Reality, pp. 51-54, Jul. 2009.
[15] J. Ye and S. Ji, Biometrics: Theory, Methods, and Applications. Hoboken, NJ: John Wiley & Sons, Inc., 2010.
[16] S. Chowdhury, "A Hybrid Approach to Facial Feature Extraction and Dimension Reduction," Science, pp. 11-16, 2011.
[17] R. Jafri and H. R. Arabnia, "A Survey of Face Recognition Techniques," Journal of Information Processing Systems, vol. 5, no. 2, pp. 41-68, Jun. 2009.
[18] "Forensic/Questioned Document Examination," 2011. [Online]. Available: http://www.cedar.buffalo.edu/NIJ/publications.html.
[19] S. V. Rajashekararadhya and D. P. V. Ranjan, "Efficient Zone Based Feature Extraction Algorithm for Handwritten Numeral Recognition of Four Popular South Indian," Journal of Theoretical and Applied Information Technology, 2008.


Cognitive Agent Based Modeling of a Question Answering System

Eka Karyawati 1, Azhari SN

Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, Gadjah Mada University, Yogyakarta, Indonesia
1 [email protected]

Abstract— Cognitive agent or often called rational agent is one of the application aspects of cognitive computing. Cognitive agent demonstrates how machine and computational intelligence may be generated and implemented by cognitive computing theories and technologies toward autonomous knowledge processing. One of the widely known architectures for designing and implementing cognitive agents is the belief-desire-intention (BDI) architecture. This paper presents application of cognitive agent in question answering. We use Jadex framework to implement the BDI agent concepts. Beliefs, goals, and plans are three main components of human practical reasoning adopted by cognitive agent, especially in Jadex architecture. Goals make up the agent's motivational stance and are the driving forces for its actions. Therefore, the representation and handling of goals is one of the main features of Jadex. The experiments are done to show how cognitive agent concepts can be applied in question answering domain. Keywords-cognitive agent, BDI architecture, questionanswering

I. INTRODUCTION

An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives. The difference between an agent and an intelligent agent is that an intelligent agent required to be reactive, proactive and social. Furthermore, an intelligent agent that has cognitive abilities such as, reasoning, planning, scheduling ability, is named cognitive agent [6]. Cognitive agent or often called rational agent is one of the application aspects of cognitive computing. Cognitive agent demonstrates how machine and computational intelligence may be generated and implemented by cognitive computing theories and technologies toward autonomous knowledge processing [1]. One of the widely known architectures for designing and implementing cognitive agents is the belief-desireintention (BDI) architecture, following a model. BDI architecture consists of beliefs, desires and intentions as mental attitudes that deliberate human action [2]. There was some researches that utilized BDI architecture to implement cognitive agents concepts ([4], [5]). Pereira used BDI architecture to model a financial market [4]. The system applied a type of BDI agents, which are deliberative agents with a mental state defined by a belief base and by a set of desire-generation rules. Beliefs are graded and trust in information sources is


taken into account. At any moment, an agent generates a set of desires and selects a consistent subset thereof, whose elements are adopted as goals, to be achieved by executing actions. A Two-Level BDI-Agent Model was proposed by Bosse [5]. The model used BDI-concepts to describe the reasoning process of an agent that reasons about the reasoning process of another agent, which is based on BDI-concepts. The research studied how the model can be used for social manipulation. We apply the BDI agent concepts in question answering (QA) domain. Question answering is a technology that takes text retrieval beyond search engines by pinpointing answers instead of delivering ranked lists of documents. Much of the effort lies in answering whquestions, i.e. questions beginning with who, what, where, why, which, when, and how, and extracting single facts, lists of facts, or definitions from large corpora of text documents [7]. In this research the question answering system is modeled in different viewpoint. We study to get some pair question-answers from a given text. We design a model using BDI agent to model Questioner agent and Answering agent. Questioner agent try to understand a given text and then ask some questions relating to the text. On the other hand, answering agent try to understand the given text to answer the questions asked by the questioner agent. This system is interesting and useful, for instance in education domain. It can automatically generate questionanswer pairs from a given text or document. It is helpful for academic purpose, e.g., for constructing problems and their answers. We utilize surface (lexico-syntactic patterns) and partof-speech (POS) patterns to find questions from a text and to extract the answer. There are some researchers that proposed or studied about patterns based approach for answer extraction ( [7], [8], [9], [10]). The rest of the paper is organized as follows. BDI architecture is given in Section 2. QA System and Pattern Matching Based Approach is presented in Section 3. Section 4 presents Implementation of BDI agent in QA domain. Discussions about the implementation and the results are presented in Section 5. Finally, conclusions are given in Section 6.


II. BDI ARCHITECTURE

One of the most interesting and popular agent theories for designing and implementing cognitive agents is the belief-desire-intention (BDI) architecture. According to the philosophical viewpoint, the internal states and the decision process of a BDI agent are modeled in terms of mental states, such as beliefs, desires and intentions, which respectively represent the information, motivational, and deliberative attitudes of the agent [3]. The BDI model derives from the philosophical tradition of human practical reasoning. It states that humans decide, moment by moment, which actions to perform in order to pursue their goals. Practical reasoning involves two processes: (1) deliberation, to decide what states of affairs to achieve; and (2) means-ends reasoning, to decide how to achieve these states of affairs. Bratman’s theory stresses the role of intentions in human reasoning, as it states that intentions are important because they affect the selection of next actions to be executed [3]. There are some frameworks based on BDI architecture that have been proposed ([2], [3], [11]). [2] and [3] utilize Jade multi agent framework to realize BDI agent concepts. Beliefs, desires, and intensions are define and design based on Jade framework. Braubach & Pokahr proposed Jadex framework to implement BDI architecture in designing agent based system [2]. In Jadex, agents have beliefs, which can be any kind of Java object and are stored in a beliefbase. Goals represent the concrete motivations (e.g. states to be achieved) that influence an agent's behavior. To achieve its goals the agent executes plans, which are procedural recipes coded in Java. The abstract architecture of a Jadex agent is depicted in the following Fig. 1. In this research, we use Jadex framework in our implementation of BDI agent of question answering system. Fig. 1 presents an overview of the abstract Jadex architecture. An agent is a black box, which receives and sends messages. As common in PRS-like systems, all kinds of events, such as incoming messages or goal events serve as input to the internal reaction and deliberation mechanism, which dispatches the events to plans selected from the plan library. In Jadex, the reaction and deliberation mechanism is the only global component of an agent. All other components are grouped into reusable modules called capabilities. Beliefs. In Jadex, an object-oriented representation of beliefs is employed, where arbitrary objects can be stored as named facts (called beliefs) or named sets of facts (called belief sets). Operations against the beliefbase can be issued in a descriptive set-oriented query language. Moreover, the beliefbase is not only a passive data store, but takes an active part in the agent’s execution, by monitoring belief state conditions. Changes of beliefs may therefore directly lead to actions such as events being generated or goals being created or dropped.


Figure 1. Overview of BDI Architecture

Goals. Goals are a central concept in Jadex, following the general idea that goals are concrete, momentary desires of an agent. For any goal it has, an agent will more or less directly engage into suitable actions, until it considers the goal as being reached, unreachable, or not wanted any more. Application specific goal deliberation settings specify dependencies between goals, and are used for managing the state transitions of all adopted goals (i.e. deciding which goals are active and which are just options). In addition, some goals may only be valid in specific contexts determined by the agent’s beliefs. When the context of a goal is invalid, it will be suspended until the context is valid again. Plans. Plans represent the behavioral elements of an agent and are composed of a head and a body part. The plan head specifies the circumstances under which a plan may be selected, and preconditions for the execution of the plan. Additionally, in the plan head a context condition can be stated that must be true for the plan to continue executing. The plan body provides a predefined course of action, given in a procedural language. This course of action is to be executed by the agent, when the plan is selected for execution. III.

QUESTION ANSWERING SYSTEM AND PATTERN MATCHING BASED APPROACH

A. Question Answering System Question Answering (QA) is a fast-growing research area that brings together research from Information Retrieval, Information Extraction and Natural Language Processing. It is not only an interesting and challenging application, but also the techniques and methods developed from question answering inspire new ideas in many closely related areas such as document retrieval, time and named-entity expression recognition, etc.[10] The first type of questions that research focused on was factoid questions [10]. For example, “When was X born?”, “In what year did Y take place?”. The recent research trend is shifting toward more complex types of questions such as definitional questions (biographical questions such as “Who is Hilary Clinton?”, and entity definition questions such as “What is DNA?”), list questions (e.g. “List the countries that have won the World Cup”), and why-type questions.


Question answering system is a new generation of search engines [12]. The key QA system is to carry out another extraction from the recalled pages again which based on the search engines, to identify with the contents directly related to the question and return to the users. In this paper we focus on design a question answering system from different viewpoint. We define question system as a system that can generate some questions from a specific text (a text-file input). It is more like inverse of question answering system. In QA system, system will generate answer or candidate answers from a specific document, but here, system will generate questions from a specific document. Answering system is defined as a system that give an answer to a given question (a question input) where the answer is got from a specific text (a text-file input). Consequently, there are two inputs for answering system, a question and a text-file or document. For both system, question system we use pattern matching based approach, lexico-syntactic pattern (by utilizing named-entity (NE) pattern) and POS pattern. For instance, to generate what/who-question we do the following steps. First, the text-file input is segmented to some sentences. Then, each sentence is tagged using partof-speech (POS) tagger. A sequence of tags of one sentence is a POS pattern of the sentence. This sentence pattern is matched with a specific pattern to check whether the system can generate a question from the sentence or not. The last step is generate a possible questions from each of sentences contained in the text-file input based on set of patterns. In addition, to determine whether the question is what or who question, we use named-entity recognizer (NER). A sentence potentially can generate who question if that sentence contains PERSON tag. On the other hand, a what question can be generated from a sentence that contains LOCATION, ORGANIZATION, and so on. Furthermore, the pattern matching based approach, especially POS pattern can be used to answer a given question, where the input are a question and a text-file. Similar to question system, first, each sentence of text-file is tagged using POS tagger, and generate the sentence patterns. The question also is tagged using POS tagger, generate the question pattern. Then, we match the question and each sentences of the text-file to generate candidate answers. The last step is analyze the candidate answers based on set of patterns, to choose an answer. B. Pattern Matching Based Approach The pattern matching based approach utilizes several types of units of sentence, such as punctuation marks, capitalization patterns, plain words, lemmas, part-ofspeech (POS) tags, tags describing syntactic function, named entities (NEs) and temporal and numeric expressions [7]. This approach can be used to generate questions from a text or to extract the answers. Different types of answer extraction patterns have been proposed, from the most simple ones containing plain words and punctuation marks and proceed on to more complex ones that may contain NEs and syntactic


functions. In our model of QA system, we use lexicosyntactic patterns and part of speech (POS) patterns for pattern matching. They are lexico-syntactic patterns that are especially useful for handling frequent question types such as Who is . . . , Where is . . . , What is the capital of . . . , When was . . . born? [7]. To understand more clearly about the lexicosyntactic patterns, let consider the examples below, Q: When was NAME born? P1: NAME was born in < a >YEAR< /a > S1: that Gandhi was born in 1869 on a day which . . . P2: NAME (< a >YEAR< /a >-YEAR) S2: In India, Gandhi (1869-1948), was able to . . . P1 and P2 are the patterns and S1 and S2 are the snippets where the patterns contained. The capitalized words in the question and the patterns mean NEs or expressions. The patterns work so that when a new question such as When was Gandhi born? comes into the system, it is first analyzed and the string recognized as a NE of type NAME (i.e. Gandhi) is inserted into the patterns. Then the patterns are matched against the text snippets. The words inside the < a > tags show where the answer is found. In the examples above, the answer has to be recognized by the expression analyzer as being an expression of type YEAR. POS pattern based approach utilizes part-of-speech components of a sentence. To illustrate this approach, let consider the examples below, P1: NNP* VBG* JJ* NN+ NNP+ S1: ABC/NN spokesman/NN Tom/NNP Mackin/NNP P2: NNP+ , DT* JJ* NN+ IN* NNP* NN* IN* DT* NNP* NN* IN* NN*NNP* , S2: George/NNP McPeck/NNP, an/DT engineer/NN from/IN Peru/NN, P1 and P2 for handling questions beginning by Who is along with example text snippets, S1 and S2 that match the patterns. The syntax of the above POS patterns is that of regular expressions, i.e. the symbol + means at least one and the symbol * means none or any number of occurrences. The POS abbreviations such as NNP and VBG, come from the POS tagger, which uses the tag set of the Penn Treebank from LDC. The meanings of the tags used in the example patterns above are as follows: • • • • • •

• DT - determiner
• IN - preposition or subordinating conjunction
• JJ - adjective
• NN - noun, singular or mass
• NNP - proper noun, singular
• VBG - verb, gerund or present participle
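As a small illustration of how such tag patterns can be matched with ordinary regular expressions, the sketch below encodes the tag sequence of snippet S1 as a concatenated string, mirroring the pattern style stored in the Questioner agent's ADF later in the paper; the class name and the exact encoding are invented for the example.

import java.util.regex.Pattern;

public class PosPatternDemo {
    public static void main(String[] args) {
        // Tag sequence of S1 "ABC/NN spokesman/NN Tom/NNP Mackin/NNP", concatenated without separators.
        String tags = "NN" + "NN" + "NNP" + "NNP";

        // Belief-style check: a who/what question can be generated if the sentence contains NN or NNP.
        System.out.println(Pattern.matches(".*(NN|NNP).*", tags));                    // true

        // Pattern P1 for Who-is questions, "NNP* VBG* JJ* NN+ NNP+", written as a regular expression.
        System.out.println(Pattern.matches("(NNP)*(VBG)*(JJ)*(NN)+(NNP)+", tags));    // true
    }
}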

For generating question from a sentence, the set of regular expression can be matched to the sentence until a match is found. If the sentence matches against regular expression then the associated question extraction pattern can be applied to generate a question. On the other hand, for answer extraction process, the sets of regular expressions can be applied to the question,


in no particular order, until a match is found or until all the sets have been exhausted. If the question matches against a regular expression then the associated answer extraction pattern is passed to the text for locating possible answers [9]. IV. IMPLEMENTATION OF BDI AGENT IN QA DOMAIN We design the question answering agent using Jadex framework. We construct two main agent. The first agent is Questioner agent which has a goal to generate some questions from a text input. The second agent is Answering agent which has goal to answer a question. Based on the BDI model, we should define the beliefs, desires, and intentions of each agent. We define the beliefs of Questioner agent as a set of patterns which will be used by Questioner agent to check whether the agent can extract a question from set of sentences (text-file). In this research, we restrict the kind of question only for what and who question. Due to the purpose of this research is stressed for cognitive agent application, not for generating or answering question method. To extract question from a sentence, we assume that a sentence can generate a question (who/what question) if it contains noun/proper noun (in POS pattern, called NN/NNP). Consequently, we set a belief of Questioner agent is he will generate a question if a sentence contains NN/NNP tag. Other beliefs of Questioner agent is beliefs to decide whether from a sentence, agent can generate what question or who question. In our approach, who question can be generated from a sentence, if the topic of the sentence is PERSON (we can see from named-entity (NE) pattern of the sentence). On the other hand, what question can be generated, if a sentence contains LOCATION, ORGANIZATION, etc. To extract a question (segment) from a sentence, we define set of patterns (POS patterns) as other beliefs of Questioner agent. We define beliefs of Answering agent as beliefs to decide whether a question has an answer from a given input text or to generate candidate answers. We assume that a question has an answer from a given text, if their keywords are matching each other (question and sentences of the text). Other beliefs of Answering agent is belief for reasoning about how to extract the answer. In this design, set of lexico-syntactic patterns that can be used to extract answer for what/who - question is defined as beliefs of Answering agent. Besides that, an important belief of Answering agent is belief to check whether an input is who/what -question. For instance, regular expression "(What|what).*" and "(Who|who).*can be used to check. If not matching, agent will not extract an answer. Based on Jadex framework, beliefs of agent are define as the container for the facts known by the agent (i.e beliefbase). Beliefs are usually defined in the ADF and accessed and modified from plans. To define a single valued belief or a multi-valued belief set in the ADF we has to use the corresponding or tags


and has to provide a name and a class. The name is used to refer to the fact(s) contained in the belief. In this QA cases we use patterns of string as facts, for instance pattern of regular expression ".*(NN|NNP).*" is a fact of agent's belief that who/what question can be generated from a specific sentence if that pattern is matched into the sentence. Desires concept is close related to goals. Jadex does not differ desires and goals. For example, as we explained before, desire or goal of Questioner agent is to generate a set of questions from an input text document. While, goal of Answering agent is to answer a question based on a specific input text. Jadex provides an architectural framework for deciding how goals interact and how an agent can autonomously decide which goals to pursue. This process is called goal deliberation. For our case, we set that Questioner agent will generate questions after he understand the given input text. To understand a given text, Questioner agent has to recognize the sentences patterns contained in the text, NE or POS pattern. As a result, the agent must do NE recognition and POS identification, before he can generate questions. Therefore, the goal deliberation of this QA system is agent will perform actions to achieve PerfomRecognizeNE goal before perform actions to achieve PerformGenerateQuestions goal. Moreover, agent will perform actions to achieve PerformIdentifyPOS goal before perform actions to achieve generateQuestion goal. In other words, PerformRecognizeNE goal inhibits PerformGenerateQuestion goal and also PerformIdentifyPOS goal inhibits PerformGenerateQuestions goal. Similar goal deliberation is applied to Answering agent. In the last, we define intension of Questioner and Answering agent. Intentions of an agent represent what desire can be realized and what kind of action can be performed to achieve that desires/goals. This set of actions is called as plan. Plans represent the agent's means to act in its environment. Therefore, the plans predefined by the developer compose the library of (more or less complex) actions the agent can perform. Depending on the current situation, plans are selected in response to events or goals. In Jadex, plans consist of two parts: A plan head and a corresponding plan body. The plan head is declared the the ADF whereas the plan body is realized in a concrete Java class. Therefore the plan head defines the circumstances under which the plan body is instantiated and executed. The complete definition of an agent is captured in a so called agent definition file (ADF). The ADF is an XML file, which contains all relevant properties/components of an agent (e.g. the beliefs, goals and plans). For more clearly understand on how to define an agent, let see Fig. 2. Fig. 2 shows ADF of Questioner agent. There are three main components of BDI agent based on Jadex framework: beliefs, goals, and plans.
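As a minimal sketch of belief access from inside a plan, using only the Jadex calls that also appear in the paper's own GenerateQuestionsPlan listing (Fig. 3); the class name here is invented, while the belief names are the ones the text declares for the Questioner agent's ADF.

package questioner;

import jadex.runtime.Plan;

public class ShowBeliefsPlan extends Plan {
    public void body() {
        // Single-valued belief declared as a named belief in the ADF.
        String input = (String) getBeliefbase().getBelief("inputFile").getFact();
        // Multi-valued belief declared as a named belief set in the ADF.
        String[] posPatterns = (String[]) getBeliefbase().getBeliefSet("POSPattern").getFacts();
        getLogger().info("input file: " + input + ", " + posPatterns.length + " POS patterns");
    }
}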


Figure 2. ADF of the Questioner agent (excerpt). The agent definition file declares the Questioner agent, which generates questions from the input text after NER and POS tagging of its sentences, together with its beliefbase of String patterns used for checking the sentences: the pattern ".*(NN|NNP).*" indicating that a question can be generated from a sentence containing an NN/NNP tag, question extraction patterns such as "(DT)*(NN)*(IN)*(NNP)+(NN)*,(DT)*(JJ)*((NNS)|(NN))+(IN)*(NNP)*(NN)*(IN)*(DT)*(NNP)*(NN)*(IN)*(NN)*(NNP)*,.*" and "(DT)*(NNP)*(VBG)*(JJ)*(NN)+,(NNP)+,.*", and the named-entity patterns "O*(PERSON)+.*", "O*(ORGANIZATION)+.*" and "O*(LOCATION)+.*". The ADF also lists the imports (java.util.logging.*, java.util.*, jadex.adapter.fipa.*) and the input file "test.txt".


Examples of the Questioner agent's plans are a method for recognizing the named entities of a sentence (NERPlan), a method for identifying the parts of speech of a sentence (POSTagPlan), and a method for generating questions (GenerateQuestionsPlan).

Similar to the Questioner agent, the Answering agent also has plans, i.e. actions to achieve its goal of understanding a given text and then answering a question based on that text. However, in this design the Answering agent does not need to recognize named entities as the Questioner agent does; consequently its plans include a method for identifying parts of speech and a method for extracting an answer to a question. An example of a plan written as a Java class on top of the Jadex framework can be seen in Fig. 3; plan classes inherit from the class Plan.

package questioner;

import jadex.runtime.*;
import java.util.StringTokenizer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.swing.JOptionPane;

public class GenerateQuestionsPlan extends Plan {

    public GenerateQuestionsPlan() {
        getLogger().info("Created: " + this);
    }

    public void body() {
        // Read the input file name and the patterns from the agent's beliefbase.
        String input = (String) getBeliefbase().getBelief("inputFile").getFact();
        String preposP = (String) getBeliefbase().getBelief("prePOSPattern").getFact();
        String[] posP = (String[]) getBeliefbase().getBeliefSet("POSPattern").getFacts();
        String[] neP = (String[]) getBeliefbase().getBeliefSet("NEPattern").getFacts();

        // Check which sentences can produce a question, then generate the questions.
        preCondition(input, preposP);
        generateQuestion(neP, posP);

        getLogger().info("There are " + num + " sentences contained in the text:");
        for (int i = 0; i < num; i++) {
            // ... the remainder of the listing is truncated in the proceedings ...
        }
    }

    // preCondition(...), generateQuestion(...) and the field num are defined in the
    // rest of the class, which is not reproduced in the excerpt.
}

Figure 4. Cognitive Agent in QA Domain

The illustration of the cognitive agent based QA system is presented in Fig. 4. The agent on the left-hand side is the Questioner agent: it tries to understand a given text by analyzing it sentence by sentence, checking whether it can generate questions from those sentences. The agent on the right-hand side is the Answering agent: similar to the Questioner agent, it also tries to understand a given text by analyzing it sentence by sentence; moreover, it tries to understand a given question and checks whether it can extract an answer to the question from the text.

This paper presents a system design of question answering based on a BDI agent utilizing only three components: beliefs, goals, and plans. Other Jadex components, such as events, capabilities, and agent communication, could also be used. We have just started learning about BDI agents, so only a simple implementation can be presented, but it is not useless: it shows how BDI concepts can be implemented in a QA system. Using the other components would make the system more powerful.

V. DISCUSSIONS

Cognitive agent based modeling is an interesting topic, and the concept can be applied especially for simulation purposes. In our implementation, we start by triggering the agent's plans with message events; a request message is used to ask the agent to perform a certain action. For instance, request_generateQuestions is used to ask the agent to generate questions from a given text. However, a cognitive agent should automatically and autonomously decide which plans to choose based on its reasoning ability. The reasoning aspect of a cognitive agent can be realized, especially in Jadex, by the goal deliberation process; as a result, the triggering of the agent's plans comes not only from external messages but also from goal deliberation.

Another important property of cognitive agents is their ability to communicate with each other. In our design of the QA system we do not yet include communication between agents. This is actually a very useful component that can be applied to obtain a more powerful system. For instance, we could design a communication between the Questioner agent and the Answering agent: the Questioner agent asks a question (based on a given input text) of the Answering agent, and the Answering agent gives an answer (based on the text). This mechanism can effectively be used to generate question-answer pairs from a text. It is also useful for evaluating the effectiveness of the question generation and answering methods (this research uses a pattern matching based method for generating and answering questions).

VI. CONCLUSION

There are many experiments that can be done using the cognitive agent model. We have just started studying cognitive models and consequently do not yet have much experience with cognitive agent experiments. The cognitive agent concept is very interesting: it models the human thinking or reasoning process for solving a problem or achieving a goal. Beliefs, goals, and plans are the three main components of human practical reasoning adopted by the cognitive agent, especially in the Jadex architecture. Goals make up the agent's motivational stance and are the driving forces for its actions; therefore the representation and handling of goals is one of the main features of Jadex. A question answering system, a challenging information processing technology, can be designed using a cognitive agent based approach. We can flexibly choose the scenario of the system, depending on the purpose of the study; moreover, many features or tools of an agent framework (e.g. Jadex) can be utilized to make the system more powerful. In future work we will focus more on goal deliberation processes and on communication between agents for designing the question answering system. We will also investigate different approaches to generating and answering questions. Furthermore, variations of the methods can be evaluated by performing simulations based on the cognitive agent model.

REFERENCES

[1] Wang, Y., 2009, On cognitive computing, International Journal of Software Science and Computational Intelligence, vol. 1, no. 3, pp. 1-15.
[2] Braubach, L. & Pokahr, A., 2010, Jadex User Guide, University of Hamburg, Germany.
[3] Morreale, V., Bonura, S., Francaviglia, G., & Centineo, F., 2006, Goal-Oriented Development of BDI Agents: the PRACTIONIST Approach, In Proceedings of the International Conference on Intelligent Agent Technology, Hong Kong, December 18-22.
[4] Pereira, C. C., Mauri, A., & Tettamanzi, A. G. B., 2009, Cognitive-Agent-Based Modeling of a Financial Market, In Proceedings of the International Joint Conference on Web Intelligence and Intelligent Agent Technology, Milano, Italy, September 15-18.
[5] Bosse, T., Memon, Z. A., & Treur, J., 2008, A Two-Level BDI-Agent Model for Theory of Mind and its Use in Social Manipulation, In Proceedings of the European Conference on Ambient Intelligence (AmI 2008), Nuremberg, Germany, November 19-22.
[6] Padgham, L. & Winikoff, M., 2004, Developing Intelligent Agent Systems - A Practical Guide, John Wiley & Sons, RMIT University, Melbourne, Australia.
[7] Aunimo, L., 2007, Methods for Answer Extraction in Textual Question Answering, Technical Report, Helsinki University Printing House, Helsinki.
[8] Mao, C. L., Li, L. N., Yu, Z. T., Han, L., Guo, J. Y., & Lei, X. L., 2009, Research on Answer Extraction Method for Domain Question Answering System (QA), In Proceedings of the International Conference on Computational Intelligence and Security.
[9] Greenwood, M. A. & Saggion, H., 2004, A Pattern Based Approach to Answering Factoid, List and Definition Questions, In Proceedings of the 7th RIAO Conference (RIAO 2004), France, April 26-28.
[10] Wang, M., 2006, A Survey of Answer Extraction Techniques in Factoid Question Answering, The Journal of the Association for Computational Linguistics, vol. 1, no. 1.
[11] Nunes, I., Lucena, C. J. P., & Luck, M., 2011, BDI4JADE: a BDI layer on top of JADE, In Proceedings of the Ninth International Workshop on Programming Multi-Agent Systems (ProMAS 2011), Taipei, Taiwan, May 2-6.
[12] Zheng, S., Liu, T., Qin, B., et al., 2002, Overview of question answering, Journal of Chinese Information Processing, vol. 16, no. 6, pp. 46-52.


GamaCloud: The Development of Cluster and Grid Models Based on Shared-Memory and MPI

Mardhani Riasetiawan
Researcher and Lecturer, Department of Computer Science and Electronics, Faculty of Mathematics and Natural Science, Universitas Gadjah Mada
[email protected]
Lead Researcher, Research and Working Group on Cloud and Grid Technology, Universitas Gadjah Mada, www.cloud.wg.ugm.ac.id

Abstract—This research aims to develop a cloud technology architecture in a single box. GamaCloud gives an example of how to use re-used or unused hardware and other computer components by developing a grid architecture that is expected to provide computation capability at a high-performance level. GamaCloud is developed by designing and configuring the hardware components. The research consists of three phases: configuring the cluster, enhancing performance using a grid environment, and establishing the cloud application. The research contributions address green technology issues, especially re-usable technology strategies, and a compact design of a cloud architecture and system for small and medium purposes.

Keywords—cloud technology, grid architecture, GamaCloud, cluster, green technology, computer system.

I. INTRODUCTION

The technology environment faces several issues concerning green technology, smart and intelligent environments, and research collaboration for advanced research. Green technology looks for solutions that use hardware and software cleanly and for long-term usage. The smart and intelligent environment has the objective of creating life with intelligent technology support. Research collaboration is designed to solve current problems in communities in advanced ways. The process of adapting to these issues consumes a lot of energy. Enterprises, organisations, and people push their resources to plan and manage information technology with smart and green strategies. Unmanaged use of information technology resources will contribute to waste and affect the environment; it will also affect the users, especially by creating risks. A closed environment will persist as long as research is not open but carried out in independent areas. The main issue in the technology environment today is cloud technology, and in business the cloud-nomics model [1,3]. Cloud technology has proven to be an architectural model for creating open and shared service models for every enterprise service. The cloud-nomics model gives opportunities for out-of-the-box creation of business process models.


The cloud technology penetration has effected on the hardware. There are many inventions on hardware, especially when we talk about tablet computers, gadget mobile, virtualization, and computation. Many of them, work together on cloud technology infrastructure, with consequences, the hardware need to connected and have bandwidth support to work normally. The issues have increasing the hardware investment. Universitas Gadjah Mada, especially Department of Computer Science and Electronics realise that the need of infrastructure, cloud ready, large volume computation capabilities has been the need to day and the futures. The research to establishing the project has started, and the research on GamaCloud have objective to establishing the PC based product that have capabilities as cloud technology and systems The paper will discuss the activities of the development of architectures. Section II will discuss the several literature and best practice in the region about the implementation of infrastructures. Section III will discuss about the propose models and phases. Section IV is discuss about the current results of models, and the summary and future works is explain in the last section HIGH PERFORMANCE COMPUTER, GRID AND CLUSTER PROJECTS CERN has established in the year 2005 as a new particle accelerator, the Large Hadron Collider (LHC)[2]. CERN have produced Four High Energy Physics (HEP). The experiments are to produce several Petabytes of data per year over a lifetime of 15 to 20 years. CERN is one of the research infrastructures involving the high performance and large-scale computer. It has implemented the technology components on worldwide Data grid. In the several reports, CERN has demonstrated the high quality technology through the large-scale deployment of end-to-end application experiments involving the real users. CERN have ability to integrate and manage large general-purpose, data grid computer clusters constructed from low cost components. They are several Grid Data projects, Globus [4] and Legion [9], which were directed on computational grids and II.

65

ISSN: 2088-6578

Yogyakarta, 12 July 2012

distributed data management. The projects are integrating the components with the infrastructures. Globus have main component, the Global Access to Secondary Storage (GASS) API. The component is performs as tasks that related to data management. The components are providing remote file I/O operations, manage local file caches and file transfers in a client-server model, and run as multi-protocols [4]. The Particle Physics Data Grids project has developing a grid infrastructure with capabilities on high-speed data transfers and transparent access [5]. It is common use as replica management, high performance networking and interfacing with different storage brokers. The Grid Physics Network a project objective is pursuing an active program of fundamental IT research to realise data and storage virtualization [6]. Storage Request Broker addresses accommodates a uniform interface to heterogeneous storage systems and accessing replicated data over multiple sites. It provides ways to access data sets based on their attributes rather than physical location using metadata catalogue [7]. Data Grid Environment has implemented in the three layers, there are network project design, workflow management, and data management components [8]. The network project design has been implemented to connect the data centre, data store and computing facilities and cluster labs in the hierarchical ways. Data core and middle-ware services have developed as the work flow management components. Data management components consist of the collaboration between storage, data services and function services. The technology environment, especially on hardware has a lot of approach. There are products that design for multifunction has competed with products that have limited functions. The use of PC router can replace by small components in the box. The network management system can manage by single network devices. The approach has showed the contemporary approach on hardware, called technology on the box. The use of hardware is wisely, environment orientation, and manageable are the focus of the smart and intelligent environment. The use of infrastructure by re-engineering the configuration, involving the re-usable components, distribution of computational power will help many entities to have high performance power in the moderate cost [9]. III. THE PROPOSE MODELS AND PHASES The research is use three models experiments. They are Cluster Models, Cluster on the box and CloudBox. The phase 1, cluster models are gathered, collect, and configure the PC's. The phase dedicated to analysis the hardware readiness for working on cluster environment. The phase is also to analyse and observe the cluster work and process in the integrated environment. The phase is resulting the basic hardware configuration to do the cluster-working environment.

66

CITEE 2012

Phase 1 Cluster Model

Phase 2 Grid Model

Phase 3 Cloud Model

Figure 1. Research Model & Phases [9] The phase 2, grid models are change the cluster model (on phase 1) into grid environment. The research does over clocking experiments to obtain the high performance of each node. The phase objective is to have re-engineering architectures that work on large-volume computation. The phase 3, cloud models are developing and implement the cloud based application that can handle large volume digital data. In this phase, we use DALA project [10] that work on the scientific digital data management and preservation. The result of the phase is the establishment of GamaCloud v.1.0 product. The phase 3 does not implement in the research. It will the future works to establishing the GamaCloud V1.0. IV. RESULT In this section, we are explain the several result of the research, such as the hardware specification an configuration, cluster models, grid models, and cloud models. Finally, we present the GamaCloud v.1.0 as the main product of the research. As mention before, the research is use re-usable technology to develop the models. Based on the previous research on CloudBox projects [10], we design and do runtest to select the PC. The first steps, the research use 15 PC and then double in this research, with different hardware specification (Table 1) A. Cluster Models The cluster models have involved 30 PC for cluster nodes. In the first experiments, the research use BCCD for the operating system. The BCCD is live CD operating system to run and learn about distributed computing in the cluster

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

modes. The research use BCCD to explore and observe the hardware readiness and configuration strategies.

No

Table 1. Hardware Specification PC_id Processor

ISSN: 2088-6578

the 30 PC working on OpenMosix configuration and shared memory implementation has shown in Figure 2. Shared memory

Memory

Node 1

Node 2

Node 30

Step 1 1

PC_Cluster1

PIII

512

2

PC_Cluster1

PIII

512

3

PC_Cluster1

PIII

512

4

PC_Cluster1

PIII

512

5

PC_Cluster1

PIV

512

6

PC_Cluster1

PIV

512

7

PC_Cluster1

PIV

512

8

PC_Cluster1

PIV

512

9

PC_Cluster1

PIV

1024

10

PC_Cluster1

PIV

1024

11

PC_Cluster1

PIII

1024

12

PC_Cluster1

PIII

1024

13

PC_Cluster1

PIII

1024

14

PC_Cluster1

PIII

1024

15

PC_Cluster1

PIV

1024

Step 2 16

PC_Cluster2

PIII

1024

17

PC_Cluster2

PIV

1024

18

PC_Cluster2

PIV

1024

19

PC_Cluster2

PIV

1024

20

PC_Cluster2

PIV

1024

21

PC_Cluster2

PIV

1024

22

PC_Cluster2

PIV

1024

23

PC_Cluster2

PIV

1024

24

PC_Cluster2

PIII

1024

25

PC_Cluster2

PIII

1024

26

PC_Cluster2

PIII

512

27

PC_Cluster2

PIII

512

28

PC_Cluster2

PIV

512

29

PC_Cluster2

PIV

512

30

PC_Cluster2

PIII

512

The cluster models has implemented the OpenMosix configuration and shared memory strategies. The OpenMosix is configuration preparation for implement the shared memory among the nodes. The architecture design of

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

OpenMosix

Network management

Figure 2. The Cluster Architectures The cluster architectures have several characteristics, they are 1. The node still use the PC model architectures 2. The cluster models develop by use share memory from the nodes to have the maximum memory capacities. 3. The node still work independently 4. The performance of cluster models limited by the specification of each PC's. 5. The integration system as cluster computing in the first level because of the different of specification in each node. 6. The cluster models need work load and balancing especially on task process. The shared memory on the OpenMosix configuration has implemented several scripts that shown in the Figure 3. → Modify /etc/openmosix/openmosix.config # pico /etc/openmosix/openmosix.config AUTODISC=0 MIGRATE=yes BLOCK=no MFS=yes → adding the notes # pico /etc/openmosix.map 1 10.0.57.31 – 160.0.57.60

Figure 3. The OpenMosix configuration B. Grid Models In the second step of the research, we develop from the cluster model into grid models. The research does reengineering of hardware architecture. The steps focus on

67

ISSN: 2088-6578

Yogyakarta, 12 July 2012

configuring the PC based in to the grid environment. We combine the science-forge project [11] to address the need of data grid environments. The grid model use for combining the software configuration between: 1. Operating system 2. Databases in storage cluster 3. Middle-ware services 4. Massive Parsing Interface applications MPI

Node 1

Node 20

Node 2

CITEE 2012

from any MPI nodes. The figure below showed the use of MPI services and configurations, just in case the research use Himeno Benchmark test for MPI. →uncompress f77_xp_mpi.zip → uncompress perftest.tar # ./configure Make all → running the measurement tests # mpirun -np 20 -machinefile /tmp/machines ./mpptest # mpirun -np 20 -machinefile /tmp/machines ./goptest → execute paramset.sh # ./paramset.sh S 1 1 2 → compile himenoBMTxpr.f # mpif77 -o himenoBMTxper himenoBMTxpr.f → running himenoBMTxpr # mpirun -np 20 -machinefile /tmp/machines ./himenoBMTxpr

Network management

Node 21

..

Node 25

→ compile -mpilog files # mpif77 -mpilog -o himenoBMTxpr himenoBMTxpr.f # mpirun -np 20 -machinefile /tmp/machines ./himenoBMTxpr Upshot # upshot himenoBMTxpr.alog

Figure 5. MPI configurations Midlew are services

Storage cluster Figure 4. The Grid Architectures Storage cluster

The grid architecture consist three layer data grids, there are MPI node layers, middle-ware services layer, and storage cluster layers. MPI layer works as application interface layers to work with the tasks. It is similar with the cluster models, more advanced; the research implemented the MPI configuration to optimising the computation power of data grids. The middle-ware service has implemented the Globus as main components, the services works on workload management, balancing, and task managements. The layer as the important layer is to communicate with the storage cluster, especially on data handling and access to the storages. The storage layer works as data management and storage facilities. The research has used the MySQL cluster as the main services. The storages works is simultaneously to process every task from the middle-ware. The storage cluster dedicated to handle large-volume data processing

68

V. SUMMARY AND FUTURE WORKS The research has running the two experiment use share memory and massive parsing interfaces for establishing both cluster and gird models. The summary of these two approaches can be defined as shown in Table 2. Tabel 3. Comparison between Cluster and Grid Models N Parameters Cluster Grid o Values 1

Hardware PC architecture architectures s

based Cluster, middleware, and storage

2

Resource By default collaboratio dedicated n

4

Computatio Not equal on each Manageable n power nodes

and Dedicated

Risks 5

Integration

Network connected

Computation connected

unit

6

Network

1 network

Multi-tier networks

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

coverage 7

Process

Parallels

distributed

The future work of the research is implementing the cloudbox models. The issues are use and configure the cloud with the high performance computing models. The architecture face to face deals with the high-level infrastructure and massive data landscape. The research will manage ACKNOWLEDGEMENT The research gives credits to CICTS UGM, Social Informatics Research Study, and Intelligent Technology Initiatives for the infrastructure and environment for the research works. REFERENCES [1] [2] [3] [4] [5] [6] [7]

[8]

[9] [10]

[11]

M. Riasetiawan.”Cloud-nomics Indonesia Outlook: Potensi Bisnis berbasis Cloud Services”. The 7th E-Indonesia Initiative Forum, ITB, Bandung, Indonesia. CERN, The European Organization for Nuclear Research: http://public.web.cern.ch/public/en/LHC/Computing-en.html. Golden Bernard,”Cloudnomics: The Economics of Cloud Computing”. Www.cio.com, accessed at April 29, 2011 I. Foster, C. Kesselman, J. Nick, and S. Tuecke. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration WG, Global Grid Forum, June 22, 2002 PPDG : Particle Phyisic Data Grid, http://cacr.caltech.edu/ppdg/ GriPhyN : Grid Physics Network, http://griphyn.org/ W. Jhonston, D. Cannon, B. Nitzberg. Grids as Production Computing Environments: The Emerging Aspects of NASA’ Information Power Grid. Eight IEEE International Symposium on High Performance Distributed Computing, Redondo 1999. M.Riasetiawan, AK. Mahmood.”Managing and Preserving Large Data Volume in Data Grid Environment”.2010 International Conference on Information Retrieval and Knowledge Management (CAMP’10).Shah Alam, Selangor, Malaysia. March 17-18, 2010. (IEEE) M. Riasetiawan.”CloudBox: Cloud technology on the box”. The 8th E-Indonesia Initiative Forum, ITB, Bandung, Indonesia. April 2425, 2012 M. Riasetiawan, AK. Mahmood.”DALA Project”.The Second International Conference on Distributed Framework and Applications (DfmA) 2010.FMIPA Universitas Gadjah Mada. August 2-3, 2010 (IEEE) M. Riasetiawan, AK. Mahmood.”Science-Forge: a Collaborative Scientific Framework”.IEEE ISIEA.Penang – Malaysia. October 36, 2010.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

69

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Sub-Trajectory Clustering for Refinement of a Robot Controller Indriana Hidayah Electrical Engineering and Information Technology Department Faculty of Engineering, Gadjah Mada University, Yogyakarta, Indonesia [email protected]

Abstract—An explorer robot aimed to explore as much space as possible is an important technology that can be applied in many situations, especially in extreme environment, such as in outer space and under water environment. Despite of making an expensive and complex robot, the exploration task can be done collectively by some simpler robots. The main problem of such approach is how to organize the robots. In this paper, a data mining method is exploited to analyze huge trajectory database of the simple explorer robots to improve their controller. A subtrajectory clustering method is proposed to find when and where some robots move in a similar route. By discovering this regions of interest, a further analysis can be performed to find the cause of such behavior. Then, a refinement to the controller can be suggested. Keywords; data mining, sub-trajectory clustering; robot controller.

I.

INTRODUCTION

In our explorer robot experiments, trajectory database is produced. The trajectory database consists of numerical 2 dimensional x and y positions in a sequence of time, showing the movement of the robots in a workspace. This database can be considered as a time series data, that is data changing with time. Analysis on trajectory data has become important in various applications, such as analysis of moving objects in a surveillance system [10] and analysis on hurricane or wild animal movement [6]. One common problem in analysis of trajectory data is finding similar trajectories. In the context of explorer robots, finding similar trajectories made by the robots is an important task. This is due to the mission of the robots that is to explore as much space as possible. Therefore, the taking of similar route should be avoided, because it will decrease the exploration rate. Dealing with abundant automatically generated trajectory database, a data mining technique can be exploited. One way to finding similar data instances can be done by using a clustering method. Clustering is the process of organizing data instances into groups such that the data instances in the same cluster are similar to each other and data instances in different clusters are very different from each other [8]. Ability to cluster is a component of intelligence that we would like computer agents to perform. Thus, it will help human to analyze huge amount of data, because analyzing it manually is a very laborious task. Clustering techniques have been applied successfully in many fields, such as market research, pattern recognition, and image processing.

70

Clustering can also be applied to trajectory database, such as data generated by GPS positioning, to find actionable knowledge in many applications, for example air and waste management [5]. Several methods have been proposed to deal with clustering trajectory database. However, most of the works are done to the whole trajectory. In our case, we need to analyze the segments of the trajectory, or we call it sub-trajectory. Because from the whole trajectory database, we need to find sub-trajectories where some robots move in a similar shape, that is what we need to avoid to increase the exploration rate of the robots. Lee et.al. proposed a partion-and-group framework to find groups of common segments of trajectories, called TRACLUS [6]. TRACLUS is actually a density-based clustering technique that is applied to line segments rather than data points. Therefore, this framework can be used to deal with our robot trajectory database where subtrajectories can be considered as line segments. Yet, there should be some modifications to the framework to fit in the case. In this work we proposed a modification to partitionand-group framework in Ref [6]. As in TRACLUS, there are two phases in this work. Firstly, the original robot trajectory database must be segmented. Modification is done to the segmentation method, as we are going to describe in section II. Then, clustering method is applied to group the sub-trajectories based on similarity of the trajectory shape and concurrency. Here, a density-based clustering algorithm is adapted from [9]. Density-based clustering is suitable for our work because it can find arbitrary shape. The clustering algorithm will be described in section III. Based on the clusters found, useful information can be generated, i.e. when and where in the workspace those robots move similarly. The knowledge thus obtained can become a hint for refining the robot controller. The rest of the paper is organized in the following way. The formal description of the problem is stated in section II. Section III presents the segmentation method applied in this work. Section IV describes the clustering method, including the density-based clustering concept and the distance function. Result of the experiments and the analysis is presented in section V. Finally, Section VI concludes with a summary and a future plan.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

II.

PROBLEM STATEMENT

IV.

We perform a clustering task as a tool to analyze our numerical robot trajectory database. The robot trajectory database consists of trajectories, i.e. T1, T2, T3, and T4, of 4 explorer robots, i.e. robot 1, robot 2, robot 3, and robot 4 respectively. Ti consists of sequences (xi1,yi1), (xi2,yi2), …, (xit,yit) of the positions (xij,yij) where i represents the robot which make the trajectory, in this work i=1...4, during an experiment and j is the time sequence, respectively. The trajectory data is generated automatically in real time from a video of the robots moving in a workspace. The input to the clustering algorithm is a set of trajectories, I={ T1, T2, ... Ti}, where i is the number of the robots. The output of the system is a set of clusters O={ C1, C2, ... Cn}, where n is the number of clusters. III.

SEGMENTATION METHOD

Segmentation to the original data is an important phase for our task, because we concern on finding subtrajectories from the whole trajectory where two or more robots move similarly. TRACLUS has a formal segmentation method, yet it is not suitable for our robot trajectory data. It tries to find characteristic points, that is where the behavior of the trajectory change dramatically [6]. Finding such points in our case will make the line segments obtained very different in the length and happening time, as a consequence finding common subtrajectories which happen in the same time is difficult. Due to this problem, we tried to apply another method. The choice of segmentation method is effected by several considerations, i.e. the accuracy of clustering result, the compatibility with distance function, the optimality of computations, and the elimination of noise. Therefore, we firstly apply PAA method, stands for piecewise aggregation approximation as in reference [4], to compress and eliminate noise from the original time-series data then segment the PAA representation into equi-sized sub-trajectories. Given a trajectory Ti, we apply PAA to get the PAA representation of the trajectory, Ti'=(xi1,yi1), (xi2,yi2), …, (xiW,yiW). The ith element of (xiW,yiW) is calculated by equation (1). The compression ratio between the original time series data with the PAA representation is calculated by equation (2).

xi =

W w c=

w i W

j=



x

j w ( i − 1) + 1 W

w W

(1)

(2)

Then, the compressed data Ti' is divided into equisized segments of some consecutive data points to get the sub-trajectories, L. The length of the segments is specified by the user, called LengSeg. Then, all the sub-trajectories are collected in D. Thus, D={ L1, L2, ... Ls} is a set of subtrajectories which have been extracted from all raw trajectory database where s is the number of total subtrajectories obtained.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

ISSN: 2088-6578

CLUSTERING METHOD

In this section, the clustering method applied in this work is explained. As mentioned previously, a densitybased clustering method will be explored in term of line segments. A suitable distance function is chosen based on the purpose of this work. A. Density-based clustering Density-based clustering algorithm introduced by Martin et,al. is called DBSCAN [9]. This algorithm works by recognizing a cluster based on its density. Areas which are dense considered as clusters. While, areas which are sparse considered as noise. Given a space containing many points, for each point of a cluster the neighborhood of a given radius has to contain at least a minimum number of points, i.e. the density of the neighborhood must exceed some threshold. Therefore, DBSCAN can be used to find clusters with arbitrary shape.

(a) Robot trajectory

(b) Robot trajectory with clusters Figure 1. The robot trajectory consists of 4 robots A robot trajectories data of this work is shown in Figure 1a. This trajectory is generated from an experiment of 4 robots with the same controller. By applying the density-based notions of clusters we can intuitively identify several clusters as shown in Figure 1b, because the density of those areas are higher than the others. The notions for density-based clusters in term of line segments can be studied from reference [6]. The important concept in this clustering method is neighborhood of a line segment assigned as a cluster centroid, Nε(Li). This concept formally defined in equation (3). Some parameters work in the notions are ε and minLns, defined as the maximum distance between two line segments allowed in a neighborhood and a minimum number of lines in a neighborhood, respectively.

71

ISSN: 2088-6578

Yogyakarta, 12 July 2012

{

N ε ( Li ) = L j ∈ D dist ( Li , L j ) ≤ ε

}

N ε ( Li ) ≥ min Lns

(3)

(4)

B. Distance Function Another benefit of using DBSCAN is that the algorithm can work with any distance function. Thus, an appropriate distance function can be chosen for a specific application. In this work, similarity between subtrajectories or line segments is computed. To be precise, we would like to find sub-trajectories possessing similar shape. A distance function used to match between two lines that is adapted from pattern recognition field [3] is chosen. The distance function is called line segment Hausdorff distance (LHD). LHD is suitable for our purpose, due to According to Jingying et.al. [3], LHD distance function is composed of three components, that is perpendicular distance d┴, parallel distance d║, and angle distance dϴ. Given two sub-trajectories or line segments Li and Lj, then the distance d(Li, Lj) is computed using equation (5). d(Li, Lj)=w┴ . d┴(Li, Lj)+w║ . d║(Li, Lj)+wϴ . dϴ(Li, Lj)

(5)

Figure 2 illustrates the two sub-trajectories and the three distance components. The perpendicular distance, parallel distance, and angle distance between two line segments are defined in ref [3].

CITEE 2012

Time check is to check if there is a sub-trajectory that happens in different time compared to the other subtrajectories belong to a cluster. This different subtrajectory should be filtered out from the cluster. Note that we should define parameter to measure how far two line segments is, here we use δ. Sub-trajectories with δ bigger than a given threshold should be filtered out. Based on the description in the previous sub-sections, we can describe our algorithm rigorously as in Figure 3. Sub-Trajectory clustering Input Output

: I={ T1, T2, ... Ti} : A set of clusters O={ C1, C2, ... Cn}

*Segmentation phase* 1: for all Ti do 2: devide into equi-sized frames 3: for each frame do 4: compute the mean → PAA representation point 5: store the point in D 6: endfor 7: endfor *Clustering phase: use clustering algorithm in [6]* 8: Allocate every clustered L∈ D to its cluster *Check the time* 9: Reset clusterId tobe 0 10: for each L∈ Cn do 11: Let N be the first line segment in Cn 12: if N is unclassified then 13: if time distance ≤ δ then 14: assign clusterId 15: endif 16: endif 17: enfor

Figure 3. Sub-trajectory clustering algorithm V.

EXPERIMENT

In this section, the experiments and the result are explained. A. Experimental setting We use one of some numerical robot trajectories which is extracted from video of 4 robots patrolling in a workspace. The robots mission is to explore the area as much as possible before a fatal collision between the robots happen thus stopping the exploration. Figure 2. Two line segments with the three distances C. Generate sub-trajectories sequence and time check In this sub-section, we describe a task that should be done to make the clusters more informative to the designer of the robots. A Sub-trajectory sequence, in the rest of the paper will be called sub-trajectory, is a sequence of line segments belong to a cluster which are extracted from the same origin. In order to gain useful information, every cluster has to consist of sub-trajectories from different origin of trajectory. Thus, a robot designer will know that in a cluster there are several robots move in the same area.

72

The trajectory data consists of the xy-coordinates of each robot during the experiment. Therefore, there are four two-dimensional trajectories each with 4718 data points. To measure the quality of the clustering result, we compute the accuracy of the clustering by using external labeled data. The accuracy is defined as the ratio between the correct class label given by the clustering algorithm with the true labels. Let z = { 1, 2, 3, …, k} denote the set of class labels, θ(.) and f(.) denote the true label and the label given by the algorithm of data points, respectively. The clustering accuracy β can be computed by equation (6) as defined in reference [8]. 1 n  β ( f ) = max  ∑ I {τ ( f ( xi )) = θ ( xi )} τ ∈∏ z n  i=1 

(6)

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

We conduct the experiment on an Intel(R)Core(TM) Duo CPU L2400 at 1.66GHz with 2.49GB of main memory, running on Windows XP. We implement our algorithm in Scilab 5.2.0. B. Result Several experiments have been done to investigate the proposed method. Figure 4 shows some clusters which are found from our robot trajectory database in several experiments by varying the value of time difference between two sub-trajectories, δ, and the length of each sub-trajectory, LenSeg.

ISSN: 2088-6578

VI.

CONCLUSION

Based on the result discussion, we can conclude as the following. 1) Applying density-based clustering on robots subtrajectories yielding enough accuracy to find the targeted goal, i.e. where robots tend to move similarly. This indicates that the clustering algorithm is suitable for the case. 2) Some inaccurate clusters found may be caused by improper choice of segment length during the segmentation phase. However, we need to study this detail more extensively to be sure. Overall, this work is only the first step toward the refinement of the robot controller. Further study is needed to specifically find the reason why the robots move similarly, thus, this condition can be avoided.

REFERENCES [1]

Figure 4.

Some clusters found, with ε=40, minLns=10, δ=10 seconds, and LenSeg=10

It is shown that the sub-trajectories grouped in a cluster indeed possessing similar shape and they happen almost in the same time, δ is set to only 10 seconds. Due to manual evaluation of the video, a fatal collision often follows this condition. However, some clusters consist of sub-trajectories that rather far different in shape. According to the experiments, longer LenSeg tend to yield lower accuracy. By varying parameters in this experiment, the accuracy of the clusters which are found by the method is analyzed. Table 1 shows the result of the experiments. TABLE I.

PERFORMANCE PARAMETERS WITH THE ACCURACY Parameters

Compression rate (c)

LenSeg

20

10

40

10

0,59

20

5

40

10

0,76

ε

minLns

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Accuracy ( β)

Gaffney, S. and Padhraic Smyth, Trajectory Clustering with Mixture of Regression Models, Proceeding 5 th ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, pp. 63-72, August 1999. [2] Gautam, D. et.al., Rule Discovery from Time Series, Proceeding on KDD-98, pp. 16-22, August 1998. [3] Jingying, C., Maylor K.L., and Yongsheng G., Noisy Logo Recognition Using Line Segment Hausdorff Distance, Journal of The Pattern Recognition Society, pp. 943-955, 2003. [4] Keogh, E. and Michael J.P., Scalling up Dynamic Time Warping for Data Mining Applications, Proceeding on KDD-2000, pp. 285289, 2000. [5] Kleiman, G., et.al., Trajectory Clustering Techniques, Air and waste management Association, Visibility Speciality Conference, October 2004. [6] Lee J.G., Jiawei H., and Kyu-Young W., Trajectory Clustering: A partition-and-group framework, Proceeding on SIGMOD-07, pp. 593-604, June, 2007. [7] Lee J.G., Jiawei H., Xiaolei L., and Hector G., TraClas: Trajectory Classification Using Hierarchical Region-Based and TrajectoryBased Clustering, Proceeding on PVLDB-08, pp. 10081-1094, 2008. [8] Liu, B., Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data, Springer-Verlag Berlin Heidelberg, 2007. [9] Martin E., Hans-Peter K., Jorg S., and Xiaowei X., A DensityBased Algorithm for Discovering Clusters in Large Spatial Databases with Noise, Proceeding on KDD-96, pp. 226-231, 1996. [10] Piciarelli, C. and G.L. Foresti, On-Line Trajectory Clustering For Anomalous Events Detection, Pattern Recognition Letters Volume 27, Issue 15, November 2006, Pages 1835–1842. [11] Yan D., Ling H., and Michael I.J., Fast Approximation Spectral Clustering, Technical Report No. UCB/EECS-2009-45, Electrical Engineering and Computer Sciences University of California at Berkeley, March 2009.

73

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Transfer Rules Algorithm for Hierarchical Prase-based EnglishIndonesian MT Using ADJ Technique Teguh Bharata Adji Intelligent System Research Group Department of Electrical Engineering & Information Technology Universitas Gadjah Mada Jalan Grafika 2 Yogyakarta 55281 Indonesia

[email protected] Abstract—It is wellknown that hierarchical phrases in sentences create difficulties to the phrase-based Machine Translation (MT) system. In this work, annotated disjunct (ADJ) is utilized to create transfer rules algorithm for solving hierarchical phrases mapping from English to Indonesian sentences. The transfer rules algorithm are employed to build hierarchical phrase-based English to Indonesian MT system as a platform to test the developed transfer rules. Keywords- ADJ Technique; hierarchical phrase; EnglishIndonesian Machine Translation; transfer rules

I.

INTRODUCTION (HEADING 1)

This paper explains the development of transfer rules for hierarchical of an MT model for two structural different languages. Recently, many statistical MT systems have improved their quality with the use of phrase-based translation [1], such as phrase-based model of the ADJ-based MT system that outperforms other available English to Indonesian MT systems [2]. DeNeefe in [3] declared the strengths of the phrase-based extraction model to increase both the phrasal coverage and translation accuracy of the syntaxbased model. In addition to those research activities, Chiang in [4] presented that a hierarchical phrase-based MT system performed significantly better than the Alignment Template System proposed by Och and Ney [5]. Meanwhile Lopez in [6] claimed that the implementation of hierarchical phrase-based MT system can efficiently find all hierarchical phrase-based translation rules for sentences in the training corpus. Those achievements encourage us to incorporate hierarchical phrase translation method into the ADJ-based method [7] as explained in the following sections, where Section 2 describes literature review. Methodology on how to build the transfer rules algorithm of the hierarchical prase-based MT system using the annotated disjunct is presented in Section 3. The results of this research are in Section 4 and the conclusion is in Section 5. II.

LITERATURE REVIEW

Although there are two proprietary English-Indonesian MT: Rekso Translator1, Translator XP2, and KatakuTM3,

but to date there is no publication found on the discussion of the translation methods suitable for English-Indonesian MT system. Reference [8] reported the use of S-SSTC (Synchronous-Structured String Tree Correspondence) to build an English-Malay MT. This work is referred since Indonesian and Malay languages are having close similarity in terms of phonetic, morphology, semantic and syntactic. In this S-SSTC, the tree structures of both English and Malay sentences need to be formalized before developing the bilingual tree-to-tree sentence mapping. However, the developed English to Indonesian MT system does not consider tree-to-tree sentence mapping, but it utilizes the ADJ for building the transfer rules [8]. Thus, this method shall reduce the processing time significantly since this technique considers the parsing algorithm for the source language only. The source word’s ADJ represents the level and the element (Subject, or Object, or Verb, etc.) of each word. Based on this ADJ, transfer rules are subsequently built with respect to the target sentence structure. Although problems arise during the decomposion of hierarchical phrases, however this problem can be solved by the implementation of hierarchical phrase-based transfer rules into the ADJ-based MT system as discussed in following lines. The objectives of this research are to build transfer rules for English to Indonesian MT system and to perform comparison between the MT system and two proprietary software. III.

When a first group phrase is found, first group transfer rules process and translate the phrase into a correct first 2

1

74

http://reksotranslator.com

METHODOLOGY

300 training sentences are used in this research. After looking through these training data then they can be assumed to have three layers of phrases maximally. We calls the first top phrase layer (or a phrase that consists of two phrase layers) as the third group phrase, the second top phrase layer (or a phrase that consists another phrase layer) as the second group phrase, and the last layer from the top (or a phrase that no longer consists any phrase layer) as the first group phrase. Each of these groups is solved by generating first group transfer rules, second group transfer rules, and third group transfer rules respectively as explained in [9].

3

http://translatorxp.com http://www.toggletext.com/kataku_trial.php

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Yogyakarta, 12 July 2012

of the second word “will” (W2). The phrase starts with “will” is not classified into any group of phrase. The evaluation goes to the first word “Where” and it is found that the phrase starts with a phrase “Where will” satisfies the condition in line 2.3. This means that the phrase started with “Where will” and ended with the punctuation “?” is of the third group phrase in which the third transfer rules is applied to translate the entire sentence into “Kemana mobil merah itu akan pergi?”. Xp

Algorithm 1 Hierarchical Phrase-Based Transfer Rules Algorithm Obtain source words and annotated disjuncts from ADJ set. 2. For each Wi (i = n to 1), do steps 2.1, 2.2, and 2.3. 2.1 If first group source phrase is found, then apply first group transfer rules. Decrement i, then do step 2.2. 2.2 If second group source phrase is found, then apply second group transfer rules. Decrement i, then do step 2.3. 2.3 If third group source phrase is found, then apply third group transfer rules.

I SIs D

1.

Q

Wq LW Where ↓ W1

will ↓ W2

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

the ↓ W3

red ↓ W4

car go ? ↓ ↓ ↓ W5 W6 W7

RW

Figure 1. An input English sentence that consists of a hierarchical phrase

IV.

RESULTS AND ANALYSIS

MT Systems Precision

80.00 60.00

50.74 58.32 58.66 67.61 69.49 69.87

100.00

precision (%)

The variable values used in Algorithm 1 can be understood by seeing Fig. 1. In this Hierarchical PhraseBased Transfer Rules Algorithm, line 1 obtains source words (i.e. English words) and their annotated disjuncts. Line 2.1 identifies all first group English phrases, which consist of words (e.g. prenominal adjectives, superlative adjectives, and noun-modifiers) that modify nouns. If the input sentence and its linkage are given as in Fig. 1, the seventh word “?” (W7) is evaluated first and it is found that “?” is not in the first group phrase. Since the condition in line 2.1 is not fulfilled then it goes to line 2.2, which checks that “?” is not in the second group phrase either. Line 2.3 also checks that “?” do not belong to the third group phrase and the step goes back to line 2 again to decrease i to allow the evaluation of the sixth word “go” (W6). Line 2.1 until 2.3 also found that “go” is not in any group of phrase. The same results obtained in the evaluation of the fifth word “car” (W5) where this word also does not belong to any group of phrase. The evaluation goes to the fourth word “red” (W4) to find that the phrase “red car” is identified as the first group English phrase since “red” has an empty left connector and A right connector. These two connectors describe that “red” is the adjective of “car”. This indicates that “red car” is an adjective-noun phrase which is then translated into the Indonesian words “mobil merah” by the first group transfer rules. The number stored in variable i is then decreased by 1, meaning that the third word “the” (W3) is now being evaluated in line 2.2, which identifies that this word is the starting point for the second group English phrase (i.e. phrases that consist of demonstrative pronouns or determiner “the”, possessive adjectives, and possessive nouns) because it has an empty left connector and a D right connector. The second group transfer rules, then translate “the red car” (the second group English phrase) into a grammatical Indonesian phrase “mobil merah itu”. The value of i is decremented by 1 to allow the evaluation

RW

A

40.00

33.18 43.91 44.91 53.90 56.36 56.64

group target phrase. The transfer rules then identify other sequence which can be categorized as the second group phrase, and translates the phrase into second group target phrase while repositioning the first group target phrase inside the second group target phrase. Finally, the transfer rules categorize other sequences of the third group phrase, translate the phrase into the third group target phrase, and reposition the second group target phrase inside the third group target phrase. The hierarchical phrase-based transfer rules algorithm can be expressed as Algorithm 1.

ISSN: 2088-6578

41.72 50.68 51.40 60.36 62.63 62.97

CITEE 2012

20.00 0.00 3-gram

4-gram

Rekso Translator Kataku TM Google Translate ADJ-based system ADJ-based phrase translation system ADJ-based hierarchical phrase translation system

5-gram phrase (n -gram) length

Figure 2. Comparison of hierarchical phrase-based system with other system

The precision of all tested systems in BLEU metric is demonstrated in Fig. 2. It is shown that the hierarchical phrase-based system precision increased with a slight difference higher than the previous phrase-based system precision for 3-gram, 4-gram, and 5-gram with 0.38%, 0.34%, and 0.28%, respectively. Most of the solved and unsolved cases that were found in the previous phrasebased system were also found in the hierarchical phrasebased system. The numbers of transfer rules generated in the hierarchical phrase-based translation system are 32 only. These are fewer than those generated in the phrase-based system. Moreover, the algorithm hierarchy makes ease the

75

ISSN: 2088-6578

Yogyakarta, 12 July 2012

effort of generating and managing the transfer rules as the rules can be classified into simple (or the first group), more complex (or the second group), and most complex (or the third group) rules. The disadvantage with the hierarchical phrase-based translation is the algorithm complexity of O(n6) compared with the phrase-based of O(n5). However, it is indeed possible to gain insight of the hierarchical phrase-based transfer rules and adapt them into the phrase-based MT system so that we can get all the advantages of the hierarchical phrase-based module and at the same time obtain the algorithm complexity of O(n5). V.

76

such as the word class are of possible approaches to address this problem. REFERENCES [1]

[2]

[3]

CONCLUSION AND FUTURE RESEARCH

The hierarchical phrase-based transfer rules algorithm can generalize the transfer rules for similar translation cases hence reduce the number of transfer rules. This will ease the effort of transfer rules generation. The accuracy depends on the number of transfer rules. Higher accuracy is expected to be achieved with more example data fed to the MT system for generating more transfer rules, instead of using only 300 bilingual sentences. The developed system accuracy is 69.87%. The hierarchical phrasebased MT system translated simple, compound, and complex English sentences in present, present continuoues, present perfect, past, past perfect, and future tenses with better precision than other systems. The hierarchical phrase-based MT system still cannot solve several phrases which are found mostly in interrogative sentences, adjective clauses, negative sentences, and passive sentences. Other issues for future work can be discussion on phrases not found in the example data, such as sayings and phrasal verbs. Ambiguous Indonesian meaning of the English words for some cases could not be solved with the disjunct annotation. Hence, ontology or other parameters

CITEE 2012

[4]

[5] [6]

[7]

[8]

[9]

Koehn, P., Och, F.J., Marcu, D. “Statistical Phrase-Based Translation,” In: HLT – NAACL, Main Papers, pp. 48-54, Edmonton, May-June 2003. Adji, T.B., Baharudin, B., and Zamin, N., “The Development of Phrase-Based Transfer Rules for ADJ-Based Machine Translation,” International Journal of Recent Trends in Engineering Vol. 2, No. 1, Nov, 2009. DeNeefe, S., Knight, K., Wang, W., and Marcu, D. “What Can Syntax-based MT Learn from Phrase-based MT?,” Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, ACL, Prague, June 2007, pp. 755–763. Chiang, D., “Hierarchical Phrase-Based Translation,” Computational Linguistics, Association for Computational Linguistics, vol. 33, no. 2, pp. 201-228, 2007. Och, F. J. and Ney, H. “Improved statistical alignment models,” In: 38th ACL, 2000. Lopez, A., “Hierarchical Phrase-Based Translation with Suffix Arrays,” in 2007 Proc. Joint Conf. on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Jun., pp. 976–985. Adji, T.B., Baharudin, B., and Zamin, N. “Annotated Disjunct in Link Grammar for Machine Translation,” In: ICIAS (International Conference on Intelligent & Advanced Systems), KL Convention Centre, Kuala Lumpur, 25-28 Nov. 2007. Adji, T.B., Baharudin, B., and Zamin, N. “Building Transfer Rules using Annotated Disjunct: An Approach for Machine Translation,” In: 5th SCOReD (Student Conference on Research and Development), Malaysia, 11-12 Dec. 2007. Adji, T.B. “Hierarchical Phrase-based English – Indonesian Machine Translation Using ADJ Technique,” In: International Conference on Scientific Issues and Trends (ISSIT), Yogyakarta, 22 Oct. 2011.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Investigation of Insulation Coordination in EHV Mixed Line A.Eyni1 and A.gholami2 1

Iran University of science and technology electrical faculty

Email: [email protected] 2

Iran University of science and technology electrical faculty

Email: [email protected] Abstract— This paper reviews mixed lines insulation coordination that combined with overhead line and cable lines. Because of basic differences in the overhead lines and underground cable lines, insulation coordination necessary review and rate the voltage added particularly in cases dealing with section lightning overhead line or when a switching intensity to be felt. In this paper overhead line structure has been investigated that the center line cable is used. Since the high voltages switching phenomenon have been analyzed the dominant influence of this kind of added synthetic voltages on the lines. Attenuation of the surge arrester of other voltages can be added to the research in this paper using the PSCAD simulation. Keywords-insulation coordination; high voltage; mixed line; switching overvoltage; temporary ovelvoltage

I.

INTRODUCTION

In recent years the number of underground transmission projects has increased because the progress of technology about cable and insulation has reduced the cost of installation, maintenance and repairs the underground transmission lines, so that in some cases cost of these projects is comparable with overhead lines costs [1]. This progress leads to the increasing demand for installing new lines or expanding existed lines. In order to install new long lines, having a combination of transmission technology, such as series connection of overhead lines – underground cable - Overhead lines is necessary. In fact, underground cables lead to overcome the problems of area restrictions and geographical barriers. In Denmark a new 400 kV transmission line was located between two cities. Length of this transmission line is about 90 km and some natural barriers are in its path, therefore a combination of cable and overhead lines can be used. For overcome the natural barriers, two sections of 4.5 km and 2.8km cable lengths are used in the transmission line [2]. Figure (1) shows the mixed transmission line. These lines have unique features, but their high capacitance causes some problems. One of these problems is that the charging current and capacitive reactive power of cables are added in vector forms to the load current and line power and reduce the maximum permitted line length and transmission capacity of active power. Two major problems occur in the cable systems are as follows: 1) Switching transient overvoltage 2) Resonance and harmonic resonance

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Figure 1. Mixed transmission line

In this paper switching transient overvoltage will be analyzed. In part II the background of theory and the problems are described. After the problems description, the effect of arresters on attenuation overvoltage in the various places of combination transmission system will be analyzed. II.

DESCRIBING THE SYSTEM

Combination transmission lines can have a variety of scenarios. In one of these cases cable lines are in the two bottoms of overhead lines. This case is used when two ends of overhead line end to post, because cable is generally used in the output of transformers located in posts. One of the other most common states of combined lines is a system which two bottoms of cable line end to overhead line. Switching pulse are from performance of breakers and switches in the system. The switching operation divides into three parts: A) Energization B) De_energization C) Reclosing Diagnosis between the first and third part is very important because reclosing operation will be done soon. Energization phenomenon includes powering the components of system like transmission line or cable, transformators, reactors, capacitor banks, etc., without any energy downs in a trap (trapped charge). But in reclosing it is possible that line will be threatened by trapped energy which mostly happens in cables. In this case, overvoltages can be reached to high quantities up to 4 pu. Deenergizing includes debugging and cutting the power and etc [3].

77

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

                

After solving the above equations, V and I equations are obtained as follows:      !     " # Figure 2. Topology of mixed line

A. Mathematical equations Transient state analysis method is based on wave theory of multi-conductor systems. In the over voltages problems caused by switching, ignoring the problems caused by pollution (soil and treating) such as the classic release of wave along coaxial line; is possible although it needs sufficient accuracy for most practical purposes. In this theory, the transmission system is defined in the form of transmission impedance matrix Z series and parallel Admittance matrix Y. Z and Y form are shown below:               

In which R, L, G and C, are parallel capacitance, conductivity parallel, series Raktans and series resistance, respectively. These quantities are n dimensions matrixes when n is the number of parallel conductors of cable systems. Variable ω shows that all of the calculated quantities are functions of frequency. Electromagnetic behavior of cable system with n parallel conductors in the field of frequency is defined by two following matrix equations:     

     

In which V and I are n dimensions vectors and shows voltage and current in the x distance from the cable, respectively. It should not be forgotten that some basic premises are assumed like that the cable is uniform in all its length. As it already mentioned relations R, L, G and C are calculated as functions of ω and geometric qualities and material properties. The problem is calculating the voltage and current using the previous equations, which also depends on the ω. In harmonic issues where only one frequency is considered, ω is presented as angular frequency and also the V and I are formed in phases. To determine the current and voltage vector of the previous equations, the second derivative can be written as any of the followings:

78

  $ "% $&'   !     " ()

 ! and  " are vectors which inherited 2n constant from integral and can be obtained from boundary conditions. In order to analyze transient problems, above equations are being changed in the field of time, by the inverse of Fourier transformation. * 

!,   *  . + "-

For calculating the current in the field of time, like equation (9), current equation in the equation (8) is being calculated by the inverse of the Fourier transformation. III.

SYSTEM DETAILS

System studied consists of two parts as cable and overhead line. Cable information is gathered in the table I. Details of the overhead lines are available in the table II. The length of cable long is 12 km which this distance has been divided to the same 1 Km length cables. It means that the cable section consists of 12 pieces which are connected together with connecting pieces. These connecting pieces are suitable places for installing compensation equipment, measuring equipment, or even surge arrested [4]. TABLE I.

THE PARAMETERS OF POSITIVE COMPONENT OF THE CABLE PART IN THE STUDIED SYSTEM

Line Type

Cable XLPE

Nominal voltage

500Kv

length

12 km

Cross section Apparent resistance

2500 //

13.3 m /Km

Shunt leakance

51.5 nS/km

Indoctance

0.576 mH/Km

Capacitance

0.234uF/Km

Ampacity

1788 A

Cable type and how it is located in the ground and also the circumstances surrounding the cable affect the cable

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

parameters to some extent and should be considered in calculation and simulation. These data's are shown in table III. TABLE II.

THE PARAMETERS OF POSITIVE COMPONENT OF THE OVERHEAD LINE IN THE STUDIED SYSTEM

Line type

ACSR single line

Length

12km-20km

Nominal voltage

500Kv

Apparent resistance

23.1 m /Km

Shunt leakance

10 nS/km

Indoctance

0.858 mH/Km

Capacitance

0.0133uF/Km

Ampacity

2220 A

ISSN: 2088-6578

05 

26 "24 26 !24

(11)

In these equations, 78 and 79 are, respectively, the receiver and transmitter impedance and 7: is surge impedance of line. In this case, the switching phenomenon is considered at the beginning of the line which means switching pulse is created in the overhead part of transmission line, transferred to the cable part and shoot at the end of the line.

Figure 3. Equivalent circuit of combined line

Used cables are single centered and buried in the depth of a meter in the ground. Cables can be installed inside tunnels. In this case, cable will exchange heat with the surrounding air and heat resistance of the soil is not considered. TABLE III.

INSTALLED CABLE STATUS OPERATING

Opretion frequency

50 Hz

Conductor temperature

85 ◦c

Air Temperature

25 ◦c

Soil temperature

35 ◦c

Earth resistance

175 ◦c/w

Load factor

1

Cables away from the bed

200mm

Burial depth

1m

A. Ladder diagrams In the previous section details of the system were brought. This section is trying to use ladder diagrams in order to verify how and calculate the amount of over voltages which are produced by switching and transformation of switching pulse from overhead section to cable section. When the switch of applied voltage to the line is closed, voltage wave with current wave will start to move on the line. When these waves arrive to the bottom of the line, due to status of the line's bottoms, they reflect [5]. Also if the switching wave transfers from one environment to the other environment with different surge impedance, one part of the wave reflects and other part of it will be transferred to the other environment. Equations (10) and (11) represent reflection and transmission coefficients respectively: 01 

23 "24 23 !24

According to the information of table II surge impedance for overhead line is obtained as following: 7;<=>[email protected]    . BC D;<=>[email protected]  EE   ) F

I G%  H  #K   LC J With data of circuit's cable part from table I, the surge impedance for the cable line will be in the form: [email protected]=    )E. BC [email protected]=  EE   # F

I G  H  .#K  E)LC J Now with having the impact of impedance for the studied system and with using the relations (10) and (11), radiation and reflected coefficients in each part of the system can be calculated. Finally, the ladder diagram of system will be shown as figure (4). Transfer and reflect coefficients of each section of the line are separately calculated and gathered in the table IV. As the table implies, transient pulse are moderated and reducing their harmful effects when crossing overhead line to the cable. Due to low current impedance ratio on the overhead line connected to it, something similar occurs for the current in this section; which means transient pulse which arrive to the head can be adjusted. As well as Section 2, the rate of over voltages caused by switching may also reach to 4 pu.

(10)

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

79

ISSN: 2088-6578

TABLE IV.

Yogyakarta, 12 July 2012

TRANSMISSION AND REFLECTION COEFFICIENTS OF STUDIED SYSTEM

Transfer coefficient 0.31 1.69 0.23

Reflection coefficient -0.69 0.69 -0.77

0% 0 0P

CITEE 2012

Due to the obtained results of mathematical analysis of the problem, in order to prevent the damage of equipments from effect of switching over voltages, using arresters with suitable characteristics for the system is required. Arresters can be placed in different parts of their exposure, but the optimum location is important. This point should be considered that in the cable part of the system, only in the section of the cable connection, using technical and practical tips; is the possibility of installing arresters. With these descriptions, two major locations for installing arresters are chosen. First location, arresters are installed at the two ends of cable line; and the second location, installing arresters at the tow ends of overhead line. These places are shown in the figure (5). In this paper only the case of installing the arrester at the two ends of overhead line is simulated, and the results are given in the next section. IV.

SIMULATION

In this section, simulation of the previous materials which were introduced in theory, is paid. At first, a switching phenomenon will be simulated. Switching has two parts: opening the switch and closing it. In both cases over voltages can be seen. Switching is done by the power switch which is at the beginning of the line. Switch settings are in the way that they open or close in 0.2 seconds. Figure (6) and (7) display the created wave while closing and opening the switch, respectively. Main : Graphs

Figure 4. ladder diagram of transient switching waves

Load Voltage

1.0k 0.8k 0.5k 0.3k kV (Volt)

B. Use the arresters Although increase in the resistance insulation of network equipments, increases the reliability and stability of the network, but it leads to the big project and increase in the economic costs. Therefore, the issue of reducing the over voltages and its methods will be discussed. In order to protect the equipments of HV from system's over voltages, arresters are used often [6]. Currently, the ZnO arresters have the best performance in protecting power equipments from over voltages.

0.0 -0.3k -0.5k -0.8k -1.0k 0.180

0.200

0.220

0.240

0.260

0.280

0.300

... ... ...

0.50

... ... ...

Figure 6. over voltages cause by closing switch

Main : Graphs Load Voltage

0.6k 0.4k 0.2k

kV (Volt)

0.0 -0.2k -0.4k -0.6k -0.8k -1.0k -1.2k 0.00

0.10

0.20

0.30

0.40

Figure 7. over voltages cause by opening the switch

It can be seen that over voltages created by opening the switch is over 2.5 times of the amount of voltages. While closing the switch, this over voltage is less established and is about twice the voltage of system. Figure 5. Position of arresters: a) at the end of the cable line b) end of overhead line

80

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

Arresters used in the system adjust these over voltages. How to reduce the over voltages by the arresters, is shown in figure (8) .

ISSN: 2088-6578

endurance of equipments, in order to imposed over voltages create no disruption in the performance of system. Other, reducing and controlling the range of over voltages created in system by limiting and protective equipments like arresters.

kV (Volt)

Main : Graphs 500 400 300 200 100 0 -100 -200 -300 -400 -500 -600 0.00

Load Voltage

0.10

0.20

0.30

0.40

0.50

0.60

0.70

0.80

0.90

1.00

... ... ...

In this paper over voltages caused by switching on the overhead transmission lines which cable lines are used in some parts of them, has been paid. Then the effect of arresters on these over voltages is shown by simulation. The results of this paper imply that in combined lines; over voltages will be attenuation after arriving the cable part of transmission line and their risks will also decrease. But this attenuation is not enough to not requiring the protective equipments.

Figure 8. effect of arresters on attenuation of over voltages while opening the switch

REFERENCE [1]

Main : Graphs 600

Load Voltage

400

[2]

kV (Volt)

200 0

[3]

-200 -400 -600 0.160

0.180

0.200

0.220

0.240

0.260

0.280

0.300

0.320

Figure 9. effect of arresters on attenuation of over voltages while closing the switch

V.

... ... ...

[4] [5]

CONCLUSION

Unwanted over voltages is the most important factor of threatening isolation of equipments in power networks. In designing a power system, the access to a network with high reliability and stable from performed over voltages like lightning and switching, is important and gaining this goal is possible in two ways: One, increasing the electrical

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

[6]

R. Benato, A. Paolucci “Operating capability of ac EHV mixed lines with overhead and cables links” Elect. Power Syst. Res 00 (0) (2005) 00-00. P. Argaut, K. B. Larsen, E. Z. Prysmian, A. Gustafsson, F .Schell, V.Waschk “Large Projects Of EHV Underground Cable Systems” paper A.2.5, Jicable 2005. S. Rahimi, W. Wiechowski, M. Randrup, J. Østergaard and A. H. Nielsen “Identification of Problems when Using Long High Voltage AC Cable in Transmission System I: Switching Transient Problems” , Transmission and Distribution Conference and Exposition, 2008. T&D IEEE/PES21-24 April 2008. IEC 6007 1 - 1 : Insulation Co-ordination, Part 1 : Definitions principles and rules. 7th edition, 1993 IEC 141-1: Tests on oil-filled and gas-pressure cables and their accessories - Part 1 : Oil filled, paper insulated, metal sheathed cables and accessories for altemating voltages up to and including 400 kV. 3rd edition, Lam-Du, S.; Tran-Quoc, T.; Huyn-Van; Sabonnadiere, J.C.; VoVan-Huy, H.; Pham-Ngoc, L “Insulation coordination study of a 220 kV cable-line” Power Engineering Society Winter Meeting, 2000.IEEE Volume 3, 23-27 Jan. 2000

81

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Unbalance and Harmonic Analysis in a 15-bus Network for Linear and Nonlinear Loads T.Sridevi1 , Dr.K.Ramesh Reddy2 Dept. of Electrical and Electronics engineering

G.Narayanamma Institute of Technology and Science, Hyderabad-500008, INDIA [email protected] 1 [email protected] 2 Abstract—For resolving the harmonic problem, the linear and nonlinear load modelling and its incorporation into the power-flow formulation are required. Distorting loads such as six and twelve pulse converters injects harmonic currents into the system and the load impedance values have an influence in the distribution of these currents. Line voltage symmetry will be disturbed by the asymmetries. To get the accurate analysis of unbalance and harmonics, the network must have unbalanced conditions, 3- models of the load, linear consumptions and AC/DC converters. Present work analyses the unbalance voltage factor and harmonic distortion by incorporating six 3- loads with different impedance configurations and ideal six and twelve pulse converters supplied by unbalanced voltages. Linear and nonlinear loads are incorporated to a 15 – bus network & designed using MATLAB/Simulink. Keywords-unbalance factor, harmonic unbalanced loads, constant impedance type loads.

I.

analysis,

INTRODUCTION

Due to the increased presence of non-linear devices in power electric networks, there is a growing interest in harmonic power flow studies. The linear and non-linear load modeling and its incorporation to the load flow formulation are needed for the resolution of harmonic problem. During the operation of electric power supply systems the 3- voltage at the terminals of the loads is expected to be symmetric, but asymmetries such as untransposed transmission lines, unequal 3- loads, large 1- loads etc can produce a certain disturbance on the line voltage symmetry. So in order to have more accurate harmonic power flow problem analysis, the network must consider unbalanced conditions, different load impedance model configurations, linear consumptions and AC/DC converters. This work consists of a 15-bus network having different topologies of P-Q loads and six & twelve pulse converters under unbalanced conditions in order to analyze the expressions for the harmonic currents which are injected into the network[1]. For different cases the conclusions were drawn based on the simulation results. II.

LINEAR & NON-LINEAR LOAD MODELLING

This work studies the 3- modelling of P-Q linear consumptions. Different topologies of P-Q loads are modeled & incorporated to 3- harmonic load flow study. The ideal six & twelve pulse AC/DC converters are studied under unbalanced conditions in order to obtain analytical expressions for the harmonic currents which are injected into the network. The 3- harmonic load flow formulation developed considers no harmonic interaction between network and non-linear loads to incorporate easily the load modelling and to study their influence in

82

the harmonic voltage distribution problem is solved sequentially as •

in the system. The

AC-DC load flow problem It provides the fundamental voltage in all the load buses and converter firing angles for non-linear loads. These calculations provide active &reactive power flows, bus voltage magnitude and their phase angles at all the busses for a specified power system and this information is essential for the continuous evaluation of the current performance of the system and for analyzing the effectiveness of alternative plans for system expansion to meet increased load demand.



Harmonic penetration Harmonic penetration provides the harmonic voltages. For selected bus-bars when the system is subjected to current unbalance and circuit configuration changes, then this results impedance of sequence voltages.

A. Constant impedance type loads Single, double and three phase loads could exits at any bus. Three types of loads can be specified and any combination of them may exist at each bus. Out of three types of loads (constant power, constant current & constant impedance) constant impedance loads are selected for the present work. These types of loads may be connected in delta (∆), Υ with floating neutral, Υ with grounded neutral or Υ with neutral grounded through an impedance. ∆ connection is equivalent to Υ with floating neutral and hence treated in the same manner. P-Q load fundamental impedances per phase F, for the P-Q consumption buses c, will be calculated from the fundamental voltages and currents of the 3 phases (H=A,B,C) and the load type tl, as Z1Fc = f(V1Fc, I1Fc, tl) = R1Fc+ jX1Fc ------ (1) The harmonic impedances per phase can be calculated by multiplying the harmonic order with reactive term. Ground connected impedance Υ (PQ.ZN), Ground connected Υ (PQ.NT) and Isolated Υ (PQ.NA) are the three 3- load configurations that were considered. Load unbalance could be caused by balanced 3- load running at a fault condition such as phase open or short fault. Source unbalance may be caused by a large load unbalance & non- uniform source output impedance. For PQ.ZN and PQ.NT, expressions that has to be verified are V1Fc= Z1Fc I1Fc+ Z1Nc(I1Nc) SFc =

P1Fc

+Q

1

1

Fc

1

= V Fc( I Fc)

------ (2) & *

------ (3)

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

For PQ.NA, loads must be verified such that I

1

1

Ac+I Bc+

1

I

Cc

= 0. ------ (4)

ISSN: 2088-6578

Simulated values were compared with calculated values for all the cases and sample calculation is given for one case.

By that this type of load results constraint on load flow as III.UNBALANCE -------- (5) This type of load causes open circuit faults. For the present work, ground connected load(PQ.BIF.NT) & isolated load(PQ.BIF.NA) are considered under bi-phase load configuration. Ground connected load (PQ.BIF.NT) is a particular case of PQ.ZN where c phase is open. As the consumption of non connected phase is null SHc=0 . This type of load leads to L-L-G faults. In PQ.BIF.NA in between F and H a load is connected .The impedance value can be determined with the help of potential difference

FACTOR

AND

TOTAL

HARMONIC

DISTORTION

Voltage unbalance is mainly affected due to load system rather than power system. Unbalanced voltage will draw a highly unbalanced current and results in the temperature rise and the low output characteristics at the machine. It is necessary to analyze correct voltage unbalance factor for reduction of side effects in the industrial sites. For that voltage unbalance factor at each node is checked. Voltage unbalance factor is the ratio of the negative sequence voltage component to the positive sequence voltage component.

is the restriction imposed on load flow due to the expression I1Fc+ I1Hc = 0. In PQ.NT with two non-connected phases we can obtain a single phase load (PQ.MON). As B and C are the non connected phases currents are zeros in them. This type of load causes L-G faults. The series load model is best used to represent individual loads [2]. To reduce unbalance in the system, we have connected grounded wye to grounded wye transformer at generator side and delta to grounded wye transformer at the load side. The voltages are transformed to phase connection during simulation as the given data are in line to line. B. Non linear loads In the present 15 bus network, two non-linear loads (six pulse converter & twelve pulse converter) with ID and PTOT are used. From the equivalent six pulse converter circuit considering ideal rectifier behavior, the current ID has no ripples & the thyristor commutation is instantaneous. With unbalance supply voltage of the converter, the instantaneous line to neutral emf’s of the sources are where F=A, B, C RMS value of the source current

Output voltage

is the three phase current sum. Unbalance factor and 3- current sum of the different loads are summarised as TABLE I.

UNBALANCE FACTOR & CURRENT OF DIFFERENT LOADS

LOAD TYPE

4

PQ.ZN

1.5841

99.1

6

PQ.NA

0.4465

0

8

PQ.NT

0.4821

119.16

9

PQ.MON.NT

5.238

283.7

10

ACDC.6P

5.443

0

11

PQ.BIF.NT

5.648

740.45

13

PQ.MON

0.948

2779.6

14

ACDC.12P

0.936

0

15

PQ.BIF.NA

0.924

0

where

is the maximum value of line voltage & α=firing angle of thyristor RMS value of the voltage ------- (6) Line to line voltages, there by phase voltages and phase currents can be determined. Harmonic current calculations for the converters are

The current sum is found null for the isolated loads like PQ.NA, PQ.BIF.NA, six pulse converter and twelve pulse converter. The current sum is observed high at the single phase load PQ.MON connected at node 13. Output from Node 4 for both 3- current sum Fig.1 and unbalance factor Fig.2 are shown as sample.

Figure 1. 3-

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

I1

NODE

current sum from node 4

83

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Figure 2. Unbalance voltage factor at node 4

Figure 3. Considered 15 bus network.

84

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Figure 5. Converter Voltage waveform

In order to define the level of harmonic content of the alternating signal THD is used. THD is the summation of all harmonic components of the voltage or current waveform compared against the fundamental component of the voltage or current wave. It is commonly defined as an amplitude ratio rather than power ratio. -----(9)

Figure 4. Simulated THD voltages (sample)

The fundamental current is that is 0.78ID for six pulse converter and 1.56ID for twelve pulse converter. With obtained value of firing angle = 25.11 degree; typical values of characteristic harmonics at different orders are observed for six and twelve pulse converter. Firing angle for both 6 and 12 pulse converter are same as both are connected to interconnected system TABLE II.

TOTAL HARMONIC DISTORTION FOR DIFFERENT PHASES

H

6 Pulse

Figure 6. Six pulse Conerter Current waveform

12 Pulse

A

B

C

A

B

C

3

2.42

2.85

5.274

0.55

0.56

0.16

5

21.1

21.2

16.85

0.23

0.23

0.23

7

12.9

12.6

16.15

0.23

0.23

0.23

9

2.41

2.83

5.142

0.55

0.56

0.18

11

10.0

10.1

5.425

9.16

8.67

9.37

For the six pulse convertor, current was found maximum at 92 A and voltage wave raises peak at 584 volts and for 12 pulse convertor current was found maximum at 121 A and voltage wave raises peak at 584 volts.

Figure 7. Twelve pulse Conerter Current waveform

Typical FFT windows for phase A of six pulse converter and phase B of 12 pulse converter are shown.

Selected signal: 5 cycles. FFT window (in red): 1 cycles 100 0 -100 0

0.02

0.04 0.06 Time (s)

0.08

0.1

M a g (% o f F u nd am e nta l)

Fundamental (50Hz) = 120 , THD= 27.13% 20

15

10

5

0

0

200

400 600 Frequency (Hz)

800

1000

Figure 8. FFT Window of 6-Pulse Converter (Phase A)

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

85

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

IV. CONCLUSION For the 15 bus network, considering linear loads and non linear loads voltage unbalance factor and harmonic analyzation were done. It is observed that the total power in the network is less that 20 MVA as the maximum power at the generator is 20 MVA. High unbalance voltage factor at single phase load, 3- impedance connected load (PQ.ZN), bi-phase load(PQ.BIF.NT)and six pulse converter were observed. As PQ.ZN is unbalanced load , high voltage unbalance factor resulted at that load whereas the other loads [PQ.BIF.NT & PQ.MON] produced high unbalance factor due to their interconnection with 6-pulse converter. Sum of the currents are simulated at particular loads. It has been observed that the 1- load draws high current. Isolated loads such as 3- isolated load (PQ.NA), bi-phase load(PQ.BIF.NA) draws zero current. The non symmetrical voltages introduces non- characteristic harmonics and the distortions for different phases of non linear loads were studied. Further current unbalance factors [3] can be determined and analyzed for utilization of the data in harmonic mitigation circuits. Selected signal: 5 cycles. FFT window (in red): 1 cycles

APPENDIX(CALCULATION OF PARAMETERS)

100 0 -100 0

0.02

0.04

0.06

0.08

0.1

Time (s)

M ag (% of F un dam e ntal)

Fundamental (50Hz) = 120 , THD= 8.72%

8 6 4

For calculation of line parameters, we require positive, negative and zero sequence parameters. Positive, negative and zero sequence components for Generator, transformers and transmission lines are determined as they are important in determining the fault currents in a three phase unbalanced system. From the given data, sequence components of inductance and capacitance are calculated and used in simulation. For different load calculations[4], the individual phase currents under the presence of harmonics are calculated from

where

2 0

0

200

400 600 Frequency (Hz)

800

Under the presence of

1000

Figure 9. FFT Window of 12-Pulse Converter (Phase B)

Harmonics ZS = RS + j hXS. For S=A,B,C these values are calculated and compared with simulated values. For six pulse converter, ID = 92A; P TOT =45Kw

RMS value of voltage = 403.3V For calculation of line-to-line voltages

phase voltages are calculated from line voltages and individual currents for different phases. Sum of the currents; IA+ IB+ IC =92– 46 -46=0A Simulated value = 0A ( from table 1)

86

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Harmonic calculation: RMS value of “n” harmonic current

Fundamental current As 5th & 7th harmonic are the predominant harmonics in 6-pulse converter Phase –A

Simulated value of 5th harmonic of phase A =21.102 [Table 2] 7th harmonic:

Simulated value of 7th harmonic of phase A =12.919 [Table 2] REFERENCES [1]

[2]

[3]

[4] [5]

[6]

L.Sainz,J.Clua, Jordi “load modelling for unbalanced power flow studies” 8th lntentolional Conference on Harmonics and Quality of Power ICHQP '98, joint& organized by IEEE/PES and NTUA, Athens, Greece, October 14-16,1998. T. J. Demsem, p. S. Bodger. J. Arillaga, “three phase transmission system modeling for harmonic penetration Studies”, IEEE Trans. On power apparatus and systems.vol, Pas-103 no.2, february1984, pp,310-317 M. Grotzbach, W. Dirnberger, R. Redmann, “Simplified predetermination of line current harmonics”, IEEE industry applications magazine, March/April 1995, pp. 17-27 . Power Systems Harmonics by George.J.Wakileh H. Ying-Yi,W.Fu-Ming , “Investigation of impact of different three-phase transformer connections and load models on unbalance in power system by optimization” , IEEE Trans. On Power Systems, Vol 12, No. 2 , May 1997, pp.689-697. http://www.mathworks.in/help/index.html

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

87

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

The Maximizing of Electrical Energy Supply for Production of Steel Plant through Fault Current Limiter Implementation Haryanta* [email protected]katausteel.com

Abstract — This paper present as the result of improvement activity on the protection system of medium voltage 30 kV of Steel Plant, in order to maximize the power supply based on utilization of transformer capacity. Some efforts to maximize the production of steel plant have constrain due to the electrical power supply when covered the peak load especially. The cross section arrangement of electrical network on tie buses as solution and offered the opportunity to increase the production. Regards these arrangement on tie bus rise the risk which come from the short circuit current, if shall be covered by the existing protection devices such as fuses and circuit breakers (CB). These devices unable protecting inrush current from other buses were connected. The reliability of protection system of the bus increased up to 0,65% through the fault current limiter (FCL) implementation on the bus tie. The FCL had certain rating installed in serial between twice bus to these interconnection network. It will effectively limiting the fault energy and better balance of the transformer load. This improvement already implemented in the Steel Plant of PT KS and increased the capacity of electrical power supply on 30 kV tie bus AH, without upgrade all of the bus protection’s devices such as fuses and CBs. After improvement the increasing of power supply increased as proportional to the production rate of steel plant comprise BSP and SSP I up to 15% each plant on average than production of each plants without tie bus. Keywords-component; Fault Current Limiter (FCL),

Current Limiter, Serial Reactor, Tie Bus, ETAP

device permanently, so that is must be replaced. The cost of equipment like circuit breakers and transformers in power grids is very expensive. The ever increasing demand for power delivery lead to the installation of interconnection between the power grids. In this case of a fault, such as a short circuit, the amount of power captured by the short circuit is enlarged and therefore the peak values of fault currents are increased. Fault Current limiters (FCLs) are expected to play an important role in the protection of future power system due to the rising levels of the short circuit currents. The 30 kV electrical network, which is directly supply to the electric arc furnaces of steel plant, had some problems which occurred frequent due to the faults in the electrical network. The faults coming from short circuit in the electrical network and the protection devices on the load side had failed to protect it’s over current. Sometimes after disturbance, the protection devices were broken and should replaced to recover the condition. It cause the reliability of power system network decreased and a lot of time of opportunities the production were loss. II.

THE BUSES OF STEEL PLANT ON OVERVIEW AND POTENTIAL PROBLEMS

The steel plant in this case comprise of billet steel plant (BSP) and slab steel plant I (SSP I), as the part of integrated steel producer of PT KS. Some efforts to maximize the production of steel plant meet the constrain of electrical energy supply. The case is how to increase the potential energy power and especially covered the peak load. Electric power supply was limited caused by each plant just supplied by each bus transformer when the plant on operation. The arrangement of buses which is supply to the steel plant of PT KS as shown below,

* Maintenance Supt. of DR plant of PT KS and Lecture of Electrical Engineering of STT Fatahillah Cilegon.

I.

KDL

G G 1 1 1 1

AD 150kV

INTRODUCTION

Electrical energy is one of the most important forms of energy and is needed directly or indirectly in almost every field. Most of the transformer’s capacity were not utilized to supply the loads optimally so the spare of power capacity let free. Increase in the demand and consumption of electrical energy leads to increase in the system fault levels. It is not possible to change the rating of the equipment and devices in the system or circuit to accommodate the increasing fault currents. The devices in electronic and electrical circuits are sensitive to disturbance and fault may damage the

88

G PLN

others

AH 30kV

SSP I AN 30kV

BSP AM 30kV

AJ

30kV

SSP II

HSM

AL 30kV

AF 30kV

ROUGHING & FINISHING MILL

Fig. 1 Electrical power supply of integrated steel plant PT KS [5] Main feeder steel plant of AH 30 kV have 2 (two) branch at the same platform of grid voltage likes

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

AN 30 kV SSP I and AM 30 kV BSP. The twice buses in this case was connected and possibly the power flow across to bus each other to covered the increasing of power demand. The arrangement of the Steel Plant bus as power supply to BSP and SSP I of PT KS plant shown as single line diagram below,

ISSN: 2088-6578

Table 2. The data of furnace transformer of BSP [5] Transf ormer

Cap. (MVA)

Max Cap. (MVA)

Connec / Devide

T.EAF I

60

66

T.EAF II

30

36

T.EAF III

60

66

T.EAF VI

60

66

T.LFu rnace

15

15

∆-∆ 30/0,544 kV ∆-∆ 30/0,544 kV ∆-∆ 30/0,544 kV ∆-∆ 30/0,544 kV ∆-∆ 30/0,320 kV

Impe dance (%) 7,25

7,25

7,25

7,25

7,25

Fig.2 Single line diagram of Steel Plant AH [5] The demand of electrical power on billet steel plant (BSP) and slab steel plant I (SSP I) supplied by bus of Steel Plant AH 30 kV, connected to 5 feeders as incoming and 9 feeders as outgoing and 1 feeder as spare.

Table 3. The data of furnace transformer of SSP I [5] Transf.

Cap. (MVA)

Max Cap. (MVA)

Conn / Devide

T.LF I

30

36

T.LF II

30

36

T.EAF V

60

66

T.EAF VI

60

66

T.EAF VII

60

66

T.EAF VIII

60

66

∆-∆ 30/0,420 kV ∆-∆ 30/0,420 kV ∆-∆ 30/0,544 kV ∆-∆ 30/0,544 kV ∆-∆ 30/0,544 kV ∆-∆ 30/0,544 kV

Table 1. The data of main transformer Steel Plant AH [5] Transformer AV.01

Capacity (MVA) 80

AV.02

80

AV.03

80

AV.04

80

AV.05

80

Connection / Devided Y-∆ 150/30 kV Y-∆ 150/30 kV Y-∆ 150/30 kV Y-∆ 150/30 kV Y-∆ 150/30 kV

Impedance (%) 9,5 9,5 9,5 9,5 9,5

The type of fifth transformer of steel plant AH, comprised AV 01 to AV 05 are similar type included the capacity, connection and impedance value, which are dispatched into 2 (two) units supply to BSP and 3 (three) units supply to SSP I. These capacity not utilized optimally when the bus BSP and SSP I connected independently to the load. As loads of power grids on the steel plant comprising BSP and SSP I are electric arc furnaces (EAF) which it supplied directly by furnace transformer with each data shown below,

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Imped ance (%) 7,25

7,25

6,2

8,6

8,6

8,6

Some effort had been performed to increase the capacity of electrical power network and reliability of protection system to fulfill the demand of electrical power. The optimizing of production on BSP and SSP I had done, through the optimizing of transformer capacity utilization without the new investment for transformer. In these case had a risk as consequently if it still used conventional protection devices likes fuse and main circuit breaker (MCB). The twice of devices unable protect the network when the short circuit in the electric networks. It’s fault was rising frequent from the load

89

ISSN: 2088-6578

Yogyakarta, 12 July 2012

side. The most dangerous of fault current especially at first 0,5 cycle which is the amplitude too high (up to 4 times of full load). The short circuit cause a blackout to the whole systems included the plants were connected. In this case, some of protection devices were broken and must be replaced such as detonator of current limiter as expensive part. The other case is relating with the operational constrain which are each plant supplied by a bus. BSP supplied by 2x80 MVA so as the effective capacity is 128 MVA and SSP I supplied by 3x80 MVA so as the effective capacity is 192 MVA. Based on the effective capacity, the operation of BSP has 2 (two) EAF and 1 (one) ladle furnace, and the operation of SSP I has 2 (two) furnace and 2 LF or 3 furnace and 1 LF. These mode does not the optimal production referring the available of plant facility and cost of product competitively. III.

CITEE 2012

This application provides full protection during a fault and load continuity following a fault. Bypassing FCL can eliminate the continual losses and regulating the voltage drop associated with current limiting reactor. The FCL concept and its characteristics are, a. The saturated core of FCL utilizes the large different permeability of magnetic material. b. High permeability material allow a low impedance during normal operation and very high impedance during fault current events condition. c. FCL as fault impedance device which it define as the steady state equivalent impedance which would result in the same fault current limiting effect, until 50% fault current reduction.. The characteristic core of serial reactor regarding the intensity of flux - magnetomotive force (B - H) and permeability - magnetomotive force (µ - H), drawn like below,

THE SERIAL REACTOR

Serial reactor as fault current limiter (FCL) using high temperature superconductors had been commonly applied in protective system in decades. They are effective for controlling peak current and limiting fault energy on utility distribution and transmission networks. Two types commonly used on the electrical protection network are current limiter and serial reactor. The equivalent circuit of serial reactor like arranged below,

Fig.4 The curve B - H and µ - H of serial reactor [3,4] Fault current limiter devices can be applied in a number of distribution or transmission area. Three main applications area include in the main position to protects the entire bus, in feeder position to protects an individual circuit and in the bus tie position. Fig. 3 The equivalent circuit of serial reactor [4] The explanation equivalent circuit of serial reactor is LR as poor inductance, in serial with RR (total of resistance at high frequency) and the capacitor connected to ground ( C1 as input and C2 as output) and capacitor which connected to terminal CR. The benefits of an FCL in these application is a large of transformer capacity on the other bus can be used to meet increased demand of peak load power on a bus without breaker upgrades. It also introduces the concept of using FCL for power quality enhancements on critical circuit where a bus fault may cripple an adjacent unfaulted bus.

90

The two buses are tied, yet a faulted bus receives the full fault current of only one transformer and the arrangement like shown below.

Fig.5 The installation of FCL in the bus tie position. [4]

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

The benefit of an FCL in the bus tie position are: - Separate buses can be tied together without alarge increase in the fault duty on either bus during a fault the limiter maintain voltage level on the unfaulted bus. - The result in low system impedance and good voltage regulation. - Excess capacity of each bus is available to both buses, thus making better use of the transformer rating. - By closing system ties through FCL user may improve switching flexibility, better balance transformer load and operation of electric arc furnace with less system voltage sag.

ISSN: 2088-6578

Serial Reactor, [5] Voltage : 30 KV Phase :1 Rated of Impedance : 1,0 OHM Measured Impedance : 1,1 OHM Frequency : 50 HZ Continuous Current : 2500 AMPERE Temp. Rise Class : 155 Thermal : 13,8 KA 1 SEC Mech. Peak : 35,2 KA Volt Drop. : 2,5 KV BIL : 170 KV Short Circuit before Reactor 41 KA/1SEC, ,104,55 KA dynamic peak.

The illustration wave form of fault current with and without fault current limiter shown below, These specification of serial rector as result of join elaboration with manufacture as reference in engineering, procurement and construction of the FCL equipment on the improvement project. The installation of serial reactor on the 30 kV tie bus of BSP and SSP I as protection of network shown on the figure below,

Fig.6 The wave form of fault current which it with and without current limiter [3] The rating of serial reactor determined by the calculation and simulation which it supported by software ETAP Powerstation. In calculation of the short circuit analysis current value of full load took in the normal and fault condition of network. The value of serial reactor represented as poor impedance which is inserted on the tie bus network. The short circuit analysis should be done to looked for the capability of network system when the symmetrical three phase of fault occurring on the network in 0,5 cycle momentary duty and in 4 cycle interrupting duty. In these simulations included the tied bus position on closed mean interconnection network and opened mean independent network.

Fig.7 The view serial reactor’s installation in the steel plant [5] IV. THE OPERATIONAL OF STEEL PLANT AFTER SERIAL REACTOR IMPLEMENTATION The improvement on medium voltage 30 kV of the electrical network has done especially on the protection system of tie bus between billet steel plant (BSP) and slab steel plant I (SSP I) of PT KS. These activities in order to maximize the utilization of transformer capacity in power grid.

As additions when simulating the interconnection network faulted current fill in the algorithm shall took the highest value in order the most safe condition. The value of serial reactor shall choosed the lowest in order the power loss is minimal. The result of calculation and simulation to looked for the rating of serial reactor which is inserted on the tie bus, supported by ETAP Powerstation ver 4 is,

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Fig.8 The arrangement of FCL on 30 kV tie bus [5]

91

ISSN: 2088-6578

Yogyakarta, 12 July 2012

The serial reactor which is basically as inductance component, that is 1 Ohm as value equivalency on engineering by ETAP Powerstation’s simulation and the measured value is 1,1 Ohm. It is new protection unit where installed between bus bar of BSP and SSP I of steel plant and in parallel with existing current limiter device. The new arrangement of FCL comprised a serial reactor and a current limiter devices which it dedicated to limit the overshoot current during a fault and in order to increase the protection’s performance. Basically the FCL will sensing the rate of increasing fault current as based to work. These improvement is more efficient than the replacing of the relays, fuses and circuit breakers on the tie network to increase the protection capacity of buses against the fault current. Moreover the devices have respond time so slowly against the highest fault current in 0,5 first cycle which reach 4 until 5 times the full load. The sequence as illustration of protection was working in responding the short circuit occurred on the 30 kV network system shown below.

Fig.9 The coordination of 30 kV protection system during fault. [5] Based on the result of simulations, when the fault occurred then the maximum of fault current is 49 kAmp which it rise in the network on 0,5 cycle was limited by serial reactor until up to 31,5 kAmp as the current limiter’s rating. The excess of fault current on the next period until 4 cycle, controlled by current limiter then MCB would shut off the switch ignited by relay and to shut down the fault network safely. The reliability of 30 kV bus network increased 0,64% after improvement with the new arrangement. The network became robust against the fault from other plant and the power energy capacity of bus was increased. When operation, the excess capacity of BSP bus (AM 30 kV) can be utilized by SSP I and vice versa, the excess capacity of SSP I bus (AN 30kV) can be utilized by BSP. The potential energy power of the BSP bus determined by the effective capacity of BSP bus is 128 MVA and based on maximum load data when operation of plant consumed 127 MVA (the operation is EAF I and III or IV) so the spare capacity is 1 MVA can utilized by SSP I. When operation of BSP consumed 117 MVA (the

92

CITEE 2012

operation is EAF II and I or III or IV) so the spare capacity is 11 MVA (8,5 % of BSP power’s capacity) can utilized by SSP I to supply 5,7% of full load. On the bus of SSP I have the effective capacity is 192 MVA, when operation of plant is 2 (two) units of EAF and 2 (two) unit of LF will consumed 192 MVA, so no spare of energy power in this case. When operation of plant is 2 (two) units of EAF and 1 (one) unit of LF will consumed 162 MVA, so the spare capacity is 30 MVA (15,6% of SSP I power’s capacity) can utilized by BSP to supply 23,6% of full load. The spare of energy power depend on the mode of operation of the twice plant BSP and SSP I were connected by the 30 kV tie bus. Based on availability of spare energy power so the plants able maximized the production with tie bus connected and supported by the protection circuits from the faults.

V.

CONCLUSION

The implementation of serial reactor able maximize the transformer capacity utilization in order increase the electrical power supply from tie bus to increase the production of steel plant. The production of billet steel plant (BSP) and slab steel plant (SSP) were increased up to 15% on average than before improvement. The FCL comprise serial reactor and current limiter dedicated to limit the overshoot current during the fault especially during the first 0,5 cycle around 49 kAmpere reduced by FCL became 31,5 kAmpere. Reducing the overshoot current target as the circuit breaker rating to switch-off of tie network in order to increase the protection performance. The reliability of 30 kV electrical network included tie bus after improvement increased up to 0,65%.

REFERENCES [1] Glover, Duncan J / Sarma, Mulukutia, Power system Analysis and Design with personal computer applications, PWS-KENT Publishing Company, Boston, 1987. [2] Anderson, PM, Power System Protection, Mc Graw Hill and IEEE Press, New York, 1999. [3] Lahaye D, Cvoric D, Haan De SWH, Ferreira JA; Field Circuit Coupling Applied to Inductive Fault Current Limiters; Conference; Hannover; 2008 . [4] Kunde K, Kleimaier M, Klingbeil L; Integration of Fast Acting Electronic Fault Current Limiters (EFCL) in Medium Voltage Systems; CIRED; Intl. Conference on Electricity Distribution; Barcelona; 2003. [5] Haryanta, Sofwan A, Koordinasi Sistem Proteksi antara Reaktor Seri dan Pembatas Arus, SNPPTI Mercu Buana Jakarta, Februari 2010.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Modelling and Stability Analysis of Induction Motor Driven Ship Propulsion Supari, Titik Nurhayati, Andi Kurniawan Nugroho Electrical Engineering Department – Semarang University (USM) Semarang 50274, Indonesia

e-mail : [email protected] I Made Yulistya Negara, Mochamad Ashari Electrical Engineering Department – Institut Technology of Sepuluh Nopember (ITS) Surabaya 60111, Indonesia

e-mail :[email protected]ts.ac.id; [email protected] Abstract—This paper deals with modelling and stability analysis of an electric marine propulsion system. The electric marine propulsion system consists of a fixed pitch propeller (FPP) driven by a three phase induction motor to generate ship thrust. Electric energy consumed by the induction motor is regulated by power electronic converter operated under field oriented control (FOC). Coefficients of Proportional-Integral (PI) controllers are selected to build currents and speed control feed back systems. The studied electric marine propulsion engine is a stable system with gain margin (GM) of 30dB, phase margin (PM) of 80 degrees and available additional gain factor of 34. Keywords-speed control; induction motor; stability; ship propulsion

I.

INTRODUCTION

Motion control systems for ships consist of three levels, i.e. guidance and navigation, high level motion control and low level local control [1]. The low level local control is a controller for local thrusters or propulsion engine. For ships, most of motion control researches focused on high level control. Low level local propulsion controls have been received less attention. However, in the last decade, local propulsion engine controllers have received more attention [1]-[6]. Recently, electric marine thrusters engines become the most interesting ship propulsion engine to be developed under the constraints of energy efficiency and environment issues. In normal condition, low level thrusters controller is developed to track the desired thrust. In FPP with unknown actual thrust, marine propeller thrust is achieved through various strategies. Some researchers developed marine propeller thrust controller based on shaft speed feed back control [7], torque feed forward control, mechanical power feed back control [3] or their combinations [1]-[2]. From the above mentioned strategies, marine thrust controller based on shaft speed feed back control is the industrial standard. II.

induction motor and shaft dynamics, and propeller hydrodynamics. The commanded propeller shaft speed was computed using a reference signal generator G based on the desired propeller thrust. Field oriented controller (FOC) or vector controller is normally applied to achieve high performance dynamics response of the induction motor drives system. The two applicable methods are direct rotor field oriented controller (DRFOC) and indirect rotor field oriented controller (IRFOC). IRFOC is known as the simpler method than DRFOC since it requires only position of rotor flux vector angle and it does not require rotor flux vector amplitude calculation nor rotor flux feed back controller as they are absolutely required in DRFOC [9]. A. Propeller Hydrodynamics Since propellers are the main devices in ship thrusts production, propulsion controls become very important for a whole ship motion control system. Dynamics of ship propulsion engine depends on propeller load and its interaction with the hull and sea water environment. Dynamics modelling of ship propulsion engine becomes complicated because its difficulties in developing its full scale model. Usually, the applied modelling solution was by combining analytic and empiric methods with its simplified model. Actual propeller thrust and torque can be influenced by many parameters, mainly propeller geometry, load and submergence. Propeller load depends on its speed and pitch ratio. Former researcher models propeller thrust and torque for shaft speed controlled FPP system at low advance speed as a function of propeller shaft speed, thrusters parameters (propeller diameter, geometry and position), and time varying propeller state variables (pitch ratio, advance speed and submergence) [2]-[3], [9].

ELECTRIC SHIP PROPULSION

For an electric marine propulsion engine, a fixed pitch propeller (FPP) may driven by a three phase induction motor to generate ship thrust. Electric energy consumed by the induction motor may regulate by a power electronic converter. Configuration of marine propeller thrust controller utilizing FPP with vector controlled induction motor is shown in Fig.1. The controller constructs of a reference generator with inertia and friction compensators,

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Figure 1. Marine propeller thrust controller using vector controlled induction motor

93

ISSN: 2088-6578

Yogyakarta, 12 July 2012

Marine propeller dynamic load have been modelled based on experimental observation [10]. This model especially used for propeller working on zero advance speed, dynamic positioning control and under water robots. However, this model is less accurate in relatively high advance speed. In other version, marine propeller dynamic load have been modelled to operate in all four quadrants considering axial water flow together with rotational speed and inertia of water. Model parameters have been selected based on an experimental data of thrust, torque and water flow [11] and utilizing torque loss estimator [4]. Shaft torque Tp and thrust FT produced by a marine propeller can be expressed as follows:

T p = β QV sign(n a ) K Q ρD n 5

(1)

2 a

FT = β TV sign(n a ) K T ρD 4 n a2 J

(2)

d ω = Tem − T p − T f dt

T f = sign (ω )Ts + k f 1ω x + k f 2

(3)

(4)

(5)

where Va is advance speed, i.e. water flow speed at propeller. The relationships between advance number and KQ and KT are termed as open-water propeller characteristics. Characteristics of standard open-water propeller for un-ducting propeller P1362 determined based on measurements in a towing tank in Marine Cybernetics Norwegian University of Science and Technology (NTNU) Laboratory can be found in [12]. Curves of KQ and KT can be used to analyse propeller working in all four quadrant operation plane constructed of advance speed axis and propeller shaft speed axis. Torque and thrust losses coefficients depend on propeller operating condition. A propeller may experience losses in its thrust production process due to environment influence, ship motion, and propeller-hull interaction. For example, high loaded propeller operated near sea water surface may cause ventilation, where low pressure at the propeller draw air or gas from free surface. Ventilation can cause 80% of propeller thrust loss [12]. In addition, an extreme sea condition may cause the propeller in-and-out of water. This phenomenon can imply high torque and

94

B.

Vector Controlled Induction Motor Dynamics Induction motor can be modelled using d-q axis reference frame rotating with the same speed of phase current frequency (ω = ωe) and it can be expressed in state variable as follow [9], [13]: ⎡ A ⎡ ids ⎤ ⎢− ω ⎢ ⎥ ⎢ e d ⎢ iqs ⎥ ⎢ Lm = dt ⎢λdr ⎥ ⎢ τ r ⎢ ⎥ ⎢ ⎢⎣λqr ⎥⎦ ⎢ 0 ⎢⎣

ωe

B

A

C 1 −

0 Lm

τr

τr

− (ωe − ωr )

−C

⎤ ⎥ ⎡ ids ⎤ ⎡ E 0 ⎤ ⎥⎢ ⎥ ⎢ i 0 E ⎥⎥ ⎡vds ⎤ (ωe − ωr )⎥⎥ ⎢ qs ⎥ + ⎢ ⎢λdr ⎥ ⎢ 0 0 ⎥ ⎢⎣vqs ⎥⎦ 1 ⎥⎥ ⎢⎢λqr ⎥⎥ ⎢ 0 0 ⎥ ⎦ ⎣ ⎦ ⎣ − τ r ⎥⎦ B

(6)

where,

L L R , Rs Rr (1− σ ) , L ω , 1 , τr = r − B = m r2 C = − m r E = Rr Lsσ Lrσ σLs Lr Lsσ σLs Lr

2 and σ = 1 − Lm .

Ls Lr

where KQ and KT are torque and thrust coefficients, βQV and βTV are torque and thrust losses, ρ and D are water density and diameter of propeller, ω and na=ω/2π are propeller shaft speeds in rad/sec and rev/sec, J, x, kf1 and kf2 are constants, Tf and Ts are mechanical friction and static torques. For fully submerged propeller, torque and thrust coefficients are determined through experiments in open water. For specific propeller geometry, torque and thrust coefficients are generally expressed as an advance number, Ja 2πVa V Ja = a = na D ωD

thrust losses, while the propulsion engine remains consume energy from ship fuel. This leads to a waste of energy and decreasing the efficiency of marine propulsion system. Marine propeller control systems for normal and extreme conditions have been introduced to limit the fluctuation of power consumption and decrease propeller wear and tear [1], [9].

A=−

d ω dt

CITEE 2012

The electromagnetic torque, Tem produced on the induction motor shaft can be expressed [13]:

Tem =

3 p Lm (λdr iqs − λqr ids ) 2 2 Lr

Tem − TL =

kf J d ωr + ωr p dt p

(7)

(8)

where, p is number of pole, J is shaft moment of inertia, TL is load torque and kf is friction constant. In IRFOC, the d-q axis reference frame can be assume so that the d axis always in line with the actual rotor flux vector, λr and the q-component of rotor flux vector, λqr is always zero [9]. Equation (6) and (7) can be written as follows: ⎡ ⎤ B ⎥ ⎡ ids ⎤ ⎡ E 0 ⎤ ⎡ ids ⎤ ⎢ A ωe ⎡vds ⎤ d⎢ ⎥ ⎢ ⎥ iqs = − ωe A C ⎥ ⎢⎢ iqs ⎥⎥ + ⎢⎢ 0 E ⎥⎥ ⎢ ⎥ dt ⎢ ⎥ ⎢ L ⎣vqs ⎦ 1⎥ ⎣⎢λdr ⎦⎥ ⎢⎢ m 0 − ⎥ ⎣⎢λdr ⎦⎥ ⎣⎢ 0 0 ⎦⎥ τr ⎦ ⎣ τr

Tem =

3 pLm λdr iqs 2 2Lr

(9)

(10)

From the last row of (6), slip frequency ωs can be defined as follow:

ω s = ωe − ωr =

Rr iqs Lr ids

(11)

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

C. Propeller Shaft Dynamics For ships, propeller shaft dynamics can be viewed from the induction motor shaft and expressed as follows: Tem = J eff

(a) coupled

I ds ( s ) sτ r E + E = 2 Vds ( s ) s τ r + s (1 − Aτ r ) − ( A + BLm )

(12)

In ship propulsion engine, the induction motor can be fed by a current controlled voltage source inverter (CCVSI). Block diagram of CCVSI fed induction motor with proportional-integral compensator is shown in Fig.3. Stator current components can be expressed as follows: s 2 K pτ r E + s (K iτ r E + K p E ) + K i E I ds ( s ) = 3 2 I ds * ( s ) s τ r + s (1 − Aτ r + K pτ r E ) + s (K iτ r E + K p E − A − BLm ) + K i E

(13)

[s τ 2

I qs ( s ) =

r

(16)

k feff = k fm + k G2 k fp

(17)

(

Based on (9), induction motor can be visualized as Laplace transform block diagram as shown in Fig.2a. The d-axis component and q-axis component of stator current are coupled with gain of ωe. At steady state, when the speed of reference frame reaches the stator current frequency, the stator current frequency will zero and a frequency decoupling condition between d-axis and q-axis components of stator current is achieved as shown in Fig.2b. At this decoupled condition, the open loop transfer function (OLTF) between stator current and stator voltage of their d-axis components can be derived as follow:

]

+ s (τ r K i E + K p E ) + K i E I qs * ( s ) + sLm CI ds ( s )

(15)

J eff = J m + k G2 J p

T Leff = k G Ts + k Lp k G2 ω m2

(b) decoupled

Figure 2. IRFOC controlled induction motor: (a) coupled (b) decoupled.

G (s) =

d ω m + k feff ω m + TLeff dt

)

(18)

At constant rotor flux λdr, by equating (10) and (15), the induction motor shaft speed can be expressed as follow: Ω m ( s) =

(sJ

eff

1 + k feff

⎛ 3 pLm λ dr ⎞ ⎜⎜ I qs ( s ) − TLeff ( s ) ⎟⎟ ) ⎝ 4 Lr ⎠

(19)

Equation (19) shows that in ship propeller drives, the induction motor shaft speed depends on q-axis stator current component and the effective load of induction motor. D. Ship Propulsion Engine Dynamics Based on (13)-(19), OLTF of ship propulsion engine can be expressed in block diagram as shown in Fig.4. At constant rotor flux, closed loop transfer function (CLTF) of thrust controller system for ship based on propeller shaft speed control is shown in Fig.5. Block G2 is a function of propeller thrust and thrust loss coefficients, water density, and propeller diameter. It is a reference signal generator, pre-calculated compensator and acts as a feed forward converter from thrust reference signal to shaft speed reference signal [2]. In propeller shaft speed control, Laplace transform of shaft speed error system can be defined as follow: E rr ( s) = Ω p * ( s) − Ω p (s)

(20)

s 3τ r + s 2 (τ r K p E − τ r A + 1) + s (τ r K i E + K p E − A ) + K i E

(14)

Figure 4. Open loop transfer function for ship propulsion engine.

Figure 3. Current controlled induction motor.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Figure 5. Thrust controller system for ship based on propeller shaft speed control.

95

ISSN: 2088-6578

Yogyakarta, 12 July 2012

The steady-state error (SSE) can be determined by using final value theorem and it is found that the drive system will always able to track the desired reference shaft speed since the SSE is equal to zero.

The OLTF and CLTF can be defined as follows: Ω p (s)

Δ Gc(s )G ( s )

Err ( s )

=

{

3 pLm λdr E s 2 K pω K p + s (K pω K i + K iω K p ) + K iω K i

[

]

}

⎧⎪s 4 J eff + s 3 J eff (K p E − A) + k feff ⎫⎪ 4 Lr ⎨ ⎬ 2 + s J eff K i E + k feff (K p E − A) + sk feff K i E ⎪⎭ ⎪⎩

[

]

(21) Ω p (s) Ω p * (s)

= =

Gc ( s )G ( s ) 1 + Gc ( s )G ( s )

{ } ( ) s J 4 L + s 4 L [J K E − A + k ] + s {4 L [J K E + k (K E − A)] + 3 pL λ EK K } + s (4 L k K E + 3 pL λ EK K + 3 pL λ EK K ) 3 pLmλdr E s 2 K pω K p + s (K pω K i + K iω K p ) + K iω K i

4

3

eff

r

r

eff

p

feff

2

r

eff

i

r

feff

feff

p



m dr

i



m dr

i

p



m dr

p

+ 3 pLmλdr EK iω K i

(22) The actual propeller shaft speed depends on the desired propeller shaft speed Ωp*(s) and the effectie load torque TLeff(s) and it can be expressed as follow:

[

]

3 pLm λ dr E s 2 K pω K p + s (K pω K i + K iω K p ) + K iω K i Ω p * ( s ) Ω p ( s) =

[

[

[

]

− k G 4 Lr s s 2 + s (K p E − A) + K i E TLeff ( s )

]

s 4 J eff 4 Lr + s 3 4 Lr J eff (K p E − A) + k feff

]

+ s 2 {4 Lr J eff K i E + k feff (K p E − A) + 3 pLm λ dr EK pω K p }

+ s (4 Lr k feff K i E + 3 pLm λ dr EK pω K i + 3 pLm λ dr EK iω K p ) + 3 pLm λ dr EK iω K i

(23) The propeller shaft speed error system can be determined as follow:

[

]

⎧⎪s 4 J eff 4 Lr + s 3 4 Lr J eff (K p E − A) + k feff ⎫⎪ ⎨ ⎬ Ω p * (s) 2 ( ) + s 4 L J K E + k K E − A + s 4 L k K E ⎪⎩ r eff i feff p r feff i ⎪ ⎭ + k G 4 Lr s s 2 + s (K p E − A) + K i E TLeff ( s )

[

]

III.

SIMULATION SETUP AND RESULTS

To quantitatively analyze the behaviour of the system, parameters used in the marine electric propulsion system are summarized in Table I. Constants for proportional and integral compensators are determined by search method in MATLAB environment. To give system response with fast rise time without overshoot, Kp=1, Ki=500, Kpω=0.075 and Kiω=5e-4 are selected. Based on system OLTF and CLTF, frequency responses of the system are shown in Fig.6. At low frequency less than 50 rad/sec, OLTF gives system gain greater than or equal to 0dB and phase shift of 90 degrees lag. At high frequency greater than 500 rad/sec, proportionally, system OLTF gives system gain decrement with a slope of 40dB per decade and phase shift 180 degrees. The frequency response shows the control system has gain margin (GM) of 30dB and phase margin (PM) of 80 degrees. These positive GM and PM indicate that the studied system is stable. System stability can be observed through roots locus of the characteristic equation as shown in Fig.7. It can be found that all four poles are negative and it indicates that system is stable. In practice, the discrepancy between predicted performance based on numerical model and the actual physical model performance may occur. The discrepancy may increase with respect to the operation time due to the parameter deterioration and normal aging of the devices. The discrepancy implies in the system gain factor, K and shifts system poles locations as shown in Fig.7. TABLE I.

ELECTRIC PROPULSION SYSTEM PARAMETERS Value

Variable

Value

Motor power

5 kW

Volt/Hz

250 V/50 Hz

Speed Rs Rr Ls

1440 rpm 0.877Ω 1.47Ω 165.142 mH

D Z Jp Ts

0.25 m 4 0.003 kgm2 0.25 Nm

(24)

Lr

165.142 mH

kf1

9.12e-5 Ns/radm

Propeller shaft speed reference signal may fluctuate due to control process to increase or decrease the ship speed. In addition, the actual propeller load may experience fluctuation because of in-and-out of water operations due to an extreme condition of sea. A small perturbation on the desired propeller shaft speed or load torque signals will dynamically change the propeller speed error system as follow:

Lm Jm

160.8 mH 0.012 kgm2

kf2

8.3e-4 Ns/radm 1

[

s 4 J eff 4 Lr + s 3 4 Lr J eff (K p E − A) + k feff

[ + s (4 L k

]

]

]

+ s 2 {4 Lr J eff K i E + k feff (K p E − A) + 3 pLm λ dr EK pω K p } r

feff

K i E + 3 pLm λ dr EK pω K i + 3 pLm λ dr EK iω K p )

+ 3 pLm λ dr EK iω K i

[

ΔE rr ( s ) =

[

[

]

s 4 J eff 4 Lr + s 3 4 Lr J eff (K p E − A) + k feff

[

]

]

]

+ s {4 Lr J eff K i E + k feff (K p E − A) + 3 pLm λ dr EK pω K p } 2

γ Bode Diagram

50

]

⎫⎪ ⎧⎪s 4 J eff 4 Lr + s 3 4 Lr J eff (K p E − A) + k feff ⎬ ΔΩ p * ( s ) ⎨ 2 ⎪⎩ + s 4 Lr J eff K i E + k feff (K p E − A) + s 4 Lr k feff K i E ⎪⎭ + k G 4 Lr s s 2 + s (K p E − A) + K i E ΔTLeff ( s )

[

Variable

Magnitude (dB)

[

0

GM

Ω p ( s) Ω p (s) E rr ( s ) Ω p * ( s )

-50

-100 0 -45 Phase (deg)

E rr ( s ) =

-90

PM

-135 -180

+ s (4 Lr k feff K i E + 3 pLm λ dr EK pω K i + 3 pLm λ dr EK iω K p ) + 3 pLm λ dr EK iω K i

-225 0

10

1

10

2

10

3

10

4

10

Frequency (rad/sec)

(25)

96

CITEE 2012

Figure 6. Frequency response of electric marine propulsion system.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

IV.

Root Locus 800

Imaginary Axis

400 200

K=34

0 -200 -400 -600

-500

-400

-300

-200

-100

0

CONCLUSION AND FUTURE WORK

An electric marine propulsion system has been modelled and the stability has been analyzed. The selected system parameters give a stable system with gain margin of 30dB, phase margin 80 degrees and maximum gain factor of 34.

600

-800 -600

ISSN: 2088-6578

100

In this study, the electric marine propulsion system is numerically analyzed in MATLAB environment based on mathematical model approach. However, the discrepancy between predicted parameter based on numerical model and actual physical model may occur. For this reason, analysis of an electric marine propulsion system based on physical model will be concerned in the future work.

Real Axis

Figure 7. Roots locus of system characteristic equation.

It shows that the maximum gain factor is limited to 34. Greater gain factors will shift a pair of system poles to the right hand side of the s-plane and drive the control system become unstable. System response to the step function of reference propeller shaft speed is shown in Fig.8. The control system has fast dynamic response to track the desired input signal within 0.1 second. The controller response to the step function of propeller load torque is shown in Fig 9. This change of load torque is considered as a damped disturbance and will disappear along with increasing time.

ACKNOWLEDGMENT This work was (partially) supported by Competitive Grant (Hibah Bersaing) Foundation Scheme of Directorate General of Higher Education, Republic of Indonesia. REFERENCES [1]

[2]

[3]

Step Response 1

[4]

Amplitude

0.8

0.6

[5] 0.4

0.2

0

[6] 0

0.02

0.04

0.06

0.08

0.1

Time (sec)

Figure 8. Controller step response to the change of reference propeller shaft speed.

[7]

[8]

Step Response 0

[9]

-0.1 -0.2

[10]

Amplitude

-0.3 -0.4

[11]

-0.5 -0.6 -0.7 -0.8

[12] 0

100

200

300

400

500

600

700

800

900

Time (sec)

Figure 9. Controller step response to the change of propeller load torque.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

[13]

Ø.N. Smogeli, “Control of Marine Propellers: From Normal to Extreme Conditions”, PhD Thesis, Department of Marine Technology, Norwegian University of Science and Technology (NTNU), Throndeim, Norway, 2006. A.J. Sorensen dan Ø.N. Smogeli, “Torque and power control of electrically driven marine propellers”, Control Engineering Practice, Vol.17, pp.1053-1064, 2009. A.J. Sorensen, A.K. Adnanes, T.I. Fossen dan J.P. Strand, “A new method of thruster control in positioning of ships based on power control”, Proc. of the 4th IFAC Conference on Manoeuvring and Control of Marine Craft (MCMC), Brijuni-Croatia, 1997. L. Pivano, T.A. Johansen dan Ø. N. Smogeli, “ A Four-Quadrant Thrust Estimation Scheme for Marine Propellers: Theory and Experiments”, IEEE Trans. on control systems technology, Vol.17, No.1, pp.215-226, 2009. Supari, Syafaruddin, I M.N. Yulistya, M. Ashari dan T. Hiyama, “RBFN based Efficiency Optimization Method of Induction Motor Utilized in Electrically Driven Marine Propellers”, IEEJ Trans. Industry Applications, Vol.131 No.1, pp.68-75, 2011. Supari, I M.N. Yulistya dan M. Ashari, “Thrust Control with Rotor Resistance Adaptation for Induction Motor Driven Marine Propellers”, IJAR Vol.3, No.3, Part.I, pp. 61-69, 2011. L. Pivano, T.A. Johansen, Ø.N. Smogeli dan T.I. Fossen, “Nonlinear thrust controller for marine propellers in four-quadrant operations”, Proc. of the American Control Conference, New York City, USA, pp.900-905, July 11-13, 2007. N. Mohan, “Advanced Electric Drives Analysis, control and modelling using Simulink”, Univ. of Minnesota, Mineapolis, 2001. Ø.N. Smogeli dan A.J. Sørensen, “Anti-spin Thruster Control for Ships”, IEEE Trans. On control systems technology, Vol.17, No.6, pp.1362-1375, 2009. M. Blanke, K.P. Lindegaard dan T.I. Fossen, “Dynamic model for thrust generation of marine propellers”, Proceedings of the 5th IFAC conference of manoeuvring and control of marine craft (MCMC’00), pp.363-368, 2000. R. Bachmayer, L.L. Whitcomb dan M.A. Grosenbaugh, “An Accurate Four-Quadrant Nonlinear Dynamical Model for Marine Thrusters: Theory and Experimental Validation”, IEEE Journal of Oceanic Engineering, Vol.25, No.1, Jan 2000. L. Pivano, ”Thrust Estimation and Control of Marine Propellers in Four-Quadrant Operations”, PhD Thesis, Department of Engineering Cybernetics, Norwegian University of Science and Technology (NTNU), Throndeim, Norway, 2008. P.C. Krause, “Analysis of electric machinery”, McGraw-Hill Book Co., New York, 1986.

97

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

A New Five-Level Current-Source PWM Inverter for Grid Connected Photovoltaics Suroso*, Hari Prasetijo*, Daru Tri Nugroho* and Toshihiko Noguchi** * Electrical Engineering Department, Jenderal Soedirman University, Purwokerto, Jawa Tengah 53122, Indonesia ** Electrical and Electronics Engineering Department, Shizuoka University, Hamamatsu, Shizuoka 432-8561, Japan

Abstract--This paper presents a novel circuit configuration of five-level current-source inverter (CSI) used for grid connected photovoltaics. In this topology, an H-bridge CSI is connected with DC current-module working to generate the intermediate level currents of five-level current waveform. Using the proposed topology, the switching power device count, and inductor conduction losses can be reduced. Moreover, in order to reduce the inductor size of the five-level CSI, chopper based DC current sources are presented. The Five-level CSI is tested for grid connected photovoltaic system through computer simulation using PSIM software. The results show that the inverter works properly to inject a sinusoidal current into power grid with less harmonics distortion and with unity power factor operation. Keywords--current-source connection, photovoltaics.

I.

inverter,

five-level,

grid

INTRODUCTION

In general, the inverter topologies can be classified into voltage source inverters (VSI) and their dual circuits, i.e., current source inverters (CSI) [1]. The VSI has a DC voltage power source and generates AC voltage waveforms to the load, while the CSI delivers AC current waveforms from a DC current source to the load. The latter features high capability of short-circuit protection because of its high impedance DC power source. Compared with the conventional two-level inverters, multilevel inverters have various advantages such as lower dv/dt or lower di/dt, and less distorted output waveforms resulting in reduction of output filter size [2]. In distributed power generation applications, as most renewable energy sources, such as photovoltaic system, deliver DC power, the generated power is fed into the grid through a grid connected inverter. Various international standards, like IEEE-1547, IEEE-929 and EN-61000-3-2, impose requirements on output power quality of the inverters, such as harmonic currents and total harmonics distortion (THD) of the output current [2], [3]. Multilevel CSI is a key solution to tackle such problems. Control of the grid connected CSI is comparatively simpler than VSI, as CSI can buffer the output from grid voltage fluctuation, generates a predetermined magnitude of the current to the grid and can achieve a high power factor operation [2], [3]. A grid connected CSI doesn’t need current minor loops to control the AC current, which is indispensable in the VSI. Its output current is less dictated by the grid voltage. Moreover, the discrete diodes connected in series

98

with the power switches to obtain unidirectional power switches used in CSI will be unnecessary in the very near future because reverse-blocking IGBTs are emerging [4]. A few topologies of the multilevel CSIs have been proposed by researchers and engineers. A conventional method to generate the multilevel current waveforms is by paralleling some H-Bridge CSIs as shown in Fig. 1 [5]-[7]. However, the requirement of isolated DC current sources, a large number of power switching devices are fatal drawbacks brought by this configuration. Another topology of the multilevel CSI was proposed by applying a multicell topology of the multilevel CSI (or a multi-rating inductor multilevel CSI [8]. This topology is shown in Fig. 2. Control complexity for balancing control of the intermediate level currents is a problem of this topology. Some control methods have been proposed for balancing control of the intermediate level currents in [9] and [10], but very large in size of the intermediate inductors (100 mH) are still used. These cumbersome inductors will be costly and limit the application of the inverter. Reference [10] presented the configuration of single-rating inductor cell multilevel CSI. The circuit configuration of a five-level single-rating inductor cell CSI is shown in Fig. 3. Both multicell and single-rating inductor cell topologies use very large intermediate inductors added in the inverter circuit to obtain the intermediate level currents instead of the smoothing inductor used for DC current source generation. These intermediate inductors will give additional losses to the inverter instead of losses caused by the main smoothing inductor and power devices. The more intermediate inductors will cause the lower efficiency of the CSI.

Fig. 1. Five-level paralleled H-Bridge CSI

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

Q1 1

Fig. 2. Five-level multicell CSI

Fig. 3. Five-level single-rating inductor cell CSI

(a) (b) Fig. 4. (a) Proposed DC current-modul, (b) Typical output current waveform

Fig. 5. Proposed five-level CSI

In this paper, a new configuration of the five-level CSI used for grid connected photovoltaic (PV) system is presented. The operation performance of the proposed multilevel CSI during grid connected operation is examined through computer simulations. II. CIRCUIT CONFIGURATION AND PRINCIPLE OPERATION A. Operation Principle of Inverter Circuit Fig. 4 shows the configuration of a proposed DC current-modul and its operation principle. The

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

ISSN: 2088-6578

TABLE I SWITCH STATES OF FIVE-LEVEL CSI Q2 Q3 Q4 Q5 Output current 0 1 0 0 +I

1

0

1

0

1

1

0

0

1

1

+I/2 0

0 0

1 1

0 0

1 1

1 0

-I/2 -I

current-modul is composed by a DC current source, unidirectional power switch and a connecting diode. The newly proposed configuration of the five-level CSI is obtained by connecting the H-Bridge CSI and a single DC current-modul as shown in Fig. 5. The DC current-modul work generating the intermediate level currents for five-level output current waveform generation. The amplitudes of the parallel DC current sources in the proposed five-level CSI are I/2, which is half of the amplitude of the DC current source in the three-level H-Bridge CSI. Furthermore, all DC current sources are connected at the same point, which make the isolated DC current sources used in the conventional paralleled H-bridge five-level CSI are no longer necessary in this topology. The DC current source generation will be explained in detail in the next section. The switching state combinations required to generate a five-level current waveform are listed in TABLE I. The more detailed operation modes for five-level output current generation is shown in Figs. 6(a) to (e). The required five-level output current (+I, +I/2, 0, -I/2 and -I current-levels) are generated as follows: 1) Current level +I: Q2, Q4 and Q5 are turned off, while Q1 and Q3 are turned on, making the current +I flow to the load. The two DC current sources are summed up to fed the load. 2) Current level +I/2: Q2 and Q4 are turned off, while Q5 is turned on, making the current +I/2 flows to the load. The current-cell is circulating its current. 3) Current level 0: Q1, Q4, and Q5 are turned on, and Q2 and Q3 are turned off making the current loops for every DC current sources. No current flows to the load. 4) Current level –I/2: Q1 and Q3 are turned off, while Q2, Q4 and Q5 are turned on, making the current –I/2 flows to the load. 5) Current level -I: Q1, Q3 and Q5 are turned off, while Q2 and Q4 are turned on, making the current –I flows to the load. B. DC Current Source Circuit In the proposed multilevel CSI, the DC current sources are indispensable. The DC current source is obtained by employing a DC voltage source fed chopper as shown in Figs. 7 and 8. The chopper works as a regulated DC current source for the inverter and the current-modul circuits. The chopper simply consists of a controlled power switch, a smoothing inductor and a free wheeling diode. The chopper switch functions regulating the DC current flowing through the smoothing inductor, and

99

ISSN: 2088-6578

Yogyakarta, 12 July 2012

reducing the smoothing inductor size, owing to the high-switching-frequency operation. Free-wheeling diode DF) is used to keep continuous current flowing through the smoothing inductor. Fig. 9 shows the configuration of a five-level CSI with chopper based DC current-sources. The inverter is connected to the power grid through power transformer as galvanic isolation between inverter and power grid. The transformer can also works to step-up the output voltage of the inverter. It should be noted that only a single DC voltage source (VPV) is connected to the choppers to obtain two DC current-sources. The DC voltage source in this paper is a photovoltaic system.

(a) Current level +I

CITEE 2012

(e) Current level –I Fig. 6. Five-level output current generation

Fig. 7. CSI with chopper based DC current source

(b) Current level +I/2 Fig. 8. Current-modul with chopper based DC current source

C. Current Controller and PWM Modulation Strategy

(c) Current level 0

(d) Current level –I/2

100

In the proposed five-level CSI, proportional integral (PI) regulators are independently applied to regulate the DC currents flowing through the smoothing inductors L1 and L2. The amplitude of the smoothing inductor current is 50 % of the peak value of the five-level current waveform. The switching gate signals of the chopper switches are generated by comparing the current error signals after passing through the PI regulator with a triangular waveform. These signals are used to adjust duty cycles of the chopper switches to achieve the balanced and stable DC current sources IL1 and IL2. In order to obtain a better output current waveform with low distortion, a pulse width modulation (PWM) technique is applied, instead of a staircase waveform operation. The staircase waveform can easily be obtained in terms of the fundamental switching frequency, so switching losses can be negligibly reduced. However, more distortion of the output waveform is generated and a larger filter is needed as a result.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

In this paper, a level-shifted multi-carrier based sinusoidal PWM technique is employed to generate the gate signals for the CSI power switches to obtain the PWM current waveforms. All carrier waveforms are in phase with each other at an identical frequency. The frequency of the modulated signal (a reference sinusoidal waveform) determines the fundamental frequency of the output current waveform, while the frequency of triangular carrier waves gives the switching frequency of the CSI power switches. Fig. 10 shows an overall control diagram of the proposed five-level CSI including the chopper and inverter circuit controllers. III. COMPUTER SIMULATION RESULTS In order to test the proper operation of the proposed multilevel CSI topology, the five-level CSI configurations shown in Fig. 9 is tested through computer simulations with a PSIM software. The test parameters are listed in TABLE II. Fig. 11 shows the computer simulation result of the proposed five-level CSI when the inverter is connected with a pure sinusoidal power grid voltage, where the five-level PWM current, the current injected into the power-grid (Iinv), the grid voltage (VGrid), and the current flowing through the smoothing inductors (IL1 and IL2) are presented. The five-level inverter works well injected a sinusoidal current into the power grid with unity power factor. The figure also shows the transient waveforms when the current injected by the inverter changed from 4 A to 8 A. The amplitudes of the smoothing inductor currents are well balanced for both smoothing inductors IL1, IL2 at 50 % of the output current peak value. Fig. 12 shows the harmonic spectra of the current injected by the inverter (Iinv). All of harmonic components are less than 1%. Fig. 13 shows the harmonic spectra the power grid voltage. Furthermore, Fig. 14 shows the computer simulation results when the inverter is connected to a distorted power grid. The harmonic spectra of the current injected by the inverter into the power grid (Iinv), and the power grid voltage (VGrid) are shown in Fig. 15 and Fig. 16, respectively. The simulation results show the proper operation of the proposed five-level CSI as a grid connected inverter.

Fig. 9. Chopper based DC current sources of the five-level CSI with power grid connection

Fig. 10. Control diagram of five-level CSI

IV. CONCLUSION In this paper a new circuit configuration of a five-level CSI applying an H-bridge and DC current-modul has been proposed. Using the proposed multilevel CSI, a low distortion of output current with fewer power switches, and smaller inductors has been achieved. The inverter is proposed to be used as grid-connected photovoltaic power conditioner. The proposed system has been tested through computer simulations. The results show the proper operation of the proposed five-level CSI as a grid connected inverter injecting a sinusoidal output current into the power grid with a unity power factor operation. REFERENCES

TABLE II TEST PARAMETERS

Smoothing inductors Power source voltage Grid voltage Switching frequency Filter capacitor Filter inductor Load Output current frequency Transformer ratio

1 mH 160 V 140 V 22 kHz 5 PF 1 mH R = 6.5 : , L =1.2 mH 60 Hz 1:1

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

J. Rodiguez, J. S. Lai, and F. Z. Peng, “Multilevel inverter: a survey of topologies, controls, and application,” IEEE Trans. on Industrial Electronics,” vol. 49, no. 4, p.p. 724-738, August 2002. [2] P. G. Barbosa, H. A. C. Braga, M. C. Barbosa, and E. C. Teixeria, “Boost current multilevel inverter and its application on single phase grid connected photovoltaic system,” IEEE Trans. on Power Electronic, vol. 21, no. 4, p.p. 1116-1124, July 2006. [3] R. T. H. Li, H. S. Chung and T. K. M. Chan, “An active modulation technique for single-phase grid connected CSI,” IEEE Trans. on Power Electronic, vol. 22, p.p. 1373-1380, July 2007. [4] C. Klumpner, and F. Blaajerg, “Using reverse blocking IGBTs in power converters for adjustable-speed drives,” IEEE Trans. on Inductry Applications, vol. 42, no. 3, p.p. 807-816, May/June 2006. [1]

101

ISSN: 2088-6578

[5]

[6]

[7]

[8]

[9]

[10]

[11]

Yogyakarta, 12 July 2012

Z H. Bai, Z. C. Zhang, “Conformation of multilevel current source converter topologies using the duality principle, IEEE Trans. on Power Electronic, vol. 23, p.p. 2260-2267, September 2008. S. Kwak, and H. A. Toliyat, “Multilevel converter topology using two types of current-source inverters,” IEEE Trans. on Inductry Applications, vol. 42, p.p. 1558-1564, November/December 2006. D. Xu, N.R. Zargari, B. Wu, J. Wiseman, B. Yuwen and S. Rizzo, “A medium voltage AC drive with parallel current source inverters for high power application, in Proc. of IEEE PESC2005, p.p. 2277-2283. F. L. M. Antunes, A. C. Braga, and I. Barbi, “Application of a generalized current Multilevel cell to current source inverters,” IEEE Trans. on Power Electronic,” vol. 46, no.1, p.p. 31-38, February 1999. J. Y. Bao, D. G. Holmes, Z. H. Bai, Z. C. Zhang and D. H. Xu, “PWM control of a 5-level single-phase current-source inverter with controlled intermediate DC link current, in Proc. of IEEE PESC2006, p.p. 1633-1638. B. P. McGrath, and D. G. Holmes, “Natural current Balancing of Multicell Current Source Inverter,” IEEE Trans. on Power Electronic, vol. 23, no. 3, p.p. 1239-1246, May 2008. B. Wu, High Power Converters and AC Drives, IEEE Press 2006, Chap. 10.

CITEE 2012

Fig. 12 Harmonic spectra of inverter’s output current

Fig. 13 Harmonic spectra of power grid voltage

Fig. 11. Simulation result of when the inverter is connected with a pure sinusoidal power grid voltage

102

Fig. 14. Simulation result of when the inverter is connected with a distorted power grid voltage

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Fig. 15 Harmonic spectra of the inverter’s output current

Fig. 16 Harmonic spectra of power grid voltage

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

103

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

OPTIMAL SCHEDULING OF HYBRID RENEWABLE GENERATION SYSTEM USING MIX INTEGER LINEAR PROGRAMMING Winasis1, Sarjiya2, T Haryono3 1

Department of Electrical Engineering, Jenderal Soedirman University Jl. Mayjen Sungkono KM 5 Purbalingga Central Java 2,3 Department of Electrical Engineering and Information Technology, Gadjah Mada University (UGM) Jl Grafika no.2 Kampus UGM, Yogyakarta 55281 email : [email protected], [email protected], [email protected]

Abstract — Integration of renewable energy generating units into thermal generation system is an alternative in supplying electrical energy to fulfill demand needs. However, integration of renewable energy technology, which is intermittent such as solar and wind power is faced on many technical and economic impact. One of the problems is how to optimize the generation in order to minimize operation cost while maximizing utilisation of renewable energy resource. This paper proposes a method on short-term scheduling of hybrid generation system consisting of thermal conventional units, photovoltaic systems, wind power, and battery as electrical storage. The objective is to minimize generation cost, i.e. fuel cost and start-up cost while satisfying system, thermal unit, renewable unit, and battery constraints with considering variability of load demand. Optimal scheduling of this hybrid thermal-renewable-battery system is formulated in Mix Integer Linear Programming (MILP) model and solved by Tomlab CPLEX based optimization software. Simulation on test system using 10 thermal generation units showed that this method can effectively solve scheduling problem. Comparing with other methods (LR, GA, and LRGA) using identical test system and scenario, scheduling with this method resulted on lower generation cost by 0.9%. Keywords- Optimal scheduling, hybrid generation system, renewable energy, Mix Integer Linear Programming

I.

INTRODUCTION

Implementation of electric generation with renewable energy source such as solar and wind power recently had a great attention as an alternative beside conventional units due to their advantageous. Except that they are widely available in nature, this kinds of energy can be obtained freely (no fuel cost needed) and also environmentally accepted. On the other hand, penetration amount of renewable energy into thermal generating system is faced on many technical, economical and also operational problem. One of this is the problem in obtaining optimal operation strategy in order to maximize the utilisation of renewable energy source and at the same time minimize operation cost by minimizing fuel usage. In this case, all generation units must be scheduled in such way to achieve the lowest cost possible. Scheduling the system should meet the demand need and satisfy all operation constraints include additional constraints related to renewable energy units.

104

Scheduling generating unit consist of two main related functions: Unit Committment (UC) and Economic Dispatch (ED). Unit Commitment (UC) is the problem of determining which generating units should operate on a scheduling period [1]. Committed units must meet the demand and also reserve requirement at minimum operational cost. While the Economic Dispatch (ED) determine the contribution of the output (dispatch) the power of each unit that works to serve the load at a time with reference to the power unit and system constraints to minimize the cost of production.[2]. In this paper, short-term generation scheduling of hybrid system combining thermal and renewable energy units are discussed. Many papers concerning in the shortterm generation scheduling with renewable energy was published. In the previous work [3], [4] Dynamic Programming based approach and Genetic Algorithm was used in to determine the minimum of the diesel fuel consumption in an autonomous system consisting diesel units, PV module, wind-park and battery. The short-term generation scheduling problem of PV grid connected with battery is presented in [5]. In this work, scheduling with constraints of battery capacity, minimum up/down time and ramp rates for thermal units, and solar PV capacity was solved by the Augmented Lagrangian Relaxation. In [6] Lagrangian Relaxation, Genetic Algorithm, and hybrid algorithm of Lagrangian Relaxation and Genetic Algorithm (LRGA) have been used to find the least operating cost of micro-grid consists of a PV system, wind-park, thermal units and battery bank. Scheduling of micro-grid with constraints include additional reserve requirement due to the uncertainty in the output of renewable resources is presented in [7] On the other hand, Mix-Integer Programming (MIP) model can solve thermal unit commitment problem accurately [8]. This method is recently more interesting because of the drastic improvements in commercial MIP solvers [9]. Some constraints can also be presented as integer or binary hence UC problem is suitable to be written in MIP. An example of MILP application on scheduling generation system with renewable energy source consist of: PV module, wind-park, fuel cell and battery is presented in [10]. In this work, only many kind of renewable energy unit were used and the system is not combined with thermal conventional units.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

This paper describes short-term generation scheduling of hybrid system combining thermal conventional generating units and renewable energy units including: PV system, wind-power plant and battery storage. Solving unit commitment and economic dispatch problem that is proposed in this methodology provide benefits in minimizing operation cost as least possible.The optimal scheduling problem of this hybrid renewable generating system is formulated using Mix-Integer Linear Programming (MILP) Model. II.

METODHOLOGY

A. Problem Formulation The scheduling objective is to minimize total generation cost including fuel cost and startup cost of thermal units within scheduling period and satisfy all operating constraints. The objective function of unit commitment problem can be formulated as

Figure 1. Piecewice linear of quadratic function J

f i ( Pgit )

f i ( Pgi ,m in )u it

F ji

t ji

j 1

T

N

t

f i ( Pg i )u it

min

SU it (1 u it 1 )u it

\

f i ( Pg i ,min ) = [a i + b i Pgi, min + c i (Pgi, min ) 2 ]

t 1 i 1

The index i in this paper represent number of thermal generating unit, while index t represent time (hour) stamp during the scheduling period (t = 1, 2, ..., T). Hence, the term fi(Pgit) and the second term SUit represent fuel cost and startup cost of thermal unit i at time t respectively. Meanwhile uit represent working (on/off) status of unit i at time t.

1,

u it

Startup cost, which is normally exponential, is approximated with stair-wise function with two discreet stairs value that represent hot start (HS) and cold start (CS) cost and is formulated as

SU

if unit is on

0, if unit is off

Fuel cost of thermal unit usually can be expressed as a quadratic function t

t

t

f i ( Pg i ) = [a i + b i Pg i + c i (Pg i ) 2 ]

for Tofft ,i forTofft ,i

HS i CS i

t i

Tcold ,i TMD,i Tcold ,i TMD,i

where Toff,i is continuously off time of unit i, Tcold,i is cold start time of unit i and TMD is minimum down time of unit i. B. Operation Constraints Solution of generation scheduling problem is subject to operational constraints 1)

which constant ai, bi and ci is quadratic curve coefficient of fuel cost function of thermal unit i related to its power generation (Pgi). This quadratic fuel cost piece-wise linear function represent linear gradient of segment while δji represent segment.

function can be linearized by as shown in Figure 1. Fji unit i quadratic function on j the power generated on each

Total generation of unit is summation of minimum power and each segment power and unit fuel cost is summation of fuel cost at minimum power and power on each segment multiplies by its gradient. These relations are then used J

Pgit

Pgi ,m inu it

t ji

Power balance equation Total generating power from thermal unit, renewable energy unit and battery must equal to load demand

Pg

t

Pg

t

PS

t

PW

t

PBd

t

PBc

t

PD

t

0

N

Pgit u it i 1

Which Pg is total thermal unit generation, PD is load demand, PS and PW is solar PV and wind power, then P Bc and PBd is battery charging and discharging power. 2) Renewable Energy and Battery Penetration Limit Penetration of renewable and battery units to the system are limited to maximum penetration level

j 1

Pst

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

PWt

PBdt

Ppenetration,max

105

ISSN: 2088-6578

3)

Yogyakarta, 12 July 2012

Spinning reserve requirements Spinning reserve constraint is represented as follow

CITEE 2012

based on solar irradiation and temperature. The maximum available wind power (PW,max) is also asummed can be predicted based on wind velocity data.

N

Pgi ,m axu it

PDt ,net

PSRt

6)

i 1

PDt ,net

PDt

PSRt

( PSt

PWt

PBdt

PBct )

R PDt

Available maximum power from active dispatch-able unit should greater than addition of net demand (PD,net) of thermal unit and spinning reserve requirement (PSR). In this study, renewable energy and battery unit are treated as negative load, so their generation power will subtract thermal unit demand as shown in equation (11). Reserve power requirement is determined as percentage of estimated demand. It is assumed that load is varied with variability R% from load estimation. 4)

Thermal unit constraints Real power generation of thermal units must be between their minimum power (Pgi,min) and maximum power (Pgi,max) limit due to technical condition as formulated in equation (10)

Pgi ,m in

Pgi

Pgi ,m ax

The thermal generating units also cannot be turned off or turned on immediately. Once a unit is committed there is a Minimum Up Time (MUT) before the unit can be shutdown. Otherwise, from off state condition the unit can be turned on only after reach its Minimum Down Time (MDT). These constraints can be formulated as follow

Toff ,i

MDT i

Ton ,i

MUT i

where Toff,i represents continuously off time of unit i and Ton,i is continuously operating time of unit i 5)

Renewable energy unit constrains Power from solar PV system and wind power plant are depend on weather condition and they are less than maximum potential power available.

PS

Ps , m ax

PW

PW , m ax

PS,max is maximum available power generation from PV array. It is assumed that the value can be well predicted

106

Battery constraints Charging condition of battery is stated in state of charge (SOC). The battery energy storage level is limited between minimal (SOCmin) and maximum (SOCmin) SOC value (Eq. 18) depend on its capacity and deep of discharge (DOD) permitted.

SOC t

SOC m in

SOC m ax

PBct

PBc,max X t

PBdt

PBd ,max Y t

PBct

B

PBdt

SOC t

Xt

SOC t

Yt

1

1,

1

SOC m ax

SOC m in

X ,Y

SOC 0

SOC 0

SOC T

SOC T

SOC t

SOC t

1

PBct

0,1

B

PBdt

In Equation 19 and 20 Charging power (PBc) and discharging power (PBd) are limited below it’s maximum charging (PBc,max) and maximum discharging (PBd,max) rate to ensure battery lifetime as design. Charging or discharging power also should not make SOC level raise exceed maximum or drop below minimum value as formulated in equation 21 and 22. Then, equation 23 show that battery cannot simultaneously charge and discharge at the same time. Intial SOC level (SOC0) and SOC at the end scheduling period (SOCT) are predetermined by dispatcher (see equation 24 and 25) Usually dispatcher hopes that SOC level at the end period will be same with initial SOC level. Finally, equation 26 represent energy balance in battery during charging or discharging cycle. III.

SIMULATION AND RESULTS

The scheduling objective and constraints as formulated above were implemented on hybrid micro-grid system [4] consist of 10 thermal units, 4x360kWp PV system, 4x140 kW wind-park, and 2500 kWh battery bank. Characteristic of thermal conventional units are shown in Table 1. Meanwhile table 2 show demand, PV and wind power prediction used in this simulation

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

TABLE I. Unit number

ISSN: 2088-6578

THERMAL UNITS CHARACTERSITIC

Pmin

Pmax

a

b

C

1 2 3 4 5 6 7 8 9 10

kW 100 100 100 100 50 100 100 50 50 50

Kw 600 600 400 400 300 300 200 200 100 100

$/h 5 5 20 20 30 30 40 40 55 55

$/kWh 4 6 8 10 10 12 14 16 15 17

$/(kW)2h 0,001 0,002 0,0025 0,0025 0,002 0,002 0,0015 0,0015 0,0012 0,0012

Unit number 1 2 3 4 5 6 7 8 9 10

MUT H 5 5 3 3 2 2 2 2 1 1

MDT h 5 5 3 3 2 2 2 2 1 1

HS $ 550 500 450 460 800 750 720 700 560 570

CS $ 1100 1000 900 920 1600 1500 1440 1400 1120 1140

Tcold $ 3 3 2 2 1 1 1 0 0 0

Figure 2. Scheduling result with thermal unit only`

Integrating renewable energy and battery unit will reduce thermal unit contribution (as shown in table 3 and figure 3) and more far reduce operational cost since solar and wind power cost is neglected. Appliying battery make generation cost decrease more. Battery can store electric energy at low load condition or when renewable energy production is high. Then, this storage energy will discharge when necessary to minimize thermal unit operation and it’s dispatch power.

Battery have storage capacity of 2500 kWh with charging and discharging rates is limited to 500 kWh a hour, charging efficiency is assumed 95% and DOD is 60%. It is also assumed that SOC level at the end of Scheduling period will be same with initial SOC level. In this work, initial SOC level is 1250 kWh. Finally, penetration of renewable unit and battery is limited to 1000 kW.

TABLE III.

SCHEDULING RENEWABLE AND BATTERY UNIT

RESULTS

INTEGRATING

Ps

Pw

PBd

PBc

t

G1

G2

G3

G4

(kW)

(kW)

(kW)

(kW)

1

600

400

100

0

0

44

0

44

2

600

433

100

0

0

76

0

9

Figure 2 show scheduling result when only thermal unit was operated. Active units except should meet the demand need, also must provide sufficient reserve to cover load variability. It is assumed that load is varied with load variability of 10%. Scheduling priority when only thermal unit is operated is influenced by unit fuel cost coefficients and startup cost.

3

600

568

100

0

0

132

0

0

4

600

568

100

0

0

96

236

0

5

600

600

100

0

136

200

64

0

6

600

548

100

0

308

344

0

0

7

600

422

100

0

368

516

0

6

8

600

433

100

0

672

336

0

41

9

600

600

100

0

916

236

0

152

TABLE II.

10

600

600

200

100

980

252

0

232

Pw (kW)

11

600

600

300

100

1176

144

0

320

12

600

600

400

100

1224

104

0

328

52 22 2,5 2 2 1,5 0 1,7 2,5 4,4 4,4 17,2

13

600

600

350

100

1200

52

0

252

14

600

600

300

100

1224

24

0

248

15

600

600

200

100

980

4

16

0

16

600

600

169

0

856

4

71

0

17

600

600

100

0

368

0

332

0

18

600

600

100

0

308

4

238

0

19

600

600

100

0

220

0

180

0

20

600

600

100

0

100

0

200

0

21

600

600

100

0

60

4

136

0

22

600

600

100

0

0

4

96

0

23

600

600

100

0

0

4

0

4

24

600

500

100

0

0

16

0

16

T 1 2 3 4 5 6 7 8 9 10 11 12

PD (kW) 1100 1200 1400 1600 1700 1900 2000 2100 2300 2500 2600 2700

DEMAND AND PV AND WIND POWER PREDICTION PS (kW) 0 0 0 0 135 306 367 673 917 980 1175 1224

Pw (kW) 45 77 130 97 201 342 516 335 238 252 142 102

t 13 14 15 16 17 18 19 20 21 22 23 24

PD (kW) 2650 2600 2500 2300 2000 1850 1700 1600 1500 1400 1300 1200

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

PS (kW) 1200 1224 979 857 367 306 220 98 61 0 0 0

Thermal unit power (kW)

WITH

107

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

unit, maximum available solar and wind power, and battery unit constraints. Objective function and constrains are formulated with Mix Integer Linear Programming model and solved with Tomlab CPLEX software. Simulation results show that MILP can solve the scheduling problem with better solution. Comparing with other methods (LR, GA, and LRGA) using identical test system and scenario, scheduling with this method results on lower generation cost by 0.9%. Generation cost of thermal unit is minimized while satisfying all prevailing constraints.

REFERENCES [1] Figure 3. Scheduling results with integrating renewable and battery unit TABLE IV.

SUMMARY OF GENERATING COST RESULT COMPARISON

Method

Thermal only Cost % ($/day) difference

Thermal-RE-battety Cost % ($/day) difference

MILP

351344

201184

LR

378890

7,3%

202940

0,9%

GA

379380

7,4%

202870

0,8%

LRGA

375840

6,5%

202940

0,9%

Finally table 4 show comparison of this method with other method with identical system and scheduling scenario [6]. In this work LR, GA and LRGA were used to solve scheduling problem with reserve power is determined based on load variability of 10%. It is shown that MILP give better result. IV.

CONCLUSION

This paper presents a method for generation scheduling of an autonomous system integrating renewable energy generating unit. The proposed method can be used for optimal short-term generating scheduling of a system including: thermal conventional units, renewable technology such as PV module and wind-park, and battery unit. The scheduling objectives is to minimize fuel cost and startup cost of thermal unit and meet all the system and units constrains. The system constraints considered in this study are: power balance equation, renewable energy penetration, and spinning reserve requirement. While, unit constrains include: thermal unit generation limits, minimum up and down time of thermal

108

Alkhalil Firas, Degobert Philippe, Colas Frédéric and Robyns Benoit, 2009. Fuel consumption optimization of a multimachines microgrid by secant method combined with IPPD table. International Conference on Renewable Energies and Power Quality (ICREPQ’09) [2] Dieu Vo Ngoc, Ongsakul Weerakorn, 2010. Economic Dispatch With Emission and Transmission Constraints by Augmented Lagrange Hopfield Network. Global Journal on Technology and Optimization, Volume 1 2010. [3] Bakirtzis A.G. and Dokopulos P.S., 1988. “Short Term Generation Scheduling in a Small Autonomous System with Unconventional Energy Sources,”IEEE Trans. on Power Systems, Vol. 3, No. 3, 1988, pp.1230-1236. [4] B. Lu and M. Shahidehpour, “Short-Term Scheduling of Battery in a Grid-Connected PV/Battery System,”IEEE Trans. on Power Systems, Vol. 20, No. 2, May 2005, pp. 1053-1061. [5] Hong Ying-Yi, Chiu Ching-Sheng, Li Chang-Ting, 2007. KW Scheduling in an Autonomous System. Power Tech 2007 IEEE Lausanne Conference Paper 1-5 July 2007, page(s): 1730 -1735 [6] Logenthiran, T. and Srinivasan Dipti. 2009. Short Term Generation Scheduling of a Microgrid. IEEE TENCON 2009. [7] Ahn Seon-Ju and Moon Seung-Il, 2009. Economic Scheduling of Distributed Generators in a Microgrid Considering Various Constraints. [8] Miguel Carrion and Jose M. Arroyo, “A Computationally Efficient Mixed-Integer Linear Formulation for the Thermal Unit Commitment Problem”, IEEE Trans. on Power System, vol. 21, no.3, pp 1371-1378, 2006. [9] Delarue Erik, Bekaert David, Belmans Ronnie, and D’haeseleer William, 2007. Development of a Comprehensive Electricity Generation Simulation Model Using a Mixed Integer Programming Approach. International Journal of Electrical, Computer, and Systems Engineering 1;2. 2007 [10] Khodr, H.M.et all. 2010. Optimal methodology for renewable energy dispatching in islanded operation. Transmission and Distribution Conference and Exposition, 2010 IEEE PES, page(s): 1 – 7.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Design and Implementation Human Machine Interface for Remote Monitoring of SCADA Connected Low-Cost Microhydro Power Plant Alief Rakhman Mukhtar, Suharyanto, and Eka Firmansyah Gadjah Mada University/EE & IT Department Jl. Grafika No. 2 Kampus UGM, 55281, Yogyakarta, Indonesia

[email protected] Abstract—Microhydro power plant is a green technology which can be an alternative solution in overcoming the world energy crisis. This technology is a non-depleting and non-polluting energy source that has been proven be able to provide reliable power in the past and will still one of the most promising renewable energy source in the future. Some microhydro systems those have been implemented in Indonesia were equipped with Supervisory Control and Data Acquisition (SCADA). This additional feature gives benefit to the power plant supervision process. With SCADA, supervision of the Microhydro power system can be done continuously. Therefore, any problems those might be appear can be detected as soon as possible and then handled appropriately while also maintaining performance parameters of the power plant. The monitoring and controlling implementation is not away from interactive design of Human Machine Interface. It helps the operator in controlling the plant decision and making correct. In this way, the paper presents an interface for SCADA to monitor and control parameters in the microhydro power plant and increasing quickens decision making by operator as well as minimizing human error.

controlling from the process of the plant. Communication system is used to send the data from supervisory system to the receiver, therefore the SCADA could be monitored. HMI is a software interface as a connector between the operator and the plant controlled.

Keywords-component; Human Machine SCADA; Microhydro Power Plant, LabVIEW

A. Human Machine Interface HMI is an interface software like GUI based on computer works as a connector between the operator with the machine or devices which control also work as supervisory [6] some references about HMI [6], [7] explained the general function of HMI. The function is described in the following lines.

I.

Interface;

INTRODUCTION

Microhydro power plant is one of the energy alternative resources in Indonesia. This energy is generally used, especially in rural area with abundance spring. Previously, the traditional system of the energy that helped the field operator work was quite hard since the operator has always been in the field to monitoring the condition. Recently, the new system is done by integrating with Supervisory Control and Data Acquisition (SCADA) to help the operator supervision since the monitoring has been done in the field, the integration system allows the operator to intervene the long distance control. Thus, this integration system slowly changes the traditional system. Many research and development in SCADA system has been conducted [1], [2], [3] also in some research which integrated SCADA system to microhydro power plant [4], [5]. This term show that the SCADA system has still developed widely in controlling and monitoring. The SCADA system has three main parts, they are: RTU (Remote Terminal Unit), communication system, and HMI (Human Machine Interface). RTU is the main

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

However, from previous researched there have not been any discussion about one of SCADA system focus on one of main parts. This research aim to develop Human Machine Interface (HMI) for the SCADA system using Graphical User Interface (GUI) technique with LabVIEW software form National Instrument specified on microhydro power plant. By inserting the microhydro power system required, to control and monitor, as well to consider the interface design. The result of this research is expected to work easier of the operator as well for faster decision maker, therefore could overcame the problem in the field. II.

LITERATURE REVIEW

Setting: determining the output condition (actuator) based on the input gain from the sensor reader. Monitoring: supervising the plant condition in real time. Plant conditions visualization is base on the result of output and input reader from the process on running plant. Take action: running and stopping plant process. Storing and logging data : storing and logging in a data collection. Generally could be data measured, a system represented by valves as actuator, alarm, day taken and data storage. Alarm summary and history: storing the alarm condition, so the system error could be detected.

109

ISSN: 2088-6578

Yogyakarta, 12 July 2012

Trending: is a term for the graphic visual from a process, such as process visualizing in system temperature increasing and decreasing . trending could be online seen real time or historically. HMI and RTU are two ways communication. RTU controls the process of the system or the plant by supervising the input, controlling the output, and receiving the data request from the HMI to be responded by resending the required data to the RTU. Beside sending the data required to RTU, HMI also configures the plant input and output which will be overviewed, arranging the storing and logging data scheduled, executing, controlling and stopping the plant process if it is required. B. Evaluation Process of HMI The evaluation of the interface for microhydro power plant and provide a set of recommendations about graphical screen improvement is important. The field operator has the roles to control and monitor. The operator evaluates the interface of microhydro system. The result and propose a global evaluation index and interface improvement to the designer. The calculation used to evaluated the interface to determine is it valid or not by using validity. By comparing the proper value (r) and instrument by determine crisis value. Generally, r crisis is used to define the validity limit of an instrument which its value determined as = 0,3. r

Where (r) is proper value, (x) is respondent answer feedback, (n) is respondent total, (N) is item total. SYSTEM ARCHITHECTURE

A. Overal System The SCADA control and monitor system for microhydro power plant is depicted in figure 1. Server

RTU Voltage

Generator

Signal Conditioner

electric load connected dynamically changed. Another microcontrollers functioned to distributed the result of reading such required parameters from the sensors put from the reading result sent trough Wireless Sensor Network (WSN) to server. The result of the reading is shown to an interface where the design is specifically for microhydro power plant. This design is develop by LabVIEW software and management base data using My-SQL database. B. Proposed HMI for Microhydro Power Plant The device develop is HMI software which is required to support the controlling and monitoring on microhydro power. Therefore the HMI software develop has the function as follows: Setting: preparing devices for the process of communication system process such us setting port used, and time active setting of the power. Monitoring and controlling: monitoring parameters on the power like; water level, water flow, preasure of pim, temperature of generator, relay status, voltage, current, frequency and output energy. main while controlling parameter like on/off main circuit breaker and control valve. Alarm indicator: such as over current, under or over voltage, under or over frequency , and over temp. Trending: the graphic shown is a correlated graphic to power and time supply

X N .n

III.

CITEE 2012

Monitoring Power Analyzer dsPIC30F2020

Controller dsPIC30F2020

LCD Grapic

Switch Board 12-channel

Load simulator Incandescent Lamp

wireless

LabVIEW

ODBC Driver My SQL Database

Data storage: data stored in database is time, active time on the plant, output energy, voltage, current, frequency, temperature, and generator speed. Additional figure like login is user name and password for on duty operator, after what on the figure is like live animation in a movement part. Those function represent general function from HMI software such us setting monitoring, controlling, data storing, alarm history and summary and trending. IV.

RESULT AND DISCUSSION

A. HMI for Microhydro Power Plant Figure 2 shows the control on main menu which consist button which connecting user with the other forms like login, configuration, acquire data, open data file, help, and exit button.

Ballast Load

Figure 1. Block diagram of real time control and monitor of a microhydro power plant.

The figure above is the SCADA system on microhydro power plant wholly. Supporting device used is generator one phase with the specification 220 V, 50 Hz, 3 kW, ballast load connected to 12 relays. Monitor and controller unit system on RTU develop by two microcontrollers. The function is maintaining voltage on generator in order stable condition even the

110

Figure 2. Main menu visual interface of microhydro power plant.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

To start monitoring process, enter user name and password in login button to begin monitoring process, Once the process has completed, the indicator on the main menu will turn on and other forms are accessible. The next process takes place in configuration form. It is the form with dual mode, which functions to simulate and to connect to RTU as illustrated in figure 3 below.

Figure 5. Process menu on acquire data form

Figure 3. Simulation mode in configuration form

The simulation provides the operator with the course of the process in a plant as represented by a simulation. The option setting of the simulation mode is used to activate the simulation mode. And then, the power parameter is used to find out the power output the generator can produce by entering the existing parameter 2. If we do not know the value of the parameter 2, operator can directly enter the resulting power value of the generator and does need to enter other parameters by activating the power input in the option setting.

The first form is process which functions visualization the process of electric on the microhydro power plant in mimic diagram form and the icon is based on device in the field and the changing of the color shown the condition such as minimum or maximum limit on/off condition, visualization the parameters information which is monitored and controlled on the process of microhydro electric power plant in analog and digital forms, and the status of process, connected to RTU or used for simulation mode. Then the next form is energy meter as illustrated in figure 6 below.

At the time when the simulation mode is activated, the RTU connection mode does not function as illustrated in the figure 4.

Figure 6. Energy meter menu on acquire data form

Figure 4. Connected RTU mode in configuration form

The RTU connection mode connects the communication between interface and the existing equipments. There are some adjustments in the setting option such as port number used, bound rate, data bits, stop bits and parity. Other settings of the configuration are for reporting in excel format and data storing in MySQL database. The data collecting process takes place in acquire data form in which complete figure of the monitoring of the microhydro electric plant will be displayed. The form is illustrated in the figure 5 below.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

This visual interface from is used to visualization the electric energy output condition on the microhydro power plant, informing the correlation graphic between the output energy the generator and the active time off the plant, informing either the system is connected to dummy load and real load which is captured in relay on/off and the diferent color shown if it is connected or not, and indicators which is indicated either the relays and dummy load active or not indicated by the change of the color. During the monitoring and controlling process, the data is stored in My-SQL database and the reading results are displayed in the report form. The next process is preparing report as illustrated in figure 7 below.

111

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

supporting and as the measurement either HMI is proper to be operated. The result of the evaluation shown that the increase to some of the assessment aspects after it is fit with the consumer. REFERENCES [1] Vyacheslav Kharchenko, Anatoliy Gorbenko, Eugene Babeshko, "Applying F(I)MEA-technique for SCADA-based Industrial Control Systems Dependability Assessment and Ensuring," in Third International Conference on Dependability of Computer Systems DepCoS-RELCOMEX, 2008. Figure 7. Report on view data file form

The output of the My-SQL database is displayed in the form view data file. The displayed data is the one day data as indicated by the date of the process and displayed in excel format. Subsequently, other feature of the visual interface is help form. It functions as the indicator of the use and the operation of the interface to enable the operator easy operation. B. Evaluation result Respondent in here is the field operator, because they are user from this interface. From three thin users give respond about this interface. The responds are: the functions expected and required to monitor and to control the microhydro power is visualized, the accuracy of measurement and the respond from the interface to the speed of device, visualization and design on interface is simple, and understandable, The navigation system on this interface is simple, no difficulties found during its used, the using of color icon used is properly, visualized graph helped in supervising the change of power load to the microhydro power plant, and the report form as daily report and evaluation is completed based on the required parameters. The result of the evaluation could be seen in table 1. TABLE I. No. 1 2 3 4 5

Questioner Functions and complement Design and visualization Navigation system Extra features User interest

[2] Alexandru Bitoleanu, Mihaela Popescu, Mircea Dobriceanu, "SCADA System for Monitoring Water Supply Networks," in WSEAS TRANSACTIONS on SYSTEMS, 2008. [3] Truong Dinh Chau and Phan Duy Anh, "Component-based Design for SCADA Architecture," International Journal of Control, Automation, and Systems, 2010. [4] D. Chiorean, I. Rogoz, C. Lehene, I. Stoian, and E. Stancel M. Ordean, "SCADA System-Suport for the Maintenance Management of Hydro Power Plants," , 2006. [5] L. Micleam, H Valea, I. Stoain and G. Toderean Sz. Enyedi, "Management of a Multi-Agent Society used for Monitoring and Diagnosis in a Hydroelectric Power Plant Chain," , 2006. [6] GlobalSpec Industrial Controls Software. (2010) "http://www.globalspec.com/LearnMore/Industrial_Engineering _Software/Industrial_Control_Software/Human_Machine_Interf ace_Software_HMI" [7] Juni Ardi Irawan. (2010) "http://juare97.wordpress.com"

HMI

or

MMI.

[8] M. Metzger and G Polakow, "A Study on Appropriate Plant Diagram Synthesis for User-Suited HMI in Operating Control," in IFIP International Federation for Information Processing, 2008. [9] Wilbert O. Galitz, The Essential Guide to User Interface Design An Introduction to GUI Design Principles and Techniques. Indiana, Canada: Wiley Publishing, Inc, 2007.

EVALUATION RESULT r

r critis

result

0,65

0,3

valid

0,66

0,3

valid

0,71 0,72 0,72

0,3 0,3 0,3

valid valid valid

The result of the evaluation shown that HMI is fixed and could be operated as the monitoring and controlling to SCADA system in microhydro power plant. V.

CONCLUSIONS

HMI software for SCADA in electric microhydro power plant has been developed, even though still lack of complexity expected, generally could be functioned as the standard of HMI software, like setting, monitoring and controlling, alarm indicator, reporting, trending, and data storing. By noting the regulation of style on the design of interface, the significance of HMI could be increased for its operation. The evaluation is the factor of

112

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Equivalent Salt Deposit Density and Flashover Voltage of Epoxy Polysiloxane Polymeric Insulator Material with Rice Husk Ash Filler in Tropical Climate Area Arif Jaya#1, Tumiran*2, Hamzah Berahim*3,Rochmadi#4 #1

Postgraduate Student in Department of Electrical Engineering and Information Technology, Gadjah Mada University #1 Department of Electrical Engineering, Moslem University of Indonesia, Makassar *2, *3 Department of Electrical Engineering and Information Technology, #4 Department of Chemical Engineering, Gadjah Mada University Address: Jln Grafika 2, Fakultas Teknik UGM, 55281Yogyakarta, Indonesia 1

[email protected], [email protected]

Abstract—The effects of natural aging upon the Equivalent Salt Deposit Density (ESDD) and Flashover Voltage (Vfo) of the polymeric insulator material made of epoxy resins, Polydimethylsiloxane (PDMS), and Rice Husk Ash (RHA) compound are presented in this paper. The samples of epoxy resin insulating material are based on Diglycidyl Ether of Bisphenol A (DGEBA), Metaphenylene Diamine (MPDA) as curing agent, polysiloxane, and 325 mesh RHA as filler. This research aims to observe the equivalent salt deposit density (ESDD) and Flashover Voltage (Vfo) on the surface insulating materials with variation RHA content that has undergone a natural aging. Experiment was carried out through the following procedure. The samples were installed on outdoor for natural aging test, located at electrical engineering and information technology of Gadjah Mada University in Indonesia, then ESDD and Vfo on the sample surface was measured every 2 weeks. The results show that ESDD and Vfo of insulator material are fluctuation during 52 weeks of experiment period. The

higher the content of RHA with PDMS as filler, the lower ESDD and the flashover voltage increases during the rainy season. Keywords- epoxy-polysiloxane, RHA, ESDD, flashover voltage

I. INTRODUCTION Epoxy resin is a very good example of polymeric material for high voltage insulator due to it's high value of dielectric strength (25-45 MV/m) [1], light weight, easy maintenance and can be easily adjusted by using additive material. The application of polymeric insulator material of epoxy resin bisfenol A with silica sand filler for outdoor insulator in normal air condition shows a promising performance, but the performance will be decreased after prolong instalation due to very high temperature, humidity and UV radiation which caused surface cracking[2][3]. The next generation is cast epoxy resin cycloaliphatic. It shows good performance under normal atmospheric condition service, but the performance will be decreased under polluted atmospheric condition[4][5]. The weakness of cast epoxy resin cycloaliphatic is the impurity of its filler, Alumina Trihydrate (ATH), which may consist of Natrium Oxide (Na2O) and Kalium Oxide (K2O). Those two alkali oxide along with water will form Alkali Hydroxil (NaOH and KOH), a strong electrolyte, which

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

can change dielectric property of cast epoxy resin cycloaliphatic [6]. An aging test to analyze Silicon Rubber (SiR) type polymeric polysiloxane insulator and Ethylene Propylen Diene Monomer (EPDM) insulator performance was held outside Anneberg Bulk Station, west coast Sweden by observing the relationship between surface condition and performance of polymeric insulation. Six SiR and three propylene diene monomer (EPDM) were analyzed by using Electron Spectroscopy for Chemical Analysis (ESCA), Attenuated Total Reflection Fourier Transform Infrared (ATR-FTIR) and Scanning Electron Microscopy (SEM). The results concluded the relationship between insulator material surface condition with hidropobicity property, leakage current and voltage resistance. High magnitude of leakage current and intensity of arc occured on EPDM insulator, meanwhile very low leakage current occured on SiR. Polysiloxane (i.e. SiR) is able to create hydrophobic property on insulator surface in which prevents water layer formation and leakage current occurrence on its surfaces. Low Molecule Weight (LMW) component is diffused from the bulk of SiR to the outer surface layer. LMW will then film the pollutant and maintain the hydrophobic property (i.e. prevent water layer formation on insulator surface)[8]. The application of ATH in Silicon Rubber insulator gives better erosion and tracking resistance. Surface degradation of elastomer silicon with lower filler (025%) will make tracking phenomenon occur faster, while higher filler (50% - 70%) will delay surface degradation due to dry band arcing phenomenon and reduce water absorbing [9]. The application of polysiloxane on the mixture of epoxy and RHA as the filler affects the hydrophobic property. The higher filler concentration the bigger contact angle of water droplets[10]. Insulation contamination become a troublesome in electrical power system operation. Wet atmospheric condition will form thin water layer on the insulator surface, and the present of the contaminant inside the layer will create leakage current on the insulator surface [11]. Contaminant layer accumulation at the surface of the insulation will increase the leakage current which

113

ISSN: 2088-6578

Yogyakarta, 12 July 2012

cause flashover voltage resulted the damage of insulator [12]. With the consideration that electrical property of polymeric insulator depend on the type of material and the environment condition, this paper will discuss the experiment resulted the equivalent salt deposit density and flashover volatge of epoxy-polysiloxane polymeric material with rice husk ash filler under natural tropical climate effect in Yogyakarta. II.

EPOXY-POLISILOXANE POLYMERIC INSULATOR

A. Epoxy Resin Polymeric Insulator Material Epoxy is a thermosetting chemical compound. It consists of oxygen and carbon atomic bond which is generated by chemical reaction between epichlorohydrin and bisphenol A. A complex structure of epoxy resin has molecule bond as shown in Fig 1.

CITEE 2012

Silicone rubber is one of polysiloxane which has a side group of CH3. The difference of chemical elements found on each bond gives different properties on each polymer. The applications of Polysiloxane are: 1) statue and frame molding material, 2) filler liquid and insulation coating for electrical and electronic equipment, 3) Filler compound for polyester, 4) electricity insulator raw material, 5) base material of human body treatment product which is chemical and UV radiation proof. Polysiloxane can be formed trough addition polymerization and condensation polymerization. Polysiloxane in addition polymerization formed at Siloxane bonds (Si-O) which have double bond as vinyl (--C=C--) at the edge while in the condensation polymerization it formed at siloxane bond which have hydroxyl group at the edge. Addition polymerization reaction of Polysiloxane using chloroplatinic acid catalyst to create polydimethysiloxane (PDMS) can be shown in Fig. 4.

Figure 1. Epoxy resin structure

Epoxy resin will harden when it is combined with hardener, catalyst and filler. It’s used widely as insulator, house hold, machinery component, automotif, tank and pipe, aeroplane body part, bridge construction, etc. Epoxy resin formation reaction with MPDA hardener is shown in Fig 2.

Figure 4. Addition polymerization reaction process resulting Polydimethylsiloxane (PDMS)

Networking of silicon rubber chains do not have carbon atoms in the backbone chain but is only found the side group. The composition of these structures shows semi-organic structure with high-energy bond in S-O bond and will provide a very high thermal stability as well. Si-O bond energy is higher by about 25% of the bond energy which is owned by C-C bond as ethylene polymer. Comparison of the major bond energy which is owned by the various polymers are shown in Table I . TABLE I THE MAIN BINDING ENERGY OF POLYMER Figure 2. Epoxy resin formation reaction with MPDA hardener

B. Polysiloxane Polymeric Insulation Material Poyisiloxane is a polymeric with silicon and oxygen atom as its main chain. Polysiloxane main chain is more flexible than vinyl polymeric and polyolefin. The chemical structure of polysiloxane is shown in Fig.3.

Figure 3. Chemical structure of Polysiloxane

114

Atomic Bond

Bond Energy ( kJ/mol)

C-H C-O C-N C-C H-H H-O H-N Si-O Si-H Si-C Si-Si O-O O-H

413 360 308 348 436 366 391 445 318 318 222 195 366

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

Thermal stability of Si-O bond is very good, but relatively high ionic property makes it easily be broken by high acid or alkali concentration [13]. Strong bond of Si-O gives very high resistance property of silicon rubber against the possibility of being damaged by environment condition or corona phenomenon and also gives similar property as glass or quartz which won’t left a conductive layer when it burn, for example burning caused by arching. Instead of the sufficient thermal property, silicon rubber has stable elasticity within temperature range of -50oC up to +230 oC (long thermal stability) which is the most important property criteria of electrical insulation [14]. C. Rice Husk Ash Rice husk usually become main firing material (or fuel) in brick industry or plant seeding media, while its ash usually become rub ashes or wasted. Rice husk burning will produce RHA with very high SiO content ( >80% from total weight). When continuous burning at 500 to 700 oC for 1-2 hours applied, RHA will contain large amount of amorf silica which can increase mechanical property of cement-RHA mixture at certain dosage [15] [16]. III.

ESDD AND FLASHOVER VOLTAGE ON INSULATOR SURFACE

A. Equivalent Salt Deposit Density (ESDD) Outdoor insulator surface will be contaminated by pollutant which is carried by the wind. Pollutant material basically divided as 2 major components: conductive component and inert component. Conductive component mostly consist of ionic salt such as sodium chloride (NaCl), sodium sulfate (Na2SO4), magnesium chloride (MgCl2) etc. Inert component is a solid material or cation which can’t be unraveled to be ion in liquid (for example: kaolin, silica, bentonite, cement, etc). This non soluble component can create mechanical bond with conductive particle. Kaolin, cement dust and silica will make the insulator surface become hydrophilic while oil create hydrophobic surface. Contamination level which is caused by a certain salt can be measured by using ESDD method expressed in mg/cm2. The ESDD procedure include : measuring conductivity of water and cotton mixture which are used to clean insulator surface (before and after cleaning) at certain room temperature then converse the value into standard 20oC conductivity by using correction factor shown in Table II [17]. TABEL II.

B FAKTOR

0

θ( )

b

5 10 15 20

0.03156 0.02817 0.02277 0.01905

Next step is calculating the conductivity by using folowing equation (1)

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

ISSN: 2088-6578

κ 20 = κθ [1 − b(θ − 20)]

(1)

Where : κ20= standar conductivity in 20 oC (μS/cm); κθ = conductivity at room temperature θoC (μS/cm); θ = temperature of liquid (oC); b = correction factor in temperature θ With the value of conductivity at 20oC, now calculate salt concentration in % using following equation (2) D=

(5,7 x10 −4 xκ 20 )1, 03 10

(2)

Where : D = Salt concentration (%); κ20 = Conductivity at 20oC(μS/cm). The ESDD can be calculated by using following equation (3) ( D − D1 ) (3) ESDD = 10 xVap x 2 S

Where : ESDD = Equivalent Salt Deposit Density (mg/cm2) ; Vap = washing water volume (ml); D1= Equivalent salt concentration of water and cotton before washing (%); D2 = Equivalent salt concentration of water and cotton before washing (%); S = washed Insulator surface area (cm2). If the conductivity κ20 value only 0,004-0,4 S/m, the following equation (4,5). Sa= (5,7 κ20)1,03

(4)

ESDD = (Sa V)/A

(5)

Where Sa = salt content at insulator surface in kg/m2, V = washing water volume in cm3, and A = washed insulator surface area. B. Flashover Voltage Flashover voltage is an external disturbance on insulator surface as arc which can happen in solid surface or gas. Flashover voltage value is smaller than puncture voltage on an insulator. Several factor affect flashover voltage, such as : surface resistance of a material, surface condition and electric field shape between electrodes and insulator. Flashover voltage on polluted condition follows this following sequences [18][19] : 1) Insulator with dry band and conductive pollution layer are represented as series arc path along x distance (air gap) with resistance of polluted layer at every length unit R’=R’(I), as Fig. 5.

Figure 5. Model of insulator with dry band and contaminated layer to determine critical value of leakage current I and voltage U

115

ISSN: 2088-6578

Yogyakarta, 12 July 2012

2) The surface of insulator between two electrodes is an electric arc which is series with conductive polluted layer. 3) The arc can extend or shut off. If the electric field strength reaches certain value at a certain point of dry band, the electric discharge which can initiate flashover happen. 4) Flashover voltage occurs when the arc covers entire dry band and further the entire insulator surface between two electrodes. The relationship between surface conductivity and flashover voltage can be formulated as : −n

1

Vfo = N ( n +1) κ ( n +1) L −n

1

Vfo = N ( n +1) κ ( n +1) L

E=N

1 (n +1)

κ

−n (n +1)

(6) (7) (8)

where : Vfo= flashover voltage (kV); E = electric field (kV/cm); κ = surface conductivity due to pollutant (μS/cm); L = length of arc when the flashover happen (cm). During the flashover test, the applied voltage is increased step by step until the critical voltage for every insulator sample reached. The measurement value then be calibrated by using air condition standard [20] as the following formula: V (9) Vs = B d d=

b B 273 + 20 0,386 b B x = 760 273 + t B 273 + t B

(10)

CITEE 2012

Hanna Instrument No Code HI8633. All of these testing tools are available in High Voltage Laboratory, Electrical Engineering and Information Technology, Gadjah Mada University. B. Testing Procedure The test samples are placed outdoor, outside Electrical Engineering building, Gadjah Mada University. Samples are placed in support bracket with 45° tilt, flipped every week and taken for performance test every 2 weeks. This procedure applies for 52 weeks experiment period. The ESDD procedure will follow these steps: 1) Preparing a beaker, measuring cup, and cleaning cotton; 2) Filling a beaker with 200 ml distilled water; 3) putting the cotton inside the beaker, measuring the temperature and conductivity; 4) Separating the pollutant layers from the samples by rubbing the soaked cotton, then putting the pollutant and the cotton into the beaker, and finally stirring them; 5) Measuring the temperature and conductivity of water and cotton mixture. The conductivity with and without pollutant are conversed into standard conductivity at 20Ԩ by using b factor based on IEC 507 standard (1991), then calculating salt concentration (in %) by using equation (2). The ESDD can be calculated the by using equation (3). Flashover measurement procedure can be explained as following steps: 1) Putting test samples into fog chamber with 70% humidity (after fogging); 2) Applying voltage. Every sample is tested by applying increase voltages with a step of 1.5 kV/sec until flashover occurs; 3) Recording temperature, humidity and air pressure then converting flashover voltage value by using standard value correction factor (as eq. 10 and eq. 11). The measurement tools arrangement is shown in Fig. 6.

Where : Vs = Standard flashover voltage (Volt); VB = measured flashover voltage (Volt); d = relative air density (mmHg/oC); tB = ambient temperature during test (oC); bB = ambient air pressure during test (mmHg). IV.

EXPERIMENTAL PROCEDURE

A. Material and Testing Tool The material used in this test consist of : 1) Epoxy resin with base material DGEBA, hardener MPDA from Eposchon brand, 2) PDMS RTV 585 with 60R catalyst from Rhodorsil brand, 3) Rice husk ash 325 mesh type IR 64 as filler. Tested sample was made in round shape with 70 mm diameter and 5 mm thickness named with code RTVEP1(filler 10%), RTVEP2(filler 20%), RTVEP3(filler 30%), RTVEP4(filler 40%), and RTVEP5(filler 50%). The testing tools consist of : 1) Fog chamber 1,2 x 1,2 x 1,2 m as testing chamber, 2) High voltage transformer with 220/100000 volt, 5 kVA, 50 mA current, 4% impenandce, 50/60 Hz transformer, 3) Thermometer for temperature reading, hygrometer for humidity reading and barometer for air preassure reading, 4) Oscilloscope LeCroy 9354 AL, 500MHz, 5) Conductivity meter,

116

Figure 6. The measurement tools of flashover voltage test

V.

RESULTS AND DISCUSSION

The following sections below show the ESDD and flashover voltage results of insulator materials of epoxypolysiloxane with rice husk ash filler for each composition during 52 weeks experimentation in tropical climate area .

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

A. ESDD of epoxy-polysiloxane polymeric insulator with rice husk ash filler The result of ESDD calculation by using equation (3) of epoxy-polysiloxane polymeric insulator with rice husk ash filler is presented in two dimentional curve shown in Fig. 7.

Figure 7. ESDD value against duration of aging for various filler composition in 52 weeks natural aging.

ESDD value of epoxy-polysiloxane polymeric insulator with rice husk ash filler with various compositions is fluctuating during the aging process. This phenomenon happened due to variation of composition and local climate condition during the test. The highest the filler composition the lower the ESDD value is. The increase of filler material then makes methyl group (CH3) on insulator surface also increases. This makes the insulator material more hydrophobic[10]. When there is a rain, the insulator surface will be wiped up by the water. Pollutant layer on insulator surface will decrease. This hydrophobicity will decrease the ESDD and the leakage current, that way the power losses will be decrease as well [19]. Figure 7 shows ESDD value from initial aging until week 12th which fluctuates and tends to increase (November 2010 to January 2011). It is because of the tropical climate with low rain intensity. From week 14th to 28th (February - May 2011), the ESDD value tends to decrease due to high rain intensity (rainy season). From week 30th to 44th (Jun - Sept 2011), the ESDD tends to increase due to entering summer season. From week 46th to the end of week 52nd (September - November 2011) the value of ESDD decreases due to the beginning of rainy season. The results of measured ESDD for every samples are: RTVEP1 = 0.0016 - 0.0586 mg/cm2 with average of 0.0226 mg/cm2; RTVEP2 = 0.0015 - 0.0572 mg/cm2 with average of 0.0195 mg/cm2; RTVEP3 = 0.00104 - 0.0557 mg/cm2 with average of 0.0166 mg/cm2; RTVEP4 = 0.00104 - 0.0411 mg/cm2 with average of 0.0164 mg/cm2; and RTVEP5 = 0.00104 - 0.0403mg/cm2 with average of 0.0171 mg/cm2

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

ISSN: 2088-6578

B. Flashover voltage of epoxy-polysiloxane polymeric insulator with rice husk ash filler The result of flashover voltage by using equation (9) of epoxy-polysiloxane polymeric insulator with rice husk ash filler for every composition after 52 weeks of natural aging is presented in two dimentional curve shown in Fig. 8.

Figure 8. Standard flashover voltage against duration of aging for various filler composition in 52 weeks natural aging

The standard flashover voltage of epoxy-polysiloxane polymeric insulator with rice husk ash filler with various compositions tends to decrease until week 52nd. The standard flashover voltage is affected by the ESDD value on the insulator surface. The higher ESDD, the lower flashover voltage is. The ability of an insulator to withstand surface voltage is affected by pollutant layer and surface wetting phenomenon[21]. In wet condition, a conductive layer is formed on the surface. The surface conductivity will increase if the surface pollutant layer is thicker. This makes the leakage current increase. An nonuniform distributed conductive layer in epoxypolysiloxane polymeric insulator with rice husk ash filler insulator surface causes a formation of dry band where the leakage current flows[18,19]. The insulator surface between both electrodes is a form of electric arc that series connected with a conductive pollutant layer. The arc can extend or shrink. When the electric field strength reaches certain value in dry band, then the surface electric discharge occurs, initiates a flashover voltage on epoxy-polysiloxane polymeric insulator with rice husk ash filler insulator surface. Figure 8 shows standard flashover voltage from initial aging until week 12th which fluctuates and tends to decrease. From week 14th to 28th, the standard flashover voltage tends to increase. From week 30th to 44th, the standard flashover voltage tends to decrease again. From week 46th to the end of week 52nd ,the standard flashover voltage tends to stable. The results of standard flashover voltages for samples are: RTVEP1 = 34.348 - 40.916 kV with average of 35.867 kV, RTVEP2 = 34.496 – 39.334 kV with average of 35.541 kV, RTVEP3 = 34.685 - 39.091kV with average of 36.017 kV, RTVEP4 = 34.160 - 39.072 kV with average of

117

ISSN: 2088-6578

Yogyakarta, 12 July 2012

35.827 kV, and RTVEP5 = 34.752 - 37.169 kV with average of 35.741 kV. VI. 1. 2.

3. 4.

CONCLUSION

The higher the content of the filler with PDMS, the lower of ESDD and increase of flashover voltage in rainy season. The ESDD and flashover voltage of epoxypolysiloxane polymeric insulator with rice husk ash filler for each filler composition are (in average value) RTVEP1 : ESDD=0.0226 mg/cm2, Vfo = 35.867 kV; RTVEP2 : ESDD=0.0195 mg/cm2 , Vfo = 35.541 kV; RTVEP3 : ESDD=0.0166 mg/cm2 , Vfo = 36.017 kV; RTVEP4 : ESDD=0.0164 mg/cm2, Vfo = 35.827kV; and RTVEP5 : ESDD=0.0171 mg/cm2 , Vfo = 35.741 kV. The experiment suggests that RTVEP3 is feasible for insulation material as its flashover voltage is the highest among other samples. The Flashover is influenced by ESDD, and the ESDD depend on climate. So it can be concluded that this experiment results can be applied in other area with the same climate with Indonesia.

ACKNOWLEDGEMENT The authors would like to say thank you to Mr. Daryadi and Mr. Prasetyohadi as laboratory assistant of High Voltage laboratory, Electrical Engineering and Information Technology, Gadjah Mada University for their assist during this experiment completion.

REFERENCES [1]

G. G. Raju, Ed, Dielectris in Electric Fields, New York: Marcel Dekker Inc, 2003, vol .19. [2] K.J. Saunders,Ed, Organic Polymeric Chemistry, New York: John Wiley & Sons, 1973. [3] N.H. Malik, A.A. Al-Arainy, and M.I. Quresshi, Ed. Electrical Insulation in Power System. New York : Marcel Dekker Inc, 1998, vol.3. [4] E.A. Cherney, “Non ceramic insulator. a simple design that requires careful”, IEEE Electrical Insulation Magazine, vol 12, pp. 7-15, 1996

118

CITEE 2012

[5]

Kahar, Nes Yandri.,“Penelitian Tentang Epoksi Sikloalifatik Tuang (EST) Sebagai Bahan Isolasi Listrik Tegangan Tinggi Di Daerah Beriklim Tropis”, P.hD.Disertasi, Institut Teknologi Bandung,1998. [6] G.E., Mortiner, “Chemie, George Thieme Verlag, Stuttgart. 1983 [7] Sorqvist, U. Karlson, and A.E. Vlastos, “Surface ageing of polymerics insulator”, IEEE Trans. On Pow. Delivery, vol .5, pp. 406-414, 1995. [8] J.P., Reynders, I.R.., Jandrell, S.M., Reynders, ”Review of Agingand Recovery of Silicon Rubber Insulation for Outdoor Use”, IEEE Transaction on Dielectrics and Electrical Insulation, 6(5), 620-631, 1999. [9] S.H. Kim, S.H. E.A. Cherney, and R. Hackam, “Effects of Filler level in RTV silicon rubber coating used in HV Insulator”, IEEE Trans. On Pow. Delivery., 1992 [10] Jaya, A., Berahim, H., Tumiran, Rochmadi, “ The Hidrophobicity Improvement of High Voltage Insulator Based on EpoxyPolysiloxane an Rice Husk ash”, Proceedings of The International Conference on Electrical Engineering and Informatics, BandingIndonesia, Vol 2, 1231-1236, 2011. [11] E.A.,Cherney, R.,Hackam, and S.H.,Kim, “Porcelain Insulator Maintenance With RTV Silicone Rubber Coatings”, IEEE Trans. On Pow.Delivery., 6(3), 1177-1181,1991. [12] GuoxiangXu., and P.B., McGrath, “Electrical and Thermal Analysis of Polymeric Insulator under Contaminated Surface Condition”,IEEE Trans. on Dielectrics and Electr. Insul., 3(2), 289 – 298,1996. [13] C Burger, W.R Hertle, P.Kochs, F.H. Kreuzer, and H.R. Krichedorf, Silicone and Polymeric Synthesis, Hamburg, Germany : Springer,. 1996, [14] D. Kind, H.C. Kaerner, High Voltage Insulation Technology, Textbook for Electrical Engineers, Friedr Vieweg & Sohn, Braunschweig/Wiesbaden, 1985. [15] H. Onggo., “Proses And Sifat Campuran Abu Sekam Semen”, Telaah, vol. 9, pp. 27-38, 1986. [16] Priyosulistyo., Sumardi., Sudarmoko., Bambang Suhendro, and Bambang Supriadi, “Pemanfaatan Limbah Sekam Padi untuk Meningkatkan Mutu Beton, Teknik Sipil, UGM Yogyakarta, 2004. [17] Artificial Pollution Test on High-Voltage Insulator to be used on AC Systems, Second Edition, Geneva. IEC Std 507, 1991. [18] F. Obenaus, ”Fremdschict Uberschlang Und Kreischweglange”, Deutche Electrotechnik, Vol. 4, pp. 135-136.1985. [19] Y. Mizuno., H. Kusada., and Naito, K., “Effect of Climatis Conditions on Contaminntion Flashover Voltage of Insulators”,IEEE Trans. on Pow. Delivery., 10(3), 1378-1383. 1997. [20] High Voltage Test Techniques, Second Edition, IEC Std 601,1989. [21] G.G. Karady, Shah, M., and R.L.,Brown, “Flashover Mechanism of Silicon Rubber Insulators used for outdoor Insulation – I”, IEEE Trans. on Pow. Delivery., 10(4), 1965-1971, 1995.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Tracking Index of Epoxy Resin Insulator Abdul SYAKUR1*, Tumiran1, Hamzah Berahim1, Rochmadi2 1*

Doctoral Student at Dept. of Electrical Engineering & Information Technology, 1 Dept. of Electrical Engineering & Information Technology, 2 Dept. of Chemical Engineering, Gadjah Mada University Jln. Grafika 2 Yogyakarta, 55281, Indonesia

*e-mail : [email protected] Abstract — Polymeric insulating materials such as epoxy resin and silicon rubber have been developed and used in electrical power system to replace porcelain and glass insulator. The dielectric properties of polymeric materials are better than porcelain and glass and can be made at room temperature so it is economically more advantageous. Environmental factors such as ultraviolet radiation intensity, temperature, humidity, rain and pollution, especially in a tropical climate may affect the insulator performance. Degradation of the quality of insulating surfaces can be determined using the length tracking, leakage current and the depth of surface damage insulation. These parameters were compared to determine the index of tracking (Ti). In this work, the materials used are made of diglycidyl ether of bisphenol-A (DGEBA), metaphenylene diamine (MPDA), silane and silica as a filler. Dimension of the tested material is 120 mm x 50 mm x 5 mm. The method of measurement used Inclined-Plane Tracking (IPT) accordance with IEC 587:1984 standard with NH4Cl contaminants. Ultraviolet radiation was subjected to each test samples with variation time 24 hours up to 96 hours to know influence before and after subjected with UV to Tracking Index. The results of the research showed that the surface degradation most damage on the sample after sample was subjected UV with 96 hours. From this research can be concluded that the tracking index affected by ultraviolet radiation. Keywords: leakage current, contact angle, tracking index, uv radiation, contaminant.

I.

INTRODUCTION

Polymeric insulating materials such as epoxy resin and silicon rubber have been developed to replace porcelain and glasses insulator in electrical power system. They are have some good dielectric properties, light weight and compact, when compared to the porcelain or glass insulators. However, polymer outdoor insulator showed degradation due to climate stresses such as ultraviolet in sunlight, moisture, temperature, humidity and the other contaminants so that the surface discharge, tracking, and erosion can occur, and degradation may reduce the performance. This reduction is actually the result of chemical and physical changes taking place on the surface of polymer [1]. Epoxy resin is an important electrical insulating material. It is a thermo set polymer in which two components are mixed to eventually form a glassy product at room temperature. Epoxy resins are used in a large

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

number of fields including surface coatings, adhesives, in potting and encapsulation of electronic components, in tooling, for laminates in flooring and to a small extent in molding powders and in road surfacing Compared with the polyesters, epoxy resins generally have better mechanical properties and, using appropriate hardeners, better heat resistance and chemical resistance, in particular, resistance to alkali. The electrical properties of epoxy resins have a dielectric constant about 3.4 – 5.7, and a dielectric strength about 100 – 220 kV/cm. Power factor of epoxy resins are about 0.008 – 0.04 [2]. Outdoor environmental conditions vary over a wide range. Temperature affects the insulation properties of all materials as the conductivity increases with temperature. For polymers that are organic materials, ultraviolet radiation from sunlight can break certain chemical bond and cause cross linking on the surface, resulting in the surface degradation. In the presence of contamination, the surface resistance is reduce even more drastically therefore the presence of contaminants on the surface of the insulator and UV radiation become a serious problem. Generally non-ceramic insulators perform better than ceramics when new. However, due to aging of polymer housing, this relative difference can change along with time at a rate depending on the environment [3]. Epoxy resin is a hydrophilic material, therefore, in particular, in the tropical area; UV radiation, humidity and rainfall play an important role in accelerating of degradation process on the surface of the insulator[4]. Contamination layer will be formed on the surface of the insulator and it would spread on the surface. Leakage current increase, especially when the insulator surface wet caused by fog, dew or light rain. Leakage current initiates a process of heat conduction which occurs on the surface of an insulator and finally flashover or insulation breakdown would occur. To improve the performance of materials silicon rubber was compounded. Silicone rubber has the ability to resist water on the surface of the material so that water does not stick. The performance of silicone rubber is very good for insulation. Therefore, silicone rubber is compounded with epoxy resin to improve the performance of epoxy resin surface. Silica sand was filled with epoxy resin.. This paper presents investigation on tracking index of epoxy resin compound with silicon rubber and filled silica sand using Inclined-plane tracking method based on IEC 587:1984 with contaminants NH4Cl and 3.5 kV being applied. The tracking index was observed based on the different of UV radiation and filler concentration. The

119

ISSN: 2088-6578

Yogyakarta, 12 July 2012

influences of filler treatment on hydrophobic angles were also investigated. II.

FUNDAMENTAL THEORY

A. Hydrophobic Contact Angle Contact angle measurement on an insulating material was conducted to determine the surface properties of materials what is hydrophobic or hydrophilic. Hydrophobicity is a characteristic of insulating materials which in polluted conditions, the material is still able to resist water that falls onto the surface. Hydrophobic properties are useful for outdoor insulator because in wet or humid conditions, water continuous flowing between the tip - tip of an insulator will be not formed, and the surface conductivity of insulators will remain low, resulting in very small leakage current.

CITEE 2012

smooth and uniform surfaces. If the surface is rough or heterogeneous, then the system is in a state of unstable and the contact angle measurement is also unstable. In this case the contact angle depends not only on the surface tension but also depends on surface roughness and grain volume of water. Contact angle gives information about the surface energy, hardness, and surface heterogeneity. Moreover the contact angle is also the indication of contaminated surface.

a. Hydrophilic b. Hydrophobic Figure 3 Classification of contact angle

Figure 1 Contact angle measurement Contact angle is the angle formed between the surfaces of the test material with the surface of distilled water that dripped into the test material surface. This measurement uses 50 micro liter droplets of water that is dripped on the surface of an insulating material [4]. Profiles of water droplets were taken after two minutes the water has dripped on the surface of an insulating material. Profile of a drop of water was projected on the screen and the contact angle (180° - γ) could be determined as illustrated in Figure 1. The relationship between surface tension of solid materials, air (gas) and water are shown in Figure 2.

Based on Fig 3 above contact angles are classified into two groups. For angles smaller than 90o (Figure 3.a) then the material is called wet or hydrophilic, and the contact angle more than 90o (Figure 3.b) is called the hydrophobic or water-repellent. C. Leakage Current and Tracking Index Leakage current is the current that flows through the conductor to ground. In the absence of a grounding connection, it is the current that could flow from any conductive part or the surface of non-conductive parts to ground if a conductive path was available. There are always extraneous currents flowing in the safety ground conductor. If there is no insulation failure, interruption of the leakage currents flowing through the ground conductor could ignored. The leakage current flowing on the surface of the sample can cause damage. The rate of surface damage can be determined by the surface area damaged divided by the time duration the damage occurred. Damage to the sample surface was compared to other samples. It is proposed the ratio of surface damage area is called tracking index (Ti).

III. Figure 2 Illustration of interface tension and equilibrium contact angle. B. Contact Angle Classification The basic equation for the solid surface tension measurements, by measuring the contact angle is given by Young's equation: γSI = γSV – γLV. Cos θe .................1 Parameters of the Young equation, γSV ,γSI , and γLV are the interface tension of the solid / gas, solid / liquid and liquid / gas respectively, and θe is the equilibrium contact angle. A stable equilibrium is obtained by providing ideal

120

EXPERIMENTAL SET UP

A. Materials Preparation and Test Sample The test materials in this experiment used epoxy resins based on DGEBA and MPDA compound with silicon rubber (SiR) and filled with silica sand. The composition of materials DGEBA, MPDA, Silicon Rubber and Silica sand are 30%, 30%, 20% and 20% respectively. The dimension of test materials were 120 mm x 50 mm with a thickness of 5 mm. Test materials must be drilled to place electrodes as illustrated in Figure 4 as follow:

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

popularly known as Inclined-Plane Tracking (IPT) test method [5]. AC High voltage 50 Hz with a voltage of 3.5 kV was generated from 5 kVA transformer test. Resistor 22 k was used to resist the current flowing on the surface of the material in the event of discharge.

Figure 4 Dimension of test sample B. Electrodes Electrodes in this experimental, fixtures and assembly elements associated with the electrodes, such as screws, shall be made of stainless steel material. The electrode assembly is shown in Figure 5 (all dimension in mm). The top electrode is shown in Figure 5.a and the bottom electrode is shown in Figure 56.b.

(a) (b) (c) Figure 5 (a) Top, (b) Bottom (c) electrode assembly C. Contaminant and Filter paper Contaminant used had concentration of 0.1 ± 0.002 % by mass of NH4Cl (ammonium chloride) and its conductivity is 2170 S/cm. These contaminants were flowed on the surface of materials using a peristaltic pump. There were eight layers of filter-papers as a reservoir for the contaminant, which were clamped between the top electrode and the specimen. The approximate dimensions were given in Figure 6.

Figure 6 Filter paper with eight sheets

Figure 7 schematic diagrams for this test All the high voltage equipment was properly grounded for safety purpose. Surface tracking was monitored by measuring the surface leakage current (LC) that flows on the material surface. Peristaltic pump was used to drain the solution of contaminants with flow rate was 0.3 ml/min along the underside of the sample. In this test, the constant voltage method was used, and the time to start tracking was also determined. Leakage current will be read and recorded by Oscilloscope in time of discharge at the surface of the material. Measurement data in the form of leakage current, discharge current and discharge time of the first occurrence was then stored and used to analyze the surface condition, the effect of contaminants on the electrical tracking processes at the surface of insulation. E. Ultraviolet raditaion mechanism Influence of ultraviolet rays on epoxy resin insulator material can be determined by test the influence of ultraviolet rays. Radiation was performed in an artificial chamber with dimensions are 50 cm x 50 cm x 50 cm with a slope of 450 (ASTM 2303). The room is made of wood with aluminum foil coated with the side in order for it to ultraviolet rays radiated optimally and to prevent leakage of ultraviolet rays out of the box. Capacity of this box can contain a number of test samples up to 15 pieces of insulation. Ultraviolet light source in this study came from 4 (four) pieces of ultraviolet fluorescent lamp 15 Watt Philips brand to simulate the intensity of the ultraviolet radiation UV average 21.28 Watt/m2. For the form of ultraviolet light radiation on insulating samples more details can be seen in Figure 8.

D. Test Circuit The system for evaluating the surface tracking of solid polymer insulating materials is shown in Figure 7. The test is based on IEC 587:1984 standard method and

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

121

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

B. Ultraviolet radiation influenced Degradation rate Measurement of damage area from RiS Silica 20% was obtained that the effect of UV radiation significantly when the sample after subjected of UV during 96 hours with damage area 178 mm2. The damage of area before being subjected of UV is 149 mm2 as shown in Figure 8 as follow:

Figure 8 Chamber of UV radiation test IV.

RESULTS AND DISCUSSSION

A. Hydrophoobicity Contact Angle Contact angle measurement on an insulating material was conducted to determine the surface properties of materials (hydrophobic or hydrophilic). Hydrophobicity is a characteristic of insulating materials. In polluted conditions, the material is still able to resist water that falls onto the surface as shown in Figure 9. Hydrophobic properties are useful for outdoor insulator because in wet or humid conditions, water continuous flowing between the tip - tip of an insulator will be not formed, and the surface conductivity of insulators will remain low, resulting in very small leakage current.

Figure 11 Correlation duration of UV to wide damage While the lowest of damage area occurs after 72 hours is 130.67 mm2, It is due when the test sample is made there are some a void and making less solid, so less precision.

Figure 9 contact angle of test samples Measurements of angle contact were done on samples of epoxy resin compound with silicon rubber and filled with silica sand as shown in the following figure 9: Figure 12 Correlation between degradation rate to duration of UV

Figure 10 Measuring of contact angle using image pro. To understand contact angle from the test sample we measure contact angle using IPWIN32.exe. Based on figure 10 we know that the contact angle of sample is 87o. For angles smaller than 90o then the material is called wet or hydrophilic.

122

Based on Figure 12 we know that the degradation rate (Dr) attend to increase when the UV radiation more longer time. The calculation of degradation rate RIS Silica 20%, it was found that with the influence of UV radiation time on degradation rate in the sample test materials began to rise significantly, which at the time of UV radiation during 0 hour (without UV radiation) degradation rate is 0.02 mm2/s and degradation rate (Dr) become 0.05 mm2/s after the time of UV radiation during 96 hours. C. Tracking Index (Ti) of EpoxyResin In this paper we propose the terminology of Tracking Index (Ti). Tracking index can be obtained from ratio

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

between degradation rate after sample was subjected UV radiation to sample before subjected with UV. We can write into equation as follow: 𝑇𝑖 =

ISSN: 2088-6578

Table3. Results of tracking index and Quality Duration of UV (hours)

Composition (%)

Ti

Level Quality

24

5 10 15 20 25 5 10 15 20 25 5 10 15 20 25 5 10 15 20 25

2,87 9,11 5,62 6,63 7,10 7,18 19,34 17,68 10,93 8,33 19,28 30,58 19,80 9,65 9,83 13,21 18,01 36,53 26,71 13,22

Good Good Good Good Good Good Enough Enough Good Good Enough Worst Enough Good Good Enough Enough Worst Worst Enough

𝐷𝑟𝑡 𝐷𝑟𝑜

Where : 𝑇𝑖 = 𝐼𝑛𝑑𝑒𝑥 𝑜𝑓 𝑇𝑟𝑎𝑐𝑘𝑖𝑛𝑔 𝐷𝑟𝑡 = 𝐷𝑒𝑔𝑟𝑎𝑑𝑎𝑡𝑖𝑜𝑛 𝑟𝑎𝑡𝑒 𝑡𝑖𝑚𝑒 𝑡 𝐷𝑟0 = 𝐷𝑒𝑔𝑟𝑎𝑑𝑎𝑡𝑖𝑜𝑛 𝑟𝑎𝑡𝑒 𝑡𝑖𝑚𝑒 0

48

Degradation rate was defined as ratio between damage areas (mm2) with time to damage (s). 72

𝐷𝑎𝑚𝑎𝑔𝑒 𝐴𝑟𝑒𝑎 (𝑚𝑚2 ) 𝐷𝑟 = 𝑇𝑖𝑚𝑒 𝑡𝑜 𝑑𝑎𝑚𝑎𝑔𝑒 (𝑠𝑒𝑐𝑜𝑛𝑑) Based on data from damage areas of test sample and time to occur damage of sample we can calculate and define tracking index as shown in Table 1 as follow:

96

Table 1 Index of Tracking (Ti)

V.

From table 1 above we can classified tracking index onto 3 group related condition of sample surface before and after subjected with UV radiation. Table 2 Classification of tracking index Group Ti Level of Quality I 01 – 13 Good II 14 – 25 Enough III 26 – 37 Worst Table 2 above being used for determined the quality of surface based on tracking index value. Table 3 shows test sample of Epoxy Resin with silica filler composition varieties from 5% to 25% under UV radiation during 24 hours up to 96 hours. From table 3 we know that the test sample after subjected UV radiation during 24 hours still have good surface quality for all composition. But, after 24 hours under subjected of UV radiation almost test sample have enough and worst surface quality. It is find that tracking index tend to increase when the test samples under UV radiation. After it was subjected UV radiation 48 hours, test sample RISS with composition 10% and 15% decrease from good level become enough level. It is show that UV radiation influence of surface quality of sample test. Also, when the test sample RiSS with filler composition are 5% and 15% after was subjected UV radiation during 72 hours become enough level.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CONCLUSSION

Based on the discussion we can conclude that the test material is hydrophilic with contact angle is 87o. The duration of ultraviolet radiation caused damage area to the surface of the greater test sample. Degradation rate of the test materials faster in materials that have been subjected to UV as well as obtained the index tracking the material that has not been subjected to UV is much smaller than the index tracking the material that has been subjected to UV. Tracking index was affected by UV radiation. REFERENCES [1]

[2] [3] [4]

[5]

[6]

[7]

H. Berahim, K.T. Sirait, F. Soesianto, Tumiran, A new performance of RTV Epoxy Resin Insulation material in tropical climate. Proceedings of the 7th International Conference on Properties and Applications of Dielectric Materials, June 1-5, 2003, Nagoya .p. 607 J.A. Brydson, Plastic Materials. 4th edition, Butterworth Scientific, 1982 p.693-695 Gorur, R.S., E.A. Cherney, J.T. Burnharm, Outdoor Insulators. Gorur, Inc., Phoenix, 85044 USA, 1999. Berahim H., “Methodology to assess the performance of silane epoxy resin insulating polymer as high voltage insulator materials in the tropical areas”. Ph. D Dissertation at Department of Electrical Engineering, Gadjah Mada University, Indonesia , 2005 IEC 587, BS Test Method for Evaluating Resistance to Tracking and Erosion of Electrical Insulating Materials used under severe Ambient Conditions, British Standards Institution, 1984 Kumagai, S., and Yoshimura, N. “Tracking and Erosion of HTV Silicon Rubber and Suppression Mechanism of ATH”, IEEE Transaction On Dielectric and Electrical Insulation 8(2): pp 203211, 2001. Syakur, A., Berahim, H., Tumiran, Rochmadi, “Experimental Investigation on Electrical Tracking of Epoxy Resin Compound with Silicon Rubber” Journal of High Voltage Engineering, Vol. 37, No. 11, November 2011, China.

123

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

SEMICONDUCTING CNTFET BASED HALF ADDER/SUBTRACTOR USING REVERSIBLE GATES V.Saravanan1 , V.Kannan2 1 Research Scholar, Sathyabama University, Tamilnadu, India 2Principal, Jeppiaar Institute of Technology, Tamilnadu, India. 1

[email protected]

2

[email protected]

Abstract: Reversible gates plays more and more prominent role in the design of Low Power VLSI circuits, Quantum Computing, Nanotechnology and Optical Computing. In the conventional gates such as AND, OR and EX-OR the power dissipation is more since they are not reversible and the power dissipation is directly proportional to the number of bits erased during computation. This paper proposes the reversible gates can be used for logic circuit design and to reduce the power dissipation in the circuits. In this paper carbon nanotube field effect transistor is introduced for designing the various reversible logic gates and half adder/subtractor circuits. Three design techniques are used for designing the half adder/subtractor. Performance of each designs are compared in terms of no of gates used, average power, power delay product for both CMOS and CNTFET technology. All the simulations are done in HSPICE environment. Keywords: carbon nanotube, reversible gates, CNTFET, logic gates.

I.INTRODUCTION In the last few decades CMOS technology have been used for designing most of the digital circuits. In order to reduce the fabrication area required to implement the digital circuit on chip, the size of the MOS devices used in CMOS technology are scaled down. When the physical gate length of conventional device is reduced to below 90 nm the effects such as large parametric variations and increase in leakage current have caused the I-V characteristics to be substantially depart from the traditional MOSFETs, thus impeding the efficient development and manufacturing of devices at deep sub micro and Nano scales. CNTFETs have received widespread attention, as one of the promising technologies for replacing MOSFETs at the end of the Technology Roadmap [1][3].Reversible logic gates are used in emerging technologies such as quantum computing, optical computing, quantum dot cellular automata as well as ultra low power VLSI circuits. Each reversible gate is of different type and computational complexity, and thus will have a different quantum cost and delay. The computational complexity of a reversible gate can be represented by its quantum cost [4]-[6]. Technology scaling demands a decrease in both supply voltage and threshold voltage (VT) to sustain historical delay reduction, while restraining active power dissipation Scaling of VT however leads to substantial increase in the sub-threshold leakage power and is expected to become a considerable constituent of the

124

total dissipated power. In order to reduce the power dissipation [7]-[9]. Carbon nanotube field effect transistor (CNTFET) based reversible logic gates are proposed in this paper. The paper is organized as follows: Section 2 presents the carbon nanotube field effect transistor technology, Section 3 presents HSPICE model of the intrinsic channel of the CNTFET, Section 4 discusses the design of the logic gates using reversible logics, Section 5 discusses the design of half adder/subtractor using reversible gates in design I, design II, design III methods, section 6 presents the simulation results and discussion, conclusion are contained in section 7. II. CNTFET TECHNOLOGY: Carbon nanotube field effect transistor (CNTFET) are currently considered, one of the main building block for the replacement of MOSFET based CMOS technology. The core of a CNTFET is carbon nanotube. CNTs are the hollow cylinder which is formed from the graphene sheet. Fig(1) shows the structure of graphene sheet. Two atoms in the graphene sheet are chosen, one of which serve as origin. The sheet is rolled until the two atoms coincide. The vector pointing from the first atom towards the other is called the chiral vector and its length is equal to the circumference of the nanotube. Depending on their chiral vector, carbon nanotubes with a small diameter are either semi-conducting or metallic in nature.[10][11]. Fig (1) shows the structure of single walled carbon nanotube (SWCNT). The CNT may be either single walled or multi-walled. Carbon nanotube field effect transistors (CNTFETs) utilize semi conducting singlewall CNTs to assemble electronic devices. A single-wall carbon nanotube consists of one cylinder only, and the simple manufacturing process of this device makes It very promising alternative to today’s MOSFET.

Fig (1) structure of graphene and Single Walled CNT

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

An SWCNT can act as either a conductor or a semiconductor, depending on the angle of the atom arrangement along the tube. This is referred to as the chirality vector and is represented by equation (1), Fig (2) shows the schematic of the CNTFET C = na1 + ma2 (1)

ISSN: 2088-6578

various capacitaces used in model are calculated by equation, (5), (6), (7),(8),(9).[17].

Csg =

Lg  1 Ctot − 2 (1 − β ) Cc   Cox −  2  e ∂VG / ∂∆Φ B 

.

Cdg

(5)

Rd

G

D Cgs

Csg

Ids

Fig (2) Schematic diagram of a carbon nanotube transistor

The diameter of the CNT can be calculated based on the following equation [12]

DCNT =

a n12 + n1n2 + n2 2

π

S (2) Fig (3) HSPICE model of CNTFET

Similar to the MOSFET device, the CNTFET has also four terminals. The current-voltage (I-V) characteristics of the CNTFET are similar to MOSFET’s. The threshold voltage is defined as the voltage required to turn on transistor. The threshold voltage of the intrinsic CNT channel can be approximated to the first order, as the half band gap is an inverse function of the diameter and the equation for threshold voltage is given below [13] Vth ≈

Eg 2e

=

3 aVπ 3 eDCNT

(3) where a = 2.49 Å is the carbon to carbon atom distance, V = 3.033 eV is the carbon π-π bond energy in the tight π

bonding model, e is the unit electron charge, and D the CNT diameter [14]-[16]. As D

CNT

CNT

is

of a (19, 0) CNT is

1.487 nm, the threshold voltage of a CNTFET using (19, 0) CNTs as channels, is 0.293V, The device channel consists of a (19,0), zigzag CNT with a band gap of 0.53 eV and a diameter of 1.5 nm.[13]-[16].

Fig (2) shows the schematic of MOSFET like CNTFET used for designing the reversible gates and adder circuits. It’s HSPICE model is shown in fig (3), model consists of two main parts, current sources and capacitance networks. For semi conducting sub-bands electron current is only considered for the nFET, because the hole current is suppressed by the n-type heavily doped source, drain, and usually is negligible compared to the electron current. The current contributed by the semi conducting sub-bands is given by equation (4) L

km

kl

I D = 2∑∑  J m ,l ( 0, ∆Φ B ) − J m ,l (Vch , DS , ∆Φ B ) 

(4) J m,l ( ) is the current contributed by the substate (m,l).

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Cdg =

Cgs =

Lg  1 Ctot − 2β Cc   Cox −  2  e ∂VG / ∂∆Φ B 

(6)

Lg Cox [CQs + (1 − β ) Cc Ctot + CQs + CQd

4e 2 CQs = Lg ⋅ kT

 E −∆Φ / kT  e( m ,l B ) ∑∑  ( Em ,l −∆ΦB )/ kT km kl  1 + e  M

(7)

L

(

)

  2   (8)

 E −∆Φ +eV / kT 4e2 M L  e( m,l B ch,DS ) CQd = ∑∑ Lg ⋅ kT km kl  1 + e( Em,l −∆ΦB +eVch ,DS ) / kT 

(

III. HSPICE MODEL OF CNTFET:

M

Rs

)

  2   (9)

IV.DESIGN OF GATES USING REVERISIBLE LOGICS: In digital design energy loss is considered as an important performance parameter. Part of the energy dissipation is related to non-ideality of switches and materials. Higher levels of integration and new fabrication processes have dramatically reduced the heat loss over the last decades. The power dissipation in a circuit can be reduced by the use of Reversible logic. irreversible computations generates heat of K*Tln2 for every bit of information lost, where K is Boltzmann’s constant and T the absolute temperature at which the computation performed. Reversible computing was started when the basis of thermodynamics of information processing was shown that conventional

125

ISSN: 2088-6578

Yogyakarta, 12 July 2012

irreversible circuits unavoidably generate heat because of losses of information during the computation Reversible logic plays an important role in low power digital design due to their ability to reduce the power dissipation in the circuits. Reversible are circuits (gates) that have one-to-one mapping between vectors of inputs and outputs; thus the vector of input states can be always reconstructed from the vector of output states. Using traditional irreversible logic gates such as AND or multiplexer leads inevitably to energy dissipation in a circuit, regardless of the realization of the circuit. However, if the Moore’s Law will continue to be in effect, energy losses due to non-reversible design would become essential in 2020 or earlier. Moreover, quantum logic is reversible, and the problem of searching for efficient designs of quantum circuits includes as its subproblem the problem of synthesis using classical reversible gates, Most of the gates used in digital design are not reversible for example NAND, OR and EXOR gates. [18] The simplest Reversible gate is NOT gate and is a 1*1 gate. Controlled NOT (CNOT) gate is a 2*2 gate. There are many 3*3 Reversible gates such as F, TG, PG and TR gate. The Quantum Cost of 1*1 Reversible gates is zero, and Quantum Cost of 2*2 Reversible gates is one. Any Reversible gate is realized by using 1*1 NOT gates and 2*2 Reversible gates, such as V, V+ and FG gate which is also known as CNOT gate. The V and V+ Quantum gates have the property given in the Equations (9),(10),(11) V * V = NOT (9) V * V+ = V+ * V = I

CITEE 2012

Fig (5) Peres gate C. Toffoli Gate The 3*3 Reversible gate with three inputs and three outputs. The inputs (A, B, C) mapped to the outputs (P Q,R) is as shown in the Figure (6). Toffoli gate is one of the most popular Reversible gates and has Quantum Cost of 5. It requires 2V, 1 V+ and 2 CNOT gates.

Fig (6) Toffoli gate D. Fredkin Gate Reversible 3*3 gate maps inputs (A, B, C) to outputs (P, Q,R ) having Quantum cost of 5 , Fredkin gate is shown in Figure (7)

(10)

V+ * V+ = NOT (11) The Quantum Cost of a Reversible gate is calculated by counting the number of V, V+ and CNOT gates [19]. A. Feynman / CNOT Gate The Reversible 2*2 gate with Quantum Cost of one having mapping input (A,B) to output (P ,Q) is as shown in the Figure (4) Fig (7) Fredkin gate E. TR Gate The gate has 3 inputs and 3 outputs having inputs (A, B, C) mapped to the outputs (P, Q, R). TR gate is shown in Figure (8).

Fig (4) Feynman gate B. Peres Gate The three inputs and three outputs i.e., 3*3 reversible gate having inputs (A, B, C) mapping to outputs (P , Q, R ). Since it requires 2 V+, 1 V and 1 CNOT gate, it has the Quantum cost of 4. The Peres gate is shown in the Figure (5)

126

Figure (8) TR gate

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

V. Design of Half Adder/Subtractor A. DESIGN – I : Reversible half Adder/Subtractor–Design I is implemented with four Reversible gates of which two F and two FG gates is shown in the Figure 9. The numbers of Garbage outputs are three represented as g1 to g3, Garbage inputs are two represented by logical zero and Quantum Cost is twelve as it requires two FG gates each costing one and two F gates each costing five each.

Fig (9) Reversible half adder/subtractor - Design I

B. Design II: The reversible half adder/subtractor of Design II is implemented with three reversible gates, two FG gates with a Quantum Cost of one each and one TR gate with a Quantum Cost of six, as shown in Figure 10. It has two Garbage outputs, g1 and g2, one Garbage input denoted by logical zero, and a total Quantum Cost of eight.


VI. SIMULATED RESULTS AND DISCUSSIONS
The comparison of the half adder/subtractor in Design I, Design II and Design III using reversible gates, in terms of the number of gates, the number of Garbage inputs/outputs, the average power and the power-delay product, is shown in Table 1, Table 2 and Table 3 respectively. The simulated output waveform is shown in Fig. 12. The results show that Design III performs better than Designs II and I: in the CNTFET-based design the average power is 0.038 µW for Design III, compared with 0.04 µW for Design II and 9.7 µW for Design I. The power-delay product of Design III is 38 aJ, which is lower than that of Designs II and I. Design III requires only 3 reversible gates, compared with 3 for Design II and 4 for Design I, an improvement of 33.3% over Design I. The Garbage outputs are 3 for Design I and 2 for Designs II and III, and the Garbage inputs are 2 for Design I and 1 for Designs II and III, a reduction of one Garbage input in Designs II and III. The simulated output waveform of the adder/subtractor in Fig. 12 shows that the circuit acts as either an adder or a subtractor depending on the control input: when the control input is high the circuit acts as a half subtractor, and when the control input is zero the circuit acts as a half adder.

Fig (10) Reversible half adder/subtractor - Design II

C. Design III: The reversible half adder/subtractor of Design III is implemented with three reversible gates, two FG gates with a Quantum Cost of one each and one PG gate with a Quantum Cost of four, as shown in Figure 11. It has two Garbage outputs, g1 and g2, one Garbage input denoted by logical zero, and a Quantum Cost of six.

Fig (11) Reversible half adder/subtractor-Design-III
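A quick tally of the figures quoted for the three designs can be reproduced with the sketch below; the per-gate Quantum Costs (FG = 1, F = 5, PG = 4, TR = 6) are the values stated in the text above.

```python
# Gate quantum costs as stated in the text: FG (Feynman) = 1, F (Fredkin) = 5,
# PG (Peres) = 4, TR = 6.
GATE_COST = {"FG": 1, "F": 5, "PG": 4, "TR": 6}

designs = {
    "Design I":   {"gates": ["FG", "FG", "F", "F"], "garbage_in": 2, "garbage_out": 3},
    "Design II":  {"gates": ["FG", "FG", "TR"],     "garbage_in": 1, "garbage_out": 2},
    "Design III": {"gates": ["FG", "FG", "PG"],     "garbage_in": 1, "garbage_out": 2},
}

for name, d in designs.items():
    qc = sum(GATE_COST[g] for g in d["gates"])
    print(f"{name}: gates={len(d['gates'])}, quantum cost={qc}, "
          f"garbage in/out={d['garbage_in']}/{d['garbage_out']}")
# Expected quantum costs: 12, 8 and 6, matching the figures quoted above.
```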


Fig (12) Simulated results of the reversible half adder/subtractor

Table 1. Comparison of Reversible Half Adder/Subtractor Using Design I

DESIGN - I                 CMOS      CNTFET
No. of Reversible Gates    4         4
Garbage Outputs            3         3
Garbage Inputs             2         2
Average Power (µW)         21.2      9.7
Power Delay Product        1.05 aJ   49 aJ


Table 2. Comparison of Reversible Half Adder/Subtractor Using Design II

DESIGN - II                CMOS      CNTFET
No. of Reversible Gates    3         3
Garbage Outputs            2         2
Garbage Inputs             1         1
Average Power (µW)         195       0.04
Power Delay Product        990 aJ    39 aJ

Table 3. Comparison of Reversible Half Adder/Subtractor Using Design III

DESIGN - III               CMOS      CNTFET
No. of Reversible Gates    3         3
Garbage Outputs            2         2
Garbage Inputs             1         1
Average Power (µW)         74        0.038
Power Delay Product        370 aJ    38 aJ

VII. CONCLUSION
In this paper, carbon nanotube field-effect transistors (CNTFETs) are used to design reversible gates, and half adder/subtractor units are designed from these reversible gates. Designs I, II and III of the half adder/subtractor are implemented and compared. The Design III implementation performs better than Design I, Design II and the existing design in terms of the number of gates, Garbage inputs/outputs, average power and power-delay product, and hence can be used for low-power applications. CNTFETs are proposed here to achieve high-speed operation with low power; since the threshold voltage of a CNTFET can be controlled simply by changing the chirality vector of the CNTs, digital circuits can be designed for the required threshold voltage. In future work the design can be extended to a full adder/subtractor unit and to low-power reversible ALUs, multipliers and dividers. The performance of the proposed design is better than that of the CMOS-based design.

REFERENCES
[1] Kyung Ki Kim, "Hybrid CMOS and CNFET Power Gating in Ultralow Voltage Design," IEEE Transactions on Nanotechnology, vol. 10, no. 6, November 2011.
[2] Arijit Raychowdhury, Ali Keshavarzi, Juanita Kurtin, Vivek De, and Kaushik Roy, "Carbon Nanotube Field-Effect Transistors for High-Performance Digital Circuits - DC Analysis and Modeling Toward Optimum Transistor Structure," IEEE Transactions on Electron Devices, vol. 53, no. 11, November 2006.
[3] A. Rahman, Jing Guo, S. Datta, M. S. Lundstrom, "Theory of ballistic nanotransistors," IEEE Transactions on Electron Devices, vol. 50, no. 10, pp. 1853-1864, Sept. 2003.


[4] R. Landauer, "Irreversibility and Heat Generation in the Computing Process," IBM Journal of Research and Development, vol. 5, pp. 183-191, 1961.
[5] C. H. Bennett, "Logical Reversibility of Computation," IBM Journal of Research and Development, vol. 17, pp. 525-532, 1973.
[6] V. V. Shende, A. K. Prasad, I. L. Markov, and J. P. Hayes, "Reversible Logic Circuit Synthesis," in ICCAD, San Jose, California, USA, pp. 125-132, 2002.
[7] J. Wildoer, L. Venema, A. Rinzler, R. Smalley, and C. Dekker, "Electronic structure of atomically resolved carbon nanotubes," Nature, vol. 391, pp. 59-62, January 1998.
[8] A. Bachtold, P. Hadley, T. Nakanishi, C. Dekker, "Logic Circuits with Carbon Nanotube Transistors," Science, vol. 294, no. 9, pp. 1317-1320, November 2001.
[9] Z. Chen, J. Appenzeller, Y.-M. Lin, J. Sippel-Oakley, A. G. Rinzler, J. Tang, S. J. Wind, P. M. Solomon, P. Avouris, "An Integrated Logic Circuit Assembled on a Single Carbon Nanotube," Science, vol. 311, no. 5768, p. 1735, March 2006.
[10] J. Appenzeller, "Carbon Nanotubes for High-Performance Electronics - Progress and Prospect," Proceedings of the IEEE, vol. 96, no. 2, pp. 201-211, Feb. 2008.
[11] A. Raychowdhury, S. Mukhopadhyay, and K. Roy, "A circuit compatible model of ballistic carbon nanotube FETs," IEEE Transactions on CAD, vol. 23, pp. 1411-1420, 2004.
[12] S. J. Wind, J. Appenzeller, Ph. Avouris, "Lateral scaling in carbon-nanotube field-effect transistors," Phys. Rev. Lett., vol. 91, pp. 058301-1 - 058301-4, Aug. 2003.
[13] R. Sordan, K. Balasubramanian, M. Burghard, K. Kern, "Exclusive-OR gate with a single carbon nanotube," Appl. Phys. Lett., vol. 88, 053119, 2006.
[14] R. Saito, G. Dresselhaus, and M. S. Dresselhaus, "Physical Properties of Carbon Nanotubes," Imperial College Press, London, 1998.
[15] M. Miller and G. W. Dueck, "Spectral Techniques for Reversible Logic Synthesis," in 6th International Symposium on Representations and Methodology of Future Computing Technologies, pp. 56-62, 2003.
[16] Md. Saiful Islam, "BSSSN: Bit String Swapping Sorting Network for Reversible Logic Synthesis," Journal of Computer Science, vol. 1, no. 1, pp. 94-99, 2007.
[17] Jie Deng and H.-S. Philip Wong, "A Compact SPICE Model for Carbon-Nanotube Field-Effect Transistors Including Nonidealities and Its Application," IEEE Transactions on Electron Devices, vol. 54, no. 12, December 2007.
[18] Rangaraju H. G., "Low Power Reversible Parallel Binary Adder/Subtractor," International Journal of VLSI Design & Communication Systems (VLSICS), vol. 1, no. 3, September 2010.
[19] A. Peres, "Reversible Logic and Quantum Computers," Phys. Rev., pp. 3266-3276, 1985.

V. Saravanan was born in Vellore, Tamil Nadu, India, in 1981. He received his bachelor degree in Electronics and Communication Engineering from Ganadhipathy Tulsi's Engineering College, Vellore, Tamil Nadu, in 2004, and his M.E. in Applied Electronics from Sathyabama University, Chennai, in 2007. He is presently a research scholar in the Department of Electronics and Communication Engineering, Sathyabama University. He has 8 years of teaching experience and is a life member of ISTE.

Dr. V. Kannan was born in Ariyalore, Tamil Nadu, India, in 1970. He received his Bachelor Degree in Electronics and Communication Engineering from Madurai Kamarajar University in 1991, his Masters Degree in Electronics and Control from BITS, Pilani, in 1996, and his Ph.D. from Sathyabama University, Chennai, in 2006.
His research interests are Optoelectronic Devices, VLSI Design, Nano Electronics, Digital Signal Processing and Image Processing. He has 130 research publications in national and international journals and conferences to his credit. He has 20 years of teaching experience and is presently working as the Principal of Jeppiaar Institute of Technology, Chennai, India. He is a life member of ISTE.


Preliminary Design for Teleoperation Systems under Nonholonomic Constraints

Adha I. Cahyadi 1,2, Rubiyah Yusof 1, Bambang Sutopo 2, Marzuki Khalid 1 and Yoshio Yamamoto 3

1 Center of Artificial Intelligence and Robotics (CAIRO), International Campus, Universiti Teknologi Malaysia, Jalan Semarak, Kuala Lumpur, Malaysia 54100. Tel: +60-3-2615-4456; E-mail: [email protected]

2 Department of Electrical Engineering and Information Technology, Faculty of Engineering, Universitas Gadjah Mada, Jalan Grafika 2, Yogyakarta. Email: [email protected]

3 Department of Precision Engineering, School of Engineering, Tokai University, 1117 Kitakaname, Hiratsuka, Kanagawa, Japan. Email: [email protected]

Abstract— In this paper we focus on the nature of teleoperation systems based on constrained, non-holonomic manipulators. In the first stage we derive the model of the manipulator both with and without constraints. We then propose a controller based on feedback linearization. So far no verification has been carried out, either by simulation or by experiment; it should be available in the next submission.

Fig. 1. Illustration of a teleoperation system

I. INTRODUCTION
Teleoperation has received a great deal of attention, especially for applications in hazardous environments or tasks that are almost impossible for human beings to carry out directly: handling radioactive material in nuclear power plants [7], [17], working in hazardous areas of the mining industry [4] or in space robotics [1], enhancing perception in virtual reality [12] or in unmanned vehicle operations [15], performing complex and precise jobs in medical applications [11], in cell/micro-organism manipulation [8], in the semiconductor industry [20], and so on. In a common setting, shown in Fig. 1, the operator exerts a force on the master manipulator, which in turn produces a displacement or velocity that is transmitted to the slave side as the command. In order to sense the manipulated object, some information has to be returned from the slave side to the operator side; this information may be distance, velocity or force measurements, or a combination of them. By sending this information back to the master side, the human operator is able to feel what happens in the environment, for example through tactile sensations. However, this may cause instability in the system if the model of the environment is


not exactly known or if delays are present in the communication channel. These problems have been the main challenges faced by researchers for many years. Another problem is how to give the operator the feeling of what happens in the remote environment, which is known as transparency. As stated by many researchers, stability and transparency are usually conflicting [10], and many works fail to compromise between these two requirements [5]. Two survey papers [2], [5] state that over more than 50 years of history, the goals of teleoperation systems can usually be boiled down to two categories: stability of the closed-loop system and the sense of telepresence known as transparency. Unfortunately, in most works researchers emphasize the stability issue, as in [6], [9], [18], [21], rather than giving equal treatment to transparency, or conversely emphasize transparency over stability, as in [9], [14], [23]. This is because of the so-called conflicting nature of stability and transparency. One famous example of this conflicting behavior is the use of wave variables, which leads to wave reflection (see [9], [21]). Some efforts have been made, but they do not really resolve this conflict.


Therefore, in this work a new setting for teleoperation systems is proposed. Using this framework, in order to achieve stability, transparency and performance, the controller is divided into two types: the no-contact type and the contact type. We will show that with this framework we can design for stability and for transparency separately.

II. DYNAMICS, OBJECTIVE AND PERFORMANCES

A. Dynamic equation and performance
As shown in Fig. 1, there are four elements in a teleoperation system: the human operator, the master manipulator, the slave manipulator and the interaction with the environment. A manipulator is usually modelled as a nonlinear system, as in [16] and [19]:

D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = F + u,   (1)

where q \in R^n is the generalized coordinate vector and F \in R^n collects the forces from the human operator or from the interaction with the environment. As proposed by Sirouspour and Shahdi [19], using an adaptive controller (and regardless of its additional complexity and stability issues), the above nonlinear equation can be transformed into the linear form

m_m \ddot{x}_m + b_m \dot{x}_m + k_m x_m = u_m + f_h,   (2)
m_s \ddot{x}_s + b_s \dot{x}_s + k_s x_s = u_s - f_s,   (3)

where the subscripts m and s denote the master and the slave respectively, u is the control signal given to the actuator of the manipulator, and f_h and f_s stand for the forces exerted by the human operator and by the environment. Therefore, we can use either (1) or (2)-(3), depending on our interest, despite some critiques of the linear model concerning robustness and implementation difficulty. In teleoperation systems researchers are usually interested in the following objectives:
1) Force tracking: the master manipulator should be able to track the interaction force transmitted from the slave side.
2) Position tracking: the slave manipulator should be able to track the position of the master manipulator.
Both objectives together are usually called transparency [?]. It is usually very hard to achieve perfect transparency; moreover, in many cases transparency conflicts with the stability of the closed-loop system. Because of this difficulty, transparency is often translated into impedance matching, as in [?], [3], [23]. For the nonlinear case, however, the definition of impedance becomes blurred.

B. Constrained dynamics equation
As mentioned before, despite the many applications, the first objective alone is often not enough. For instance, when teleoperation is used for surgery, most of the time the tool has to be moved in one direction while pushing in another direction to cut tissue, glands and so on. In this case the teleoperation system has to perform position and force control at the same time. Unfortunately, there are few if any works in the literature dealing with this situation, perhaps because many researchers use a very simplified linear model that loses the geometry of the manipulator. From the robotics point of view, this situation is usually treated as a constrained-manipulator problem [13]. Therefore, the model of the manipulator, whether master or slave, needs to be revised further. To remain general, we assume that the manipulator is subject to both holonomic and nonholonomic constraints. Let us derive the constraint equations first, followed by the equations of motion. In this paper we generalize the approach for mobile robot control proposed by Yamamoto and Yun [?]. In general, the slave manipulator constraints can be written as the Pfaffian constraint

A(q)\dot{q} = 0.   (4)

Using the Frobenius theorem, we can distinguish the types of constraints. Assuming that some of the constraints are nonholonomic while the others are holonomic, the nonholonomic constraints can be written as

A_r(q)\dot{q} = 0.   (5)

Under these constraints, and redefining q = [x_0, y_0, \theta_r, \theta_l]^T, we can finally find the equations of motion, represented as

M(q)\ddot{q} + C(q,\dot{q}) + G(q) = E\tau - A_r^T(q)\lambda,   (6)

where the inertia matrix M(q), the Coriolis and centrifugal matrix C(q,\dot{q}), the gravity matrix G(q), the input matrix E and the parameters inside them are omitted to save space. In the above equation, \lambda is the Lagrange multiplier, which has to be found by solving the Lagrange-d'Alembert equations.

C. State space realization
Due to the holonomic constraint, the nonholonomic matrix S can be reduced to S_r = [s_1(q), s_2(q)], whose columns span the null space of A_r(q). From the constraint equation (5), the configuration-space velocity has to lie in the span of the columns of S_r, i.e. \dot{q} \in span{s_1(q), s_2(q)}. Therefore, there exists a smooth vector \eta = [\eta_1, \eta_2]^T such that \dot{q} = S_r(q)\eta; differentiating and simplifying gives

\ddot{q} = S_r(q)\dot{\eta} + \dot{S}_r(q)\eta.   (7)

Premultiplying by S_r^T, the motion equation (6) takes the standard form

S_r^T M S_r \dot{\eta} + S_r^T M \dot{S}_r \eta + S_r^T C = \tau.   (8)
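As a numerical illustration of the reduction in (6)-(8), the sketch below builds S_r as a basis of the null space of A_r(q) for a differential-drive (unicycle-like) configuration q = [x0, y0, theta]^T and projects a toy dynamics onto it. The inertia, Coriolis and input terms are simplified placeholders, not the paper's model.

```python
import numpy as np

# Toy configuration q = [x0, y0, theta] with the rolling-without-slipping
# constraint A_r(q) q_dot = 0, A_r = [sin(theta), -cos(theta), 0].
theta = 0.3
A_r = np.array([[np.sin(theta), -np.cos(theta), 0.0]])

# Columns of S_r span the null space of A_r, so q_dot = S_r @ eta (eq. (7)).
S_r = np.array([[np.cos(theta), 0.0],
                [np.sin(theta), 0.0],
                [0.0,           1.0]])
assert np.allclose(A_r @ S_r, 0.0)

# Placeholder inertia and velocity terms (the paper omits its M(q), C, E as well).
M = np.diag([10.0, 10.0, 2.0])       # mass/inertia
C = np.zeros(3)                      # Coriolis/centrifugal terms (zero in this snapshot)
S_r_dot = np.zeros_like(S_r)         # d/dt S_r, zero for a constant-theta snapshot
eta = np.array([0.5, 0.1])           # reduced velocity [forward speed, turn rate]
tau = np.array([1.0, 0.2])           # reduced input

# Reduced dynamics (eq. (8)): (S^T M S) eta_dot + S^T M S_dot eta + S^T C = tau
M_red = S_r.T @ M @ S_r
eta_dot = np.linalg.solve(M_red, tau - S_r.T @ M @ S_r_dot @ eta - S_r.T @ C)
print("reduced inertia:\n", M_red, "\neta_dot:", eta_dot)
```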


Finally, the state vector of the manipulator can be formed by combining q with \eta to define x; using (6) and (7) we obtain

\dot{x} = [\dot{q}; \dot{\eta}] = f(x) + g(x)\tau,   (9)

where

f(x) = [ S\eta ; -(S^T M S)^{-1}(S^T M \dot{S}\eta + S^T C) ]   and   g(x) = [ 0 ; (S^T M S)^{-1} ].

D. Dynamics of manipulator and interaction modelling
Compared to others, [Zhu and Salcudean, 2000] give a good way to represent the environment. We simplify it using a mass-spring-damper system as

f_s = \delta_{nc} ( \delta_r ( m_e \ddot{x}_s + k_e x_s + f_e^* ) + (1 - \delta_r) f_e ),   (10)

where \delta_{nc} = 1 (in contact) or 0 (free motion), and \delta_r = 1 (rigid contact) or 0 (free motion). In this model, \delta_r = 0 describes a flexible (or soft) environment, defined as one whose impedance is significantly lower than that of the manipulator arm, such as tissue, a soft spring or a sponge.

III. TELEOPERATION DESIGN
In this section we assume that the master manipulator output is available for measurement. Moreover, in this work we assume that there is no time delay between the master and the slave side, so the information from both sides can be transferred instantaneously. In order to fulfil the design objectives, the controller is designed for the following conditions:
1) No contact: the teleoperation system performs tracking control, i.e. lim_{t->inf} y_s(t) - y_m(t) = 0.
2) Contact with a soft environment: the teleoperation system performs active force control, i.e. lim_{t->inf} y_s(t) - y_m(t) = 0 and lim_{t->inf} f_e(t) - f_m(t) = 0. To save space, the design of this controller is skipped.
3) Constrained: the slave manipulator is constrained, either in contact with hard objects, under a workspace constraint, or under other constraints, i.e. its motion is restricted by (5).

A. Tracking control design
In this case, from Figure 1, the relationship between the signals is simple; moreover, even if there is delay in the communication channel it will not affect the stability of the system. Here we use the unconstrained model (1) for both the master and the slave manipulators. Let us define the tracking error as y_e = y_s - y_m. By assuming that x_m = y_m and x_s = y_s, we can use a simple servo control scheme to stabilize the system. The simplest controller could be a proportional or PID controller:

D_e(q_e)\ddot{q}_e + C(q_e,\dot{q}_e)\dot{q}_e + G(q_e) = F_s - F + u,   (11)
u = K_p y_e + K_d \dot{y}_e + K_i y_i,   (12)
\dot{y}_i = y_e.   (13)

B. Nonholonomic control design
Under the constraints, the design of the controller can be extremely sophisticated. Therefore, we rely on a feedback linearization scheme. In this paper we follow Yamamoto and Yun [22] closely, but from a more general perspective. Since the slave side is physically constrained, we only need to make the master side virtually constrained, and the controller for the master side is proposed as follows. First, to simplify the discussion, the following state feedback is applied (see Fig. 2):

\tau = \alpha^1(x) + \beta^1(x)\mu = (S^T M \dot{S}\eta + S^T V) + (S^T M S)\mu,

where \mu is a new control input, in order to get

\dot{x} = f^1(x) + g^1(x)\mu,   (14)

where f^1(x) = [S\eta, 0]^T and g^1(x) = [0, I_{2x2}]^T.

Fig. 2. Extraction of \dot{x} = f^1(x) + g^1(x)\mu

It is straightforward to see that (14) is not input-to-state linearizable by smooth state feedback, because it is not involutive; however, it may be input-output linearizable. Yamamoto and Yun [?] proved that controlling the center point P_0 is still not possible by static output feedback, but when a reference point in front of the mobile robot is used, input-output linearization becomes possible. Nevertheless, the internal dynamics become unstable when the robot moves backward; this can be coped with by choosing a suitable path planning.
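Continuing the toy example of Section II-C, the state feedback tau = alpha^1(x) + beta^1(x) mu can be applied to the reduced model to obtain the partially linearized form (14). As before, the matrices below are simplified placeholders rather than the paper's model.

```python
import numpy as np

# Reuse the toy quantities from the earlier sketch: S, M, eta and a desired mu.
theta = 0.3
S = np.array([[np.cos(theta), 0.0],
              [np.sin(theta), 0.0],
              [0.0,           1.0]])
M = np.diag([10.0, 10.0, 2.0])
S_dot = np.zeros_like(S)            # snapshot with constant theta
V = np.zeros(3)                     # velocity-dependent terms (zero here)
eta = np.array([0.5, 0.1])
mu = np.array([0.3, -0.2])          # new control input of eq. (14)

# State feedback of Section III-B: tau = (S^T M S_dot eta + S^T V) + (S^T M S) mu
tau = S.T @ M @ S_dot @ eta + S.T @ V + (S.T @ M @ S) @ mu

# With this tau, the reduced dynamics give eta_dot = mu, i.e. the chain in (14) holds.
M_red = S.T @ M @ S
eta_dot = np.linalg.solve(M_red, tau - S.T @ M @ S_dot @ eta - S.T @ V)
print("eta_dot:", eta_dot, " equals mu:", mu)
```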


Let us choose the output equation as the XY coordinates of an appropriate reference point P_r:

y = h(x) = [ x_0 + L cos\phi ; y_0 + L sin\phi ] = [ x_0 + x_r cos\phi - y_r sin\phi ; y_0 + x_r sin\phi + y_r cos\phi ].   (15)

To find the linearization law between the control input and the output, we differentiate the output (15) with respect to time until the input law \mu appears. After the second time derivative of y, the input law appears as

\ddot{y} = \Phi(x)\dot{\eta} + \dot{\Phi}\eta = \Phi(x)\mu + \dot{\Phi}\eta,   (16)

where \Phi(x) acts as a decoupling matrix for (15), described as

\Phi(x) = (r/2) [ cos\phi - Lc sin\phi    cos\phi + Lc sin\phi ;
                  sin\phi + Lc cos\phi    sin\phi - Lc cos\phi ],   (17)

where L is the distance between the reference point P_r and the center of the mobile robot, and c = r/(2b). Therefore, the input linearization law is given by

\mu = \Phi^{-1}(x) ( u - \dot{\Phi}(x)\eta ).   (18)

From (16) and (18) we obtain

\ddot{y} = u,   u = [u_1, u_2]^T.   (19)

Fig. 3. Output linearization scheme

Remark 3.1: The feedback linearization scheme will prevent the operator from moving in such a way as to violate the constraints. In teleoperation systems it is sometimes required that both the master and the slave side be passive; therefore, on the slave side the new input u is chosen such that the system mimics a spring-mass-damper system, which is clearly a passive system. Equation (19) is our new linear relation between the control input and the output. Moreover, to stabilize the output y, the following control is used:

u_s = \ddot{\hat{y}}_s + k_{Ds}(\dot{\hat{y}}_s - \dot{y}_m) + k_{Ps}(\hat{y}_s - y_m),   (20)

where \ddot{\hat{y}}, \dot{\hat{y}} and \hat{y} are the targets transmitted from the master side.
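The input-output linearization in (15)-(19) can be exercised numerically as sketched below. The decoupling matrix uses the entries reconstructed in (17), and the wheel radius r, half axle length b and look-ahead distance L are arbitrary test values, not parameters from the paper.

```python
import numpy as np

r, b, L = 0.05, 0.20, 0.30          # wheel radius, half axle length, look-ahead (test values)
c = r / (2 * b)

def Phi(phi):
    """Decoupling matrix of eq. (17) for heading angle phi (as reconstructed above)."""
    return (r / 2) * np.array([
        [np.cos(phi) - L * c * np.sin(phi), np.cos(phi) + L * c * np.sin(phi)],
        [np.sin(phi) + L * c * np.cos(phi), np.sin(phi) - L * c * np.cos(phi)],
    ])

phi, phi_dot = 0.4, 0.1
eta = np.array([1.0, 0.8])           # wheel rates
u = np.array([0.2, -0.1])            # desired output acceleration (e.g. from the PD law (20))

# Phi_dot * eta term, obtained by differentiating Phi numerically along phi_dot
dphi = 1e-6
Phi_dot = (Phi(phi + dphi) - Phi(phi)) / dphi * phi_dot

# Input linearization law (18): mu = Phi^{-1} (u - Phi_dot * eta), valid while L != 0
mu = np.linalg.solve(Phi(phi), u - Phi_dot @ eta)
print("det(Phi):", np.linalg.det(Phi(phi)), " mu:", mu)
```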


It should be noted from (18) that the decoupling matrix has to be invertible, i.e. \Phi has to be a non-singular matrix. From (17) we obtain det(\Phi) = -(r^2/2b) L; therefore, for invertibility, L must never be zero. As this could be a major defect, we have to design a scenario that prevents L from becoming zero. This will be treated in the next section.

IV. CONCLUSION
Although teleoperation has been used in many applications for more than five decades, researchers in teleoperation systems have rarely taken constraints, especially non-holonomic constraints, into account. Therefore, we have proposed a teleoperation setting that accounts for these constraints. In this paper, after designing a simple scheme for tracking control, the control scheme under constraints is handled by feedback linearization. In this case we only need to make sure that the master side never exits the acceptable region of motion, by means of so-called virtual constraints. Unfortunately, no experimental or simulation results are included yet; they will be produced after acceptance of the paper.

REFERENCES
[1] J. Albus. NASA/NBS standard reference model for telerobot control system architecture (NASREM). Technical report, NASA, Gaithersburg, Maryland, 1987. Technical Note 1235, NIST.
[2] P. Archara and C. Melchiori. Control schemes for teleoperation with time delay. Robotics and Autonomous Systems, 38(5):49-64, 2002.
[3] J. Edward Colgate. Power and impedance scaling. In Proceedings of the 1991 IEEE International Conference on Robotics and Automation, pages 2292-2297, California, USA, 1991.
[4] D. Hainsworth. Teleoperation user interfaces for mining robotics. Autonomous Robots, 11(1), 2001.
[5] Peter F. Hokayem and Mark W. Spong. Bilateral teleoperation: An historical survey. Automatica, 42:2035-2057, 2006.
[6] Chang-Chun Hua and Peter X. Liu. Convergence analysis of teleoperation systems with unsymmetric time delays. IEEE Transactions on Circuits and Systems, 56(3):240-244, 2009.
[7] Chetan Kapoor and Delbert Tesar. Integrated teleoperation and automation for nuclear facility cleanup. Industrial Robot, 33:469-484, 2006.
[8] A. Kortschack, A. Shirinov, T. Truper, and S. Fatikow. Development of mobile versatile nanohandling microrobots: design, driving principles, haptic control. Robotica, 23:419-434, 2005.
[9] D. A. Lawrence. Stability and transparency in bilateral teleoperation. IEEE Transactions on Robotics and Automation, 9(5):625-637, 1992.
[10] D. A. Lawrence. Stability and transparency in bilateral telemanipulation. IEEE Transactions on Robotics and Automation, 9(5):624-637, 1993.
[11] A. J. Madhani, G. Niemeyer, and J. K. Salisbury Jr. The Black Falcon: A teleoperated surgical instrument for minimally invasive surgery. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 2, pages 936-944, 1998.
[12] P. Milgram, S. Zhai, and D. Drascic. Applications of augmented reality for human-robot communication. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, volume 3, pages 1467-1472, Yokohama, 1993.
[13] Richard M. Murray, Zexiang Li, and S. Shankar Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1st edition, March 1994.
[14] G. Niemeyer and J. J. E. Slotine. Stable adaptive teleoperation. IEEE Journal of Oceanic Engineering, 16(1):152-162, 1991.
[15] J. Parrish, D. Akin, and G. Gefke. The Ranger telerobotic shuttle experiment: implications for operational EVA/robotic cooperation. In Proceedings of the 30th International Conference on Environmental Systems, Toulouse, France, 2000.
[16] S. Salcudean. Control for teleoperation and haptic interfaces. In Control Problems in Robotics and Automation, LNCIS 230, pages 51-66. Springer, New York, 1998.


[17] Stephen Sanders. Remote operations for fusion using teleoperation. Industrial Robot, 33:174-177, 2006.
[18] C. Secchi, S. Stramigioli, and C. Fantuzzi. Dealing with unreliabilities in digital passive geometric telemanipulation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 3, pages 2823-2828, 2003.
[19] Shahin Sirouspour and Ali Shahdi. Model predictive control for transparent teleoperation under communication time delay. IEEE Transactions on Robotics, 22(6):1131-1145, December 2006.
[20] M. Sitti and H. Hashimoto. Teleoperated touch feedback from the surfaces at the nanoscale: Modeling and experiments. IEEE/ASME Transactions on Mechatronics, 8(2):287-298, 2003.
[21] Stefano Stramigioli. Sampled data systems passivity and discrete port-Hamiltonian systems. IEEE Transactions on Robotics, 21(4):574-587, 2005.
[22] Y. Yamamoto, R. Konishi, Y. Negishi, and T. Kawakami. Prototyping ubiquitous micro-manipulation system. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pages 709-714, 2003.
[23] Y. Yokokohji and T. Yoshikawa. Bilateral control of master-slave manipulators for ideal kinesthetic coupling: formulation and experiment. IEEE Transactions on Robotics and Automation, 10:605-619, 1994.


PACS Performance Analysis in Hospital Radiology Unit

Sugeng Riyadi a,b, Indra Riyanto b

a Tawada Healthcare, Rukan Permata Senayan A18-19, Jakarta, Indonesia
b Department of Electrical Engineering, Faculty of Engineering, Budi Luhur University

e-mail: [email protected]

Abstract—Medical imaging science for healthcare continues to grow and develop along with information technology. Medical imaging in the radiology department plays an important role in good patient service, for disease diagnostics and treatment, and its precision and accuracy are also relied on by other diagnostic and treatment departments. The radiology department is supported by a Picture Archiving and Communication System (PACS), an integrated system with many functions, such as image storage, archiving, access and manipulation. The data format used in PACS is Digital Imaging and Communications in Medicine (DICOM), the standard for data storage, archiving, printing and information transmission in healthcare. PACS technology in the radiology department can reduce or eliminate delay, which is a major problem in diagnostics, since medical diagnosis requires accuracy and precision. PACS implementations are also being developed as part of teleradiology in Indonesia, as an alternative way to improve public healthcare services and future medical science. Teleradiology allows physicians at different sites to confer, discuss and consult on medical diagnoses, so that several physicians can access the same patient's medical images at the same time. Patient service becomes smarter using PACS technology in medical imaging.
Keywords—PACS, DICOM, medical digital imaging, teleradiology

I. INTRODUCTION

The radiology unit of a hospital is in charge of making medical diagnoses of patients, so that medical treatment can then be carried out in accordance with the results of the radiological diagnosis. The process begins with the patient coming to the radiology unit; at registration, the patient is processed administratively for medical records, filing requirements and payments. After registration is completed, the medical diagnosis is made with the equipment appropriate to the doctor's request. The result of the examination is usually a medical image film that has been interpreted by a radiologist. The result is then reported to the referring doctor who treats the patient, so the next treatment step depends on the outcome of the resulting medical diagnosis. Some constraints that often occur in this process are:
- In the administrative process, patient registration is repeated, because the patient has already registered when first entering the hospital and must re-register in the radiology unit.
- The diagnosis takes a long time when the radiologist is not in place.
- In the event of loss of medical images or films, the patient has to be re-imaged, which affects patient comfort and safety because of the radiation hazards posed by medical diagnostic equipment.

With technological developments in radiology, PACS has become one of the alternatives for the storage and archiving, access, manipulation, printing and transmission of medical images. PACS is one component of a health information system with high technological complexity. PACS implementation is related to the Radiology Information System (RIS), which can connect the various diagnostic tools in a radiology installation and is expected to improve the quality of patient care.

II. HOSPITAL INFORMATION SYSTEM

A. Picture Archiving and Communication System (PACS)
Computer-based technology can be integrated with a network system that connects computers, providing services that can be used in radiology and forming a new technology in the health field. PACS is one of the components of a healthcare information system with high technological complexity. Its implementation is directly related to the radiology information system (RIS) and the hospital information system (HIS), and it is expected to improve the quality of patient services and support hospital efficiency. PACS is a technological development of medical imaging equipment covering storage, archiving and teleradiology workstations. PACS and RIS can be combined into an image management, archiving and communication system; this combination results in quick and easy access to all relevant patient data, independent of the location of data storage. The RIS component is associated with the flow of administrative information such as patient reception, scheduling of examination rooms and equipment, and finance. PACS has two major benefits: it replaces images/film with digital records (softcopy), and it acts as a remote-access medium that increases the capability for tele-observation (viewing), reporting and remote diagnostics (telediagnosis). This allows practitioners in the medical field at different locations to access the same information. PACS is generally used to store data and images from all modalities in a central database, so PACS users can easily retrieve all images and data. A PACS network


consists of a central server that stores the image database and is connected to the medical image acquisition equipment in the radiology unit through a Local Area Network (LAN) or intranet. Patient data from registration is sent as patient demographics to the medical image acquisition equipment to be used, so the patient is scheduled as a new patient for image acquisition. After the image acquisition process is completed, the results are sent to the PACS or printed on a laser imager. Existing data can be viewed (browsed) on PACS workstations, which work as PACS clients that access the PACS server as the data center. The interconnection with PACS in the radiology unit can be seen in Figure 1.

Figure 1. Diagram of a working system with PACS in the radiology unit. [1]

B. Radiology Information System (RIS)
A digital radiology department has two main components: the radiology information management system (RIS) and the imaging system (digital imaging). The RIS is part of the Hospital Information System (HIS) or Clinical Management System (CMS). Combined with an electronic medical record (EMR) system, these systems make the hospital radiology documentation filmless (without film) and paperless (without paper). The radiology unit is the part of the hospital whose function is to view images of human anatomy through diagnostic tools; the resulting image is a radiographic projection that converts electrical signals into images on radiological film media. A drastic change in the area of radiology has been the discovery of new, digital methods of imaging anatomy and physiology, namely Computed Tomography (CT) and Direct Digital Radiography (DR).

C. Digital Imaging and Communications in Medicine (DICOM)
DICOM is the third version of the standards developed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA). In the early 1980s it was almost impossible for diagnostic equipment such as CT scanners and Magnetic Resonance Imaging (MRI) systems from different manufacturers to produce images recognizable by each other, while radiologists and medical doctors wanted to use these images in determining radiation therapy doses. In 1983 the ACR and NEMA formed a joint standards committee and produced the first standard, ACR/NEMA 300, although it still needed much improvement.


DICOM specifies an Information Object Definition (IOD) for the patient information that is part of the imaging process, for example a CT image IOD for computed tomography and an MR image IOD for magnetic resonance imaging. An IOD contains identification information consisting of the Patient identity and the Study identity, both of which are modality-independent and contain data about the patient, such as name and age, and about the examination, such as the examiner's name, the referring doctor and the type of examination. The Series identity groups a logically defined set of images and is created by two further information entities, the Frame of Reference identity and the Equipment identity: the Frame of Reference provides the spatial information of an image sequence, and the Equipment identity stores information about the imaging modality such as the manufacturer name, device type and software revision number. The function of DICOM is to be the standard image data format in the medical world, so that all radiology equipment can be integrated easily, and to be the interfacing means that connects all equipment in the radiology unit. A DICOM data object contains a number of attributes, including a name, an ID number and other information, together with one special attribute containing the image pixel data. A single DICOM image object can contain one pixel-data attribute; for most commonly used image modalities this attribute contains a single-frame image. Pixel data can be compressed using standards like JPEG, JPEG Lossless, JPEG 2000 and run-length encoding (RLE). Compression can also be applied to the whole object, but this is rarely implemented.

III. EQUIPMENT CONFIGURATION

PACS is the electronic storage and processing of diagnostic images, accessible by many users integrated in the network. The PACS in the radiology unit acts as the data center to which every medical image acquisition device is connected in one network. Patient data from registration is sent as patient demographics to the medical image acquisition equipment to be used, and the patient is scheduled as a new patient for image acquisition. After image acquisition is completed, the results are sent to the PACS or printed on a laser imager. Existing data can be viewed (browsed) through workstations working as PACS clients that access the PACS server as the data center. The PACS clients are connected to the PACS server in an intranet involving a LAN and a web server; the web server serves internal user requests for data and images. The intranet is a closed network because it is used only by users connected to the LAN. PACS clients can be placed anywhere as needed and do not require any special specification. The PACS server is the intermediary and the data and image storage medium. Information from the RIS can be directly related to the image acquisition modalities for scheduling patients: any patient who has registered with the RIS will be scheduled for the


acquisition of medical images according to the request of the referring physician, in accordance with the early diagnosis of the patient; patients usually come to the radiology department with a letter of introduction or a referral from the referring doctor. Client workstations can only access patient data via the PACS server, since each image acquisition modality sends its data and medical images to the PACS server, while printing from any modality can send images directly to a laser imager without going through the PACS server. Printing images from a PACS workstation still goes through the data center, and the workstation is able to change and manipulate the image before it is printed on the laser imager according to the needs of the patient's diagnosis. The PACS server in the radiology unit has the following network specification:
- Equipment Name : PACS Server
- IP Address     : 172.31.30.134
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.130
- AE Title       : HRU 1.FIR
- Port Number    : 2104
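The AE (Application Entity) parameters listed for each node in this section can be kept in a small configuration structure, for example as sketched below. The values are the ones quoted in the text, while the dataclass itself is only an illustrative way of holding them, not part of the installed PACS software.

```python
from dataclasses import dataclass

@dataclass
class DicomNode:
    """Network parameters of one DICOM application entity in the radiology LAN."""
    name: str
    ip_address: str
    subnet_mask: str
    gateway: str
    ae_title: str
    port: int

# PACS server parameters as quoted above
pacs_server = DicomNode("PACS Server", "172.31.30.134", "255.255.255.128",
                        "172.31.30.130", "HRU 1.FIR", 2104)
print(pacs_server)
```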

A. Radiology Equipment
The image acquisition equipment in the radiology unit of the hospital consists of several devices with different acquisition functions, connected in a common LAN configuration with the TCP/IP communication protocol; the IP address of each device must be set in one subnet. The equipment and its specifications are:

1. Computed Radiography (CR)
The Kodak DirectView Elite Computed Radiography system is a product of Carestream Health; the image data on a 35 cm x 43 cm plate is 10 Mbytes. The CR equipment has the following LAN specifications:
- Equipment Name : Computed Radiography (CR)
- IP Address     : 172.31.30.134
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.129
- AE Title       : KOCR
- Port Number    : 5040

2. Computed Tomography Scanning (CT Scan)
The hospital uses a HiSpeed CT/e Dual CT scanner from the same manufacturer as the MRI, General Electric (GE Healthcare). The average size of each generated image slice is 512 Kbytes. An advantage of this CT scanner is its Advanced Artifact Reduction (AAR) facility, which produces good image quality by boosting weak image signals; the scanner can also produce slices in millimetres, so the resulting image has good detail. The CT scan equipment has the following LAN specifications:
- Equipment Name : Computed Tomography Scan (CT)
- IP Address     : 172.31.30.137
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.129
- AE Title       : CT99
- Port Number    : 4006

3. Magnetic Resonance Imaging (MRI)
The MRI used is a 1.5 Tesla Signa HDi manufactured by General Electric (GE Healthcare). This MRI has several advantages, such as a fast image acquisition process with good image quality, parallel imaging, innovative acquisition techniques and the ability to upgrade to a higher system. MRI can produce detailed images of soft tissue structures in the human body and can be used to image organs such as the liver, kidneys, pancreas and the blood vessels in the abdomen; the resulting images are very helpful in the diagnosis and evaluation of tumours, infections and functional organ damage. MRI is an imaging modality that is safe for the male and female reproductive systems, since no radiation is produced. The average data size per image is 128 Kbytes. The MRI equipment has the following LAN specifications:
- Equipment Name : Magnetic Resonance Imaging (MRI)
- IP Address     : 172.31.30.145
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.129
- AE Title       : GEMR
- Port Number    : 4006

4. Angiography/Cathlab
The angiography/cathlab equipment is the imaging equipment used in vascular catheterization. In the hospital radiology unit, the equipment used is an angiography system from GE Healthcare. The resulting image frames can be combined into a cine image; the generated image data has an average size of 152.59 Mbytes. The angiography equipment has the following LAN specifications:
- Equipment Name : Angiography/Cathlab
- IP Address     : 172.31.30.137
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.129
- AE Title       : TERRA
- Port Number    : 4002

5. Ultrasound
In this unit there are two ultrasound modalities used to acquire medical images; the image data size is on average 1406.25 Kbytes. The first ultrasound unit has the following LAN specifications:
- Equipment Name : Ultrasonography (USG) Voluson 730
- IP Address     : 172.31.30.127
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.129
- AE Title       : VOLUSON730
- Port Number    : 104


The second ultrasound unit has the following LAN specifications:
- Equipment Name : Ultrasonography (USG) Logic 6
- IP Address     : 172.31.30.137
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.129
- AE Title       : LS6
- Port Number    : 104

6. Laser Imager
The laser imager is the equipment that prints the images produced by the medical image acquisition devices; the laser imager in the hospital radiology unit is a Kodak DryView 8900. This laser imager has several advantages: it has three film trays that can accommodate multiple film sizes, it prints at high resolution, it connects to the network using UTP cable as the transmission medium, and it supports DICOM Print. The laser imager has the following LAN specifications:
- Equipment Name : DRY VIEW 8900
- IP Address     : 172.31.30.135
- Subnet Mask    : 255.255.255.128
- Gateway        : 172.31.30.129
- AE Title       : DV8900BSD
- Port Number    : 5040

B. Data Communications in PACS
The PACS and the medical imaging equipment in the hospital radiology unit use Fast Ethernet, which has a bandwidth of 100 Mbps and a maximum data rate of 11.92 Mbytes/s. The bandwidth is given by

Bw = bits / t   (Mb/s)   (1)

The data rate in data communications is calculated as

Data rate = FSize / t_transfer   (bytes/s)   (2)

In the process of sending or receiving data there will be a delay; delay is the time required from when the data is transmitted until the data is received. The total delay in data transmission can be determined by the following formula:

d_total = d_proc + d_queue + d_trans + d_prop   (ms)   (3)

1. Processing delay
Processing delay is the delay that occurs in repeaters or switches; the more repeaters are used, the greater the processing delay. The delay of a Class-2 repeater is 92 bits, so d_proc = 92 bit x 100 ns = 0.0092 ms.

2. Queuing delay
Queuing delay is the delay that occurs while data packets or frames queue to be transmitted via Ethernet, i.e. while the data is packed into packets or frames before being sent. The queue delay between frames on Ethernet is 96 bit times:
d_queue = 96 bit times
        = 9.6 µs on 10 Mbps Ethernet
        = 0.96 µs on 100 Mbps Ethernet
        = 0.096 µs on 1000 Mbps Ethernet

3. Transmission delay
Transmission delay is the delay that occurs in the process of sending the data; it is the ratio between the packet size and the bandwidth used.

4. Propagation delay
Propagation delay is the delay caused by the propagation of the transmitted signal; it is influenced by the distance between transmitter and receiver. The propagation speed of a signal in optical fibre or UTP transmission media is 2 x 10^8 m/s.

IV. PACS PERFORMANCE ANALYSIS
Performance measurement is an effort to improve the efficiency and effectiveness of a network and the productivity of work on the network. For the various communication services to utilize network resources optimally, an assessment of Quality of Service (QoS) on the network is required. The increasing range of services increases the traffic flow of packets at different speeds, which requires the network to be able to pass packet flows at a certain rate. QoS is the ability of a network to provide better service for certain traffic through a variety of networking technologies, including IP (Internet Protocol), Frame Relay, ATM (Asynchronous Transfer Mode) and ISDN (Integrated Services Digital Network). In the analysis of PACS performance, calculations and tests are done for the data transfer rate (data rate) and the latency (delay), while the jitter and packet loss parameters are not discussed in this analysis of the hospital radiology PACS. The analysis also includes calculations on the image size of each modality, which differs between modalities; the purpose of this calculation is to determine the hard-disk capacity required to store all data for a specified time. The image size can be determined by the following formula:

Iz = Psize x Pbyte   (bytes)   (4)

where Iz is the image data size, Psize is the image size in pixels, and Pbyte is the data size in bytes per pixel.
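A compact sketch of the calculations in (1)-(4), under the assumption of a 100 Mbps link and the component delays described above:

```python
def image_size_bytes(pixels, bytes_per_pixel):
    """Eq. (4): Iz = Psize x Pbyte."""
    return pixels * bytes_per_pixel

def transfer_time_s(file_size_bytes, bandwidth_mbps=100.0):
    """Eq. (2) rearranged: t = FSize / data rate, with the data rate taken from the bandwidth."""
    return (file_size_bytes * 8) / (bandwidth_mbps * 1e6)

def total_delay_ms(d_proc_ms, d_queue_ms, d_trans_ms, d_prop_ms):
    """Eq. (3): d_total = d_proc + d_queue + d_trans + d_prop."""
    return d_proc_ms + d_queue_ms + d_trans_ms + d_prop_ms

# Examples with the figures quoted in the text (CT slice, 1500-byte frame, CR delays):
ct_slice = image_size_bytes(512 * 512, 2)                 # 512 x 512 x 2 bytes = 512 Kbytes
print("CT slice:", ct_slice / 1024, "Kbytes")
print("1500-byte frame on 100 Mbps:", transfer_time_s(1500) * 1e3, "ms")   # 0.12 ms
print("CR total delay:", total_delay_ms(0.368, 6.55, 0.12, 0.0005), "ms")  # about 7.038 ms
```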


A. Bandwidth and data rate calculations for each modality
Bandwidth can be defined as a data transfer rate per unit time: it is a measure of how much information can flow from one place to another within a certain time. In a TCP/IP Ethernet network, the bandwidth is the capacity of traffic (data packets) that can pass within a certain amount of time, and bandwidth consumption is the amount of packet data per unit time, expressed in bits per second (bps). Based on the existing network bandwidth, and assuming the data rate of each modality equals the bandwidth, the ideal time needed by each medical imaging modality to transmit its image data to the PACS can be calculated. For the CR modality, which has an average image size of 10,000 Kbytes, with a bandwidth of 100 Mbps (a data rate of 12,207.031 Kbyte/s), the time needed to send an image is

Data rate_BW = Data rate_CR :  12207.031 / 1 = 10000 / T_transfer,   (5)

so T_transfer = 10000 / 12207.031 = 0.81 s.

TABLE I. TIME OF DELIVERY OF MODALITY IMAGES BASED ON BANDWIDTH

No.  Modality Name  Image Size (Kbytes)  Transfer Time (s)
1    CR             10000                0.81
2    CT Scan        512                  0.0419
3    MRI            128                  0.0104
4    Cathlab        7812.5               0.64
5    USG            1406.25              0.11

B. Data rate calculations using data from PING
In the analysis of PACS performance at the hospital, the data rate and the delay in the process of sending data from each existing modality are analysed. The quality of the connection to the PACS network can be determined by testing the communication between the modalities connected to the PACS; this connectivity test verifies that the connected modalities are ready for communication and that the communication lines used have no problems in the network. The results of this connectivity test, obtained with the PING method using data packets, serve as the material for analysing the data transfer of the modalities. The calculation of the delay in the delivery of data from the CR is:
- CR image data size = 10,000 Kbytes
- CR image transfer time = 4.437 s
- Inter-frame delay = 0.96 µs
- Total frame/queue delay = 0.96 µs x 6826.66 frames = 6553.59 µs, or 6.55 ms
- Processing delay with 4 repeaters = 4 x 92 bit x 100 ns = 0.000368 s, or 0.368 ms
- Transmission delay on a 100 Mbps link with a packet length of 1500 bytes:
  d_trans = (1500 bytes x 8 bits) / 100 Mbps = 0.12 ms
- Propagation delay on a physical link of 100 metres with a propagation velocity of 2 x 10^8 m/s:
  d_prop = 100 m / (2 x 10^8 m/s) = 0.0005 ms,
which is lower than the maximum allowable limit. The total delay can then be calculated using equation (3) as follows:

d_total = 0.368 + 6.55 + 0.12 + 0.0005 = 7.038 ms

The delay in the delivery of data from the other modalities can be calculated in the same way as for the CR; the results of the calculations are presented in Table II.

dtotal = 7,0 3 8 m s The calculation of delay in delivery of data from some other modality can be calculated by the same calculation with the calculation of delay on the CR, the results of calculations are presented in Table II. TABLE II.

CALCULATION RESULTS OF IMAGE DATA PING TRANSMISSION

Modality

1500byte 1 image data Average Image data size transfer time Time (Kbytes) (s) (ms)

Bit rate (Mbps)

CT

1,0

0,349

512

12

MRI

0,9

0,078

128

13,44

Cathlab

0,75

3,99

7812,5

16,04

USG

1,15

1,104

1406,25

10,43

C. Analysis of picture Data Delivery Test In the testing performed by sending the image data sent from modality to PACS images and measured delivery time using a stopwatch or timer using a stopwatch or timer is due to the modality and PACS do not have a delivery time of measurement facilities. The sizes of data used during test are: 1.

CR data picture, 9.2 MB.

2.

CT Scan image, 572 Kbyte

3.

MRI data

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

4.

Cathlab image data

-

Average number of patients per day = 5 patients

5.

Ultrasound image data, 1016 Kbytes

-

The number of days per year = 365 days

-

Length of storage period = 5 years

-

Predicted increase in patients per year = 5%

The measurement results of several test images delivery modality can be seen in Table III. TABLE III.

COMPARISON OF MEASURED TIME DATA DELIVERY MODALITY IMAGE

No. Modality

Object

Size

Transfer time (s)

Data rate (Kbyte/s)

1

CR

Abdomen

9,2 MB

9

1046,75

2

CT Scan

Pelvis

572 Kbyte

2,5

228,8

3

MRI

Kepala

366 Kbyte

3

122

4

Cathlab

Jantung

4852 Kbyte

6

808,6

5

USG

Janin

1016 Kbyte

4.5

225

The storage media required for mammography data store is (117.9Mbytes × 5 × 365 days + 5%) × 5 = 1129629.38 Mbytes (1103.15 Gbytes) Total CR image data storage media is 1103.15 + 149.7 Gbytes = 1252.85 Gbytes For the calculation of the storage media needs on the other modality can shown in Table IV. TABLE IV.

CALCULATION RESULTS OF DATA STORAGE MEDIA ON MULTIPLE MODALITY

D. Analysis of picture Data Delivery Test PACS is a data center that stores the entire image is sent from multiple modalities. PACS Server storage capacity analysis aims to determine the existing storage capacity to accommodate all the image data so that image data can be accessed quickly as needed.

Daily Numbar of Storage Size images per Number of (GBytes) patient Patients

Modality

Image Specification

CT

512 x 512 x 2 byte

100

8

748,5

MRI

256 x 256 x 2

200

3

140,35

200

5

469,38

20

15

154,27

Cathlab 2000 x 2000 x 2 USG

600 x 800 x 3

The average number of images per examination CR is one picture so the picture size for one-time examination the average CR 10 000 Kbytes. Average number of patients for CR examinations in one day is 80 people so can be calculated the amount of data storage needs for over five years as the following:

Total requirements for data storage media and images on the PACS Server are as follows:

-

CR image size for 1 patient = 10000 Kbytes

2. CT image data = 748.5 Gbytes

-

Average number of patients per day = 80 patients

3. MRI image data = 140.35 Gbytes

-

The number of days per year = 365 days

4. Image data cathlab = 469.38 Gbytes

-

Length of storage period = 5 years

5. Ultrasound Image Data = 154.27 Gbytes

-

Predicted increase in patients per year = 5%

Total number of image data = 4694.491 Gbytes or 4.58 TBytes.

By using equation 3.8, then the storage media required for CR data store is (10000 Kbytes × 365 days × 80 + 5%) × 5 = 1.533 billion Kbytes (1461.98 Gbytes) Mammography images must be calculated separately because the image has a high pixel resolution is 6400 x 4800 with a large data per pixel is 2 Bytes.

1. CR image data = 1687.9 Gbytes

Storage media needs at least 4.58 Tbytes of PACS Server, with calculating the capacity of storage media is expected all the image data from the modality can be stored in the PACS Server. V.

CONCLUSION

1 kilobyte = 1024 Bytes

From the calculations and analysis of image data transmission modality to PACS at Hospital Radiology Unit can be drawn some conclusions as follows:

61440000 Bytes / 1024 = 60 000 Kbytes

1.

Transmission Bandwidth Based on the data used is the Fast Ethernet 100 Mbps and the calculations with the data PING, modality requiring greater bandwidth is the CR modality. CR has a bit rate of 18.46 Mbps.

2.

Delay major / minimal latency in the transmission of image data is image data transmission from the MRI modality is 0.572 milliseconds and a maximum delay of data delivery modality images of CR is 7.038 milliseconds.

6400 x 4800 x 2 Bytes = 61.44 million bytes

Average number for each examination Mammography is 2 images 60 000 Kbytes x 2 = 120 000 117.19 Kbytes or Mbytes Average number of patients for examination Mammography is 5 persons so can be estimated needs for data storage for five years as the following: -

Mammo image size for 1 patient = 117.9 Mbytes
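The five-year storage estimates above follow the same pattern throughout this section; a small helper reproducing it (daily volume x 365 days over 5 years, with the 5% growth allowance applied as a single flat margin, which is how the worked numbers above appear to treat it) is sketched below.

```python
def five_year_storage_gbytes(image_kbytes, images_per_patient, patients_per_day,
                             years=5, growth=0.05):
    """Storage need over the retention period, with a flat growth allowance (assumed)."""
    daily_kb = image_kbytes * images_per_patient * patients_per_day
    total_kb = daily_kb * 365 * years * (1 + growth)
    return total_kb / (1024 * 1024)   # Kbytes -> Gbytes

# CR: 10 000 Kbytes per image, 1 image per exam, 80 patients per day (~1462 GBytes)
print("CR    ~", round(five_year_storage_gbytes(10000, 1, 80), 1), "GBytes")
# Mammography: 2 images of 60 000 Kbytes, 5 patients per day (~1100 GBytes)
print("Mammo ~", round(five_year_storage_gbytes(60000, 2, 5), 1), "GBytes")
```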


V. CONCLUSION
From the calculations and the analysis of image data transmission from the modalities to the PACS in the hospital radiology unit, the following conclusions can be drawn:
1. Based on the 100 Mbps Fast Ethernet transmission bandwidth used and the calculations with the PING data, the modality requiring the greatest bandwidth is the CR modality, with a bit rate of 18.46 Mbps.
2. The minimum delay/latency in the transmission of image data is 0.572 milliseconds, for image data transmission from the MRI modality, and the maximum delay is 7.038 milliseconds, for the delivery of images from the CR modality.
3. The delay/latency analysis shows that the delays fall into the excellent category according to latency performance standards.
4. The storage capacity required in the PACS server to store data for five years is 4.58 TBytes; with the existing PACS server specification of 12 TBytes, all the images from the modalities can be stored.

ACKNOWLEDGMENT 1. Mr. Dicky, Information Technology Manager of PT. Tawada Healthcare 2. Mr. Dadang from Kodak Health Imaging Systems REFERENCES [1] [2] [3] [4]




Middleware Framework of AUV using RT-CORBA
Nanang Syahroni 1) and Jae Weon Choi 2)
1) Politeknik Elektronika Negeri Surabaya, Surabaya 60111, Indonesia
2) Pusan National University, Busan 609-735, Korea
[email protected], [email protected]

Abstract – In this paper an alternative approach is proposed and investigated that enables interaction between various navigation methods and control components. During the course of a mission, information from the inertial navigation system (INS), long base-line (LBL) and short base-line (SBL) systems may need to be rerouted between autonomous underwater vehicle (AUV) subsystems or control components; certain data may become temporarily very important and at other times not be needed at all. A new software infrastructure called the Open Control Platform (OCP) accommodates changing navigation information and control components, interoperates in heterogeneous environments, and maintains viability in unpredictable and changing environments. The experiments began with single-machine measurements of system performance and sensor properties, and then moved to multiple machines connected through a common object request broker architecture (CORBA) network. The experimental results include various results, methods, and performances obtained for different classifications.
Key words: AUV, CORBA, OCP, INS, LBL, SBL

I. INTRODUCTION
There are several navigation systems currently employed by AUVs. The non-acoustic approach consists of a global positioning system (GPS) receiver and an INS installed on the AUV; the vehicle navigates with the INS but periodically comes to the surface to receive the GPS signal and recalibrate the INS. The acoustic approach can be subdivided into the LBL and SBL systems. In both cases, the vehicle position is determined on the basis of the acoustic returns detected by a set of receivers. In the LBL case, a set of acoustic transponders is deployed on the seafloor around the perimeter of the area of operation. The transponders are then able to track the vehicle, or the vehicle is able to locate itself with respect to the transponders. In the SBL system, a ship follows the AUV at short range with a high-frequency directional emitter that can accurately determine the AUV position with respect to the mother ship. All of these navigation methods have their disadvantages: the INS requires the setup of sophisticated inertial sensors and GPS calibration, the LBL system requires a long calibration time, and the SBL system needs a ship to follow the vehicle. A simple alternative to LBL systems has been investigated in which GPS receivers with radio interconnection are installed on buoys. Each buoy emits, at regular time intervals, a ping that encodes its GPS position.


The vehicle listens for the pings and determines its absolute position from time-of-flight measurements, with one of the buoys acting as master, collecting the information from the other buoys and determining the absolute position of the vehicle. A new software infrastructure called the Open Control Platform (OCP) accommodates changing navigation information and control components, interoperates in heterogeneous environments, and maintains viability in unpredictable and changing environments [1-4]. This paper first looks at the details of the OCP, followed by the control structure of the AUV and a demonstration of the OCP integration capabilities using AUV thruster fault reconfiguration scenarios.

II. OPEN CONTROL PLATFORM
As described in [1], information-centric control and engineering have a remarkably successful history of enabling the design, testing, and transition of embedded software to unmanned air vehicle (UAV) platforms. Control design is evolving through the development of hybrid optimal control, reachability analysis, multiple-model systems, and parameter-varying control. The software is being facilitated by distributed computing and messaging services, distributed object models, real-time operating systems, and fault detection algorithms. The Open Control Platform (OCP) is a software technology component that acts as a mutual catalyst for such control and embedded software technologies, able to push the boundaries of performance, complexity, and applicability. The OCP extends CORBA to support event streams and other middleware requirements. In addition, the OCP is essentially a transaction-based system with the relevant functionality imposed as new layers. The OCP responds to information from the various external and internal system sensors by prioritizing the response commands with a given quality of service (QoS). An approach to this goal begins with system and subsystem architectures that are organized into hierarchical object or event levels, where each level represents a structure of problems and its supporting algorithms; with this approach it appears possible to implement autonomous decision making within the overall time requirements. The OCP was developed with the goal of combining control theory and software design to allow the utilization of new control technology on embedded computing platforms. The major components of the OCP software are as follows:


- A middleware framework based on Real-time Common Object Request Broker Architecture (RT-CORBA). It provides the mechanism for connecting application components together and controlling their execution.
- A simulation environment. It allows the embedded application to execute and be tested in a virtual world, reading simulated sensors and driving simulated actuators on vehicle models.
- Tool integration support. It provides linkages to useful design and analysis tools, such as Matlab, allowing controller designs realized in these tools to transition more easily to embedded targets.
- A Control Application Programmer Interface (API). It provides a control-domain-friendly look and feel to the OCP. This is accomplished by using familiar terminology and a simplified programming interface.
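At the core of the middleware framework is an event service that decouples suppliers of data (e.g. navigation sensors) from their consumers (controllers). The toy Python sketch below illustrates that publish/subscribe idea only; it bears no relation to the actual TAO/RT-CORBA interfaces, whose C++ APIs additionally handle QoS, scheduling and federation.

```python
# Toy publish/subscribe event channel, illustrating the role the OCP event
# service plays between components.  Conceptual sketch only; this is NOT the
# TAO / RT-CORBA API.
from collections import defaultdict
from typing import Any, Callable


class EventChannel:
    """Decouples suppliers (e.g. navigation sensors) from consumers (controllers)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def push(self, topic: str, payload: Any) -> None:
        # A real event channel would queue, prioritise and dispatch with QoS;
        # here we simply fan the event out to every subscriber.
        for callback in self._subscribers[topic]:
            callback(payload)


if __name__ == "__main__":
    channel = EventChannel()
    channel.subscribe("nav/ins", lambda fix: print("controller got INS fix:", fix))
    channel.subscribe("nav/lbl", lambda fix: print("controller got LBL fix:", fix))
    # Suppliers push navigation updates without knowing who consumes them.
    channel.push("nav/ins", {"x": 1.2, "y": -0.4, "depth": 10.0})
    channel.push("nav/lbl", {"x": 1.25, "y": -0.38, "depth": 10.1})
```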

Figure 1. Layers of the Open Control Platform (top to bottom: hybrid control API, GUI and control-domain API with code generation; reconfigurable control API with resource management, adaptive scheduling, mode transition, replication and event services as OCP extensions to TAO; core OCP real-time distributed computing substrate with naming and timer services on RT CORBA; operating system with kernel and board/device support packages; hardware: CPU, memory, I/O)

Figure 1 shows the layered architecture of the OCP run-time software, which is built upon a commercial real-time operating system (RTOS) and a board support package for the specific hardware. The first layer above the OS is a CORBA implementation, including lower-level services (naming service, timer service, etc.) that provide additional real-time performance enhancements. The OCP extension layer, the second layer above the OS, builds upon the core capabilities of RT CORBA to add higher-level capabilities that form the application interface to the underlying middleware; this layer of the OCP provides a pattern-oriented extension to RT CORBA. Researchers developed RT CORBA by leveraging the capabilities of the Adaptive Communication Environment (ACE) Object Request Broker (ORB) and named it The ACE ORB (TAO) [2]. The OCP builds extensions upon the TAO core capabilities to incorporate additional functionality necessary in UAV control applications. These extensions reflect needs at the CORBA layer as well as at a higher level of abstraction. Specific middleware extensions include (1) resource management, (2) adaptive scheduling, (3) mode transition services, (4) support for highly accurate time triggering of control functions (sensor data capture, control law execution), and (5) execution-time optimizations to meet hard real-time processing demands. The highest-level OCP capabilities include (1) controls-specific application programming interfaces that hide CORBA from the controls designer, and (2) code generation tools that support development of executable code from a graphical system representation.


The OCP middleware structure, including the extensions needed, should provide insights applicable to a wide range of real-time system problems for AUVs [3]. Initially, the goal of the OCP was to bring the advantages of object-oriented programming to the domain of unmanned air vehicle (UAV) multi-level flight control, providing a software infrastructure that enhances the ability to analyze, develop, and test UAV control algorithms and embedded software [4]. The design methodology follows the OCP paradigm of integrating control technologies and resources, using real-time distributed computing technology to coordinate distributed data, organize the interaction among hierarchical components, and support dynamic reconfiguration of the components. The implementation consists of two phases: 1) modeling the control application, usually done by control engineers with support from tools such as Matlab or Simulink, and 2) implementing the control design, a sub-discipline of software engineering, by tightly coupling the model, generating tailored C++ source code, and targeting a platform on which the control design imposes hard real-time requirements.

III. HIERARCHICAL CONTROL
The global architecture for the control of the AUV consists of a hierarchy of three levels. At the highest level, a mission planning component stores information about the overall mission, generates a low-level representation, and coordinates its execution with the middle level. The middle level includes a trajectory planning component, which receives information from the high level in terms of the next task to be executed to fulfill the mission and generates the trajectory or set points for the low-level controller. At the lowest level, an adaptive mode transition controller, or the active control models, stabilizes the vehicle and minimizes the errors between the set points generated by the middle level and the actual state of the vehicle.

A. High Level: Mission Planning
The mission planning component translates a high-level task queue and coordinates the execution of low-level tasks with the trajectory planning component at the middle level. The mission can be established as a sequence of actions to be executed, for instance: dive to a waypoint and hover there; dive to a waypoint at a certain speed; keep the same velocity and heading for a certain period of time. Kinematic constraints such as maximum speed and acceleration are specified and can be changed for each section of the mission. A mission may be completely specified before it is executed but may also be modified, re-planned, or expanded at run time. This feature enables the modification or extension of the mission at run time. Replanning is particularly important for the future incorporation of obstacle and collision avoidance algorithms. At run time, the mission planning component coordinates the execution of the low-level tasks with the trajectory generation component at the middle level in the following way: first, the mission planning component takes the task at the head of the task queue, removes it from the queue, and sends the task information to the trajectory generation component; then, the trajectory generation component executes the task and, when it is completed, sends a signal back to the mission planning component indicating that the last task has been completed; finally, when the mission planning component receives the signal, it takes the next task from the head of the task queue, and the cycle is repeated until no tasks remain in the task queue.
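The coordination cycle just described reduces to a simple loop over a task queue. The plain-Python sketch below illustrates that cycle under stated assumptions (synchronous completion signals, hypothetical Task and TrajectoryGenerator names); in the actual system the hand-off runs over the OCP event channel rather than direct calls.

```python
# Sketch of the mission-planning cycle: take the task at the head of the
# queue, hand it to the trajectory generator, wait for completion, repeat
# until the queue is empty.  Task and TrajectoryGenerator are hypothetical
# stand-ins, not names from the actual AUV software.
from collections import deque
from dataclasses import dataclass


@dataclass
class Task:
    action: str            # e.g. "dive_to_waypoint", "hover", "hold_heading"
    parameters: dict


class TrajectoryGenerator:
    def execute(self, task: Task) -> bool:
        # Would generate set points for the low-level controllers and check
        # the task completion condition; here we simply report success.
        print(f"executing {task.action} with {task.parameters}")
        return True


def run_mission(task_queue: deque, generator: TrajectoryGenerator) -> None:
    while task_queue:                        # repeat until no tasks remain
        task = task_queue.popleft()          # take and remove the head of the queue
        completed = generator.execute(task)  # middle level executes the task
        if not completed:
            print(f"task {task.action} failed; re-planning would happen here")
            break


if __name__ == "__main__":
    mission = deque([
        Task("dive_to_waypoint", {"x": 10.0, "y": 5.0, "depth": 20.0}),
        Task("hover", {"duration_s": 60}),
        Task("hold_heading", {"heading_deg": 90, "speed_mps": 1.0, "duration_s": 120}),
    ])
    run_mission(mission, TrajectoryGenerator())
```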


B. Middle Level: Trajectory Generation
The trajectory generation component generates the set points required for the low-level controllers to complete the last task received from the mission planning module. When the trajectory generation component receives the next task information, it computes a 3D set-point line to generate a continuous path linking the actual position with the target position. At each sample time, the actual speed is evaluated based on the initial speed for the task, the final speed for the task, and the maximum acceleration available according to the curvature of the path at that time. After generating the set points corresponding to the actual sample time, the condition for completion of the task is checked and a comparison is made with the actual state of the AUV to determine whether the task was completed successfully. A signal is sent to the high-level module indicating the termination status of the task so that the next one can be initiated.

C. Low Level: Adaptive Mode Transition Control
The purpose of the low-level controllers is to stabilize the vehicle and force it to follow accurately the commanded trajectory generated by the middle level. In this architecture, an adaptive mode transition control, or blending of local mode controllers, was introduced in several papers. The adaptive mode transition control consists of the mode transition control component and the adaptation mechanism component.

D. Human Intervention
Human intervention can be accomplished with no change at all to the fundamental structure of the layered control. There are a variety of methods by which operator input can be accepted into the structure, each providing a different level of control of the vehicle. Three different levels of control are now described.
User Override Mode: The lowest level of control of the vehicle is for the operator to generate commands for the dynamic controller (e.g., heading, depth, velocity). The vehicle acts like a docile robot with just enough intelligence to override commands that would cause it to endanger itself.


Behavior Modification Mode: The behavior modification mode allows the operator to influence the vehicle by changing the internal settings of vehicle behaviors. It assumes the vehicle is basically operating as it would under autonomous operation. However, unlike autonomous operation, an operator can modify the performance of individual behaviors while the vehicle is active. For example, the user might change the threshold of an obstacle avoidance behavior or the parameters of a survey.
Mission Modification Mode: The mission modification mode provides the user with the ability to activate and deactivate behaviors and set their priorities. In this mode, the vehicle operates as an autonomous agent that receives high-level instructions from the operator during run time. An example is the user commanding the vehicle to abort a survey and return to a docking station by deactivating the survey behavior and activating homing. Mission modification is the most powerful level of user control possible within the framework of the layered control.

IV. HARDWARE SYSTEM
The hardware controller architecture is designed as a modular system that operates using two independent, cooperating embedded computer systems linked through an Ethernet network interface for improved load balancing; this network structure is facilitated by the event-channel communication service of RT CORBA for message and command interchange. Each embedded system assumes different tasks for the mission operation. Each modular subsystem for the actuators and sensors includes a feedback control subsystem to adjust its performance according to the middle-level commands and the behaviors of the lower-level commands. Each modular subsystem is based on a Single Board Computer (SBC) embedded system platform that performs the middle-level processing and serves content information to the application level for the Mission Control Station (MCS). To communicate between the AUV and the MCS, an acoustic modem is provided. An acoustic network is a relatively low-bandwidth, long-time-delay communication link compared to the links typically used to control robots. Layered control provides an attractive framework for remote control of an AUV under these conditions.

A. Initial Experiment
The complete simulator consists of two systems, the AUV simulator model and the Mission Control Station (MCS). The AUV simulator consists of a cube-shaped body 50 cm long, 30 cm wide, and 20 cm high. This model is simulated in the underwater environment, supported on a hemispherical water bearing. The platform houses all the various components (PC, sensors, and actuators), as seen in Figure 2.


Figure 2. AUV Simulator Model

The four propellers are mounted on the AUV platform and can be operated in momentum or reaction mode: two propellers for forward/backward thrust, one propeller for side thrust, and one propeller for ascend/descend thrust. The propellers are each paired with a DC motor and an H-bridge amplifier. The encoders installed on the DC motors provide angular position feedback. Power for the entire system is provided by 12-V power supplies connected pairwise in parallel to provide 12 V at a time. On the AUV simulation platform, a PC-type Pentium III 733 MHz computer board is used for data acquisition, recording, controller implementation, and communication. The AUV PC1 and the mission base station PC2 communicate via an RT-CORBA event channel, connected to the LAN by a 19 kbps line modem instead of an underwater acoustic modem. The base station PC monitors the status of the experiments and issues control commands such as start/stop, whereas the AUV simulator PC runs the control algorithm. An outline of the interconnection of the several simulator subsystems is shown in Figure 3. In the next section, we briefly describe the major subsystems of the simulator.

Figure 3. AUV Sensor/Actuator Interconnection

B. Actuator System
Four CIC DC geared motors (Model JC-35L/H12) with a 12-V nominal voltage were used as actuators in lieu of more expensive ones. Each motor is driven by an H-bridge power amplifier using 6-ampere complementary silicon transistors, TIP41 and TIP42, made by Mospec; it can deliver a continuous power of 60 W from each transistor within a peak voltage of 100 V. The motor drivers are driven from a PC parallel port line with a PWM output generated by a computer program for precise torque control. During initial testing, these motor drivers were observed to produce high temperatures at 12 V, which can damage the power transistors, the main electronic components. Therefore, a resistor with higher resistance was used to clip the bias current caused by the AN25 optocoupler, and a 74LS00 NAND gate was used to invert the PWM output from the PC parallel port with a standard 5-V digital output. The power distribution schematic of the motor driver is shown in Figure 4.

Figure 4. Thruster/Rudder Motor Driver
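As a rough illustration of the software-generated PWM mentioned above, the sketch below toggles a single output line with a given duty cycle. The function write_parallel_pin is a hypothetical placeholder for the real parallel-port access, and timing with a general-purpose OS scheduler is of course far coarser than the hardware setup described in the paper.

```python
# Conceptual sketch of software PWM on one output line.  write_parallel_pin()
# is a hypothetical placeholder for real parallel-port I/O; on the actual
# system the duty cycle drives the H-bridge through the data register.
import time


def write_parallel_pin(pin: int, level: int) -> None:
    # Placeholder: a real implementation would write the data register at the
    # parallel-port base address (e.g. 0x378 for LPT1).
    pass


def pwm(pin: int, duty_cycle: float, period_s: float = 0.001, cycles: int = 1000) -> None:
    """Drive `pin` with the given duty cycle (0.0-1.0) for `cycles` periods."""
    high_time = period_s * duty_cycle
    low_time = period_s - high_time
    for _ in range(cycles):
        write_parallel_pin(pin, 1)
        time.sleep(high_time)
        write_parallel_pin(pin, 0)
        time.sleep(low_time)


if __name__ == "__main__":
    pwm(pin=0, duty_cycle=0.6)  # ~60% torque command on data line D0
```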

The electronics suite includes the CPU unit, the I/O board, the interface board, and the acoustic modem. The CPU board runs the software that handles all hardware control, timers, packet-based communication, and the filters for the sensor signals. A PC-type Intel Pentium III 733 MHz from Samsung (model M2761) was used as the AUV central processing unit. It has 64 MB of main memory, a 2.5 GB hard disk, two serial ports, and one parallel port. To meet all I/O requirements, an additional 128 MB of SDRAM and a 20 GB hard disk were installed, and an additional 8255 Programmable Peripheral Interface (PPI) board was also installed. The 8255 allows for three distinct operating modes (Modes 0, 1 and 2), as follows:
Mode 0: Ports A and B operate as either inputs or outputs, and Port C is divided into two 4-bit groups, either of which can be operated as inputs or outputs.
Mode 1: Same as Mode 0, but Port C is used for handshaking and control.
Mode 2: Port A is bidirectional (both input and output), Port C is used for handshaking, and Port B is not used.
Each port is 8-bit TTL-compatible. As such, the 8255 can conceivably be configured to control 24 devices (1 bit/device). The various modes can be set by writing a special value to the control port. The control port is at base address + 3 (hence 640 + 3 = 643 decimal). The motor driver was originally connected to the IEEE 1284 standard parallel port, which uses a 25-pin female (DB25) connector (to which a printer is normally connected). Almost all PCs have only one parallel port, but additional ISA/PCI parallel port cards can be inserted. The Status, Data, and Control lines are connected to their corresponding registers inside the computer. For a typical PC, the base address of LPT1 is 0x378 (0x278 for LPT2). The Data register resides at this base address (0x378), the Status register at base address + 1 = 0x379, and the Control register at base address + 2 = 0x37A.


C. Sensor System
The Guidance, Navigation and Control (GN&C) system constitutes a central part of this research, from the initial study phase to the final control system. In AUV navigation systems, the fusion of data from many types of sensors is important. These sensors often include GPS, altimeters, rate gyros, accelerometers, radio links, and databases containing geographical information. The information contained in AUV databases covers types of terrain, altitude profiles, and obstacles such as rocks, debris, and masts. These technologies are applied to navigation, landing, and anti-collision systems for the AUV. Owing to the increased interest in underwater surveillance and in systems with precision effects, such systems must now be controlled and navigated more accurately. The end model of the AUV navigation system utilizes internal sensors, such as compasses, gyroscopes, inertial platforms, and acoustic sonar systems for terrain navigation. To attain sufficiently high precision, the data from all available sensors are usually combined in advanced filter algorithms. Simulation models of this AUV are used, partly to ensure the performance throughout the development phase and partly to enhance the verification of the complete trial experiment. In order to obtain results that fully represent reality, the models are constantly validated against the performed tests.

V. SOFTWARE SYSTEM
Two software applications were developed separately, one for the PC1 AUV computer and one for the PC2 remote mission base computer, using the Open Control Platform (OCP) middleware infrastructure. This OCP structure enables creating and testing the software and hardware performance by using real-time data interchange over CORBA. The real-time environment and communication of the OCP structure also enable control system reconfiguration at run time upon failure detection, and then applying the new control system algorithm.

Figure 5. OCP Development Process

The OCP software infrastructure for PC1 is focused on trajectory and control algorithm calculation and the reconfiguration system, while PC2 is used to send start/stop commands, for health monitoring and mission strategies, and for post-experiment data analysis and plotting. PC1 and the PC2 remote computer communicate via an acoustic serial modem at a speed of 56 kbaud. The OCP software development process is described in Figure 5. The hardware executable module for PC1 implements PWM hardware motor control initialization with a standard 1 ms pulse width for motor speed control and adjustment. To increase the rotation for a constant trajectory, the pulse widths are increased automatically by the software module. The user in the MCS can set the AUV main control loop frequency, starting from 100 Hz, which was used for all experiments described in this paper, and the mission mode status (take-off, landing, and cruise). The remote PC in the MCS then runs a separate view-based graphical user interface (GUI) program that implements the monitor view, message view, and other GUIs. The programs on the AUV PC communicate through the serial connection with four types of custom-developed packets: data, message, file, and command packets. During the experiment, the time histories of all state variables are recorded on the memory disk of the MCS PC in a binary database format. After the experiment, the data file is transferred to MATLAB for further analysis, as depicted in Figure 6.

Figure 6. Complicated Matrix Operations with Matlab

In Figure 6, another advantage of using the interface between C++ and Matlab is shown: handling complicated matrix operations, especially multidimensional matrix operations; the block diagram in the figure depicts the operating principle. Before calling the Matlab matrix functions via the interface, the C++ program collects the data from the control system (hardware) and creates the data files in ASCII format. After that, the Matlab matrix operations are called by C++. Matlab first picks up the data from the data files saved by C++ and then performs the matrix operations based on those data. The results can be sent back to C++ by Matlab in file format, and the C++ program can continue to execute the control task based on the results sent back by Matlab.
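The file-based hand-off described above can be sketched generically as follows: dump the control data as ASCII, invoke an external matrix-analysis step, and read the result back. The command name run_matrix_analysis is a hypothetical placeholder, since the exact Matlab invocation used by the authors is not given in this excerpt.

```python
# Sketch of a file-based data exchange in the spirit of the C++/Matlab
# interface described above.  The external command name is a hypothetical
# placeholder, not the authors' actual Matlab invocation.
import subprocess

import numpy as np


def exchange_via_files(state_history: np.ndarray,
                       data_path: str = "control_data.txt",
                       result_path: str = "analysis_result.txt") -> np.ndarray:
    # 1) The control program saves its data in ASCII format.
    np.savetxt(data_path, state_history, fmt="%.6f")
    # 2) The external tool (Matlab in the paper) reads the file, performs the
    #    matrix operations, and writes its result to another file.
    subprocess.run(["run_matrix_analysis", data_path, result_path], check=True)
    # 3) The control program picks the result up and continues its task.
    return np.loadtxt(result_path)


if __name__ == "__main__":
    dummy_states = np.random.rand(100, 6)  # e.g. 100 samples of 6 state variables
    result = exchange_via_files(dummy_states)
    print("analysis returned an array of shape", result.shape)
```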


VI. EXPERIMENTAL RESULT
The experiments began with single-machine measurements of system performance and sensor properties, and then moved to multiple machines connected over a CORBA network. With these experiments we obtained various results, methods, and performances for different classifications. However, only the most interesting subsets, which relate to event-channel communication, are reported in the following figures; most of them document the message latency and throughput of the network as affected by the number of clients.

Figure 7. Average Speed vs Message Size

The first result, shown in Figure 7, is a comparison of the performance (speed) of a standard 10 Mb Ethernet network. On average, sending a message of the specified size through CORBA over an Ethernet transaction achieved approximately 1000 kbytes/sec for messages above 50 kbytes. The 512-byte buffer size was chosen for system compatibility reasons; however, the message latency for small message sizes is not efficient.

Figure 8. Maximum Throughput vs Message Size

In Figure 8, the TCP throughput of CORBA over the standard Ethernet network nearly reaches its maximum theoretical throughput at 8 Mb/sec, or 80% of the theoretical maximum speed of Ethernet, because the CPU of the machine is fast enough to process any data transfer and the buffer size is also small. For several destinations, the same experiment using 512-byte frames with 2 to 5 destinations is shown in Figure 9. For large message sizes, the event channels are still able to meet the requirements for more than 5 clients simultaneously, but the major revelation was not discovered in this experiment, especially uncovering the effective programming parameters and number of clients that significantly affect the overall performance. This research program is an ongoing project in the GN&C Laboratory, and the event channel is being considered for distributing the navigation information over the network.

Figure 9. Message Latency vs Number of Clients
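The kind of measurement behind Figures 7-9 can be reproduced generically with a small socket benchmark such as the sketch below, which uses plain TCP rather than a CORBA event channel: it times round trips for increasing message sizes and reports average latency and throughput.

```python
# Generic latency/throughput probe over plain TCP, in the spirit of the
# measurements in Figures 7-9 (the real experiments used CORBA event
# channels, not raw sockets).  A local echo server is started in a thread.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007


def echo_server() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(65536):
                    conn.sendall(data)


def measure(message_size: int, repeats: int = 50):
    payload = b"x" * message_size
    with socket.create_connection((HOST, PORT)) as sock:
        start = time.perf_counter()
        for _ in range(repeats):
            sock.sendall(payload)
            received = 0
            while received < message_size:        # wait for the full echo
                received += len(sock.recv(65536))
        elapsed = time.perf_counter() - start
    latency_ms = 1000 * elapsed / repeats          # average round-trip time
    throughput_kb_s = (message_size * repeats / 1024) / elapsed
    return latency_ms, throughput_kb_s


if __name__ == "__main__":
    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)  # give the server time to start listening
    for size in (512, 4096, 16384, 65536):
        lat, thr = measure(size)
        print(f"{size:6d} bytes  latency {lat:7.2f} ms  throughput {thr:9.1f} kB/s")
```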

VII. CONCLUSIONS AND FUTURE WORKS
The performance evaluation and the experimental results demonstrate that our proposed method could be used to provide an additional delivery method for distributing navigation messages within an AUV implementation. Our future work will consist of additional experiments and measurements in this area to increase application scalability for distributing large navigation information. Furthermore, the major revelations are expected from these experimental steps, especially uncovering the effective RT-CORBA programming parameters, according to the network parameters, that significantly affect the overall performance.

REFERENCES
[1] T. Samad and G. Balas, Software-Enabled Control: Information Technology for Dynamical Systems, John Wiley & Sons/IEEE Press, 2003.
[2] D. C. Schmidt and F. Kuhns, "An Overview of the Real-time CORBA Specification," IEEE Computer, special issue on Object-Oriented Real-time Distributed Computing, E. Shokri and P. Sheu, Eds., June 2000.
[3] L. Wills, S. K. Kannan, B. S. Heck, G. Vachtsevanos, C. Restrepo, S. Sander, D. P. Schrage, and J. V. R. Prasad, "An Open Software Infrastructure for Reconfigurable Control Systems," Proceedings of the American Control Conference, Chicago, Illinois, June 2000.
[4] J. L. Paunicka, D. E. Corman, and B. R. Mendel, "A CORBA-Based Middleware Solution for UAVs," Fourth International Symposium on Object-Oriented Real-Time Distributed Computing, Magdeburg, Germany, May 2001.
[5] K. P. Valavanis, D. Gracanin, M. Matijasevic, R. Kolluru, and G. A. Demetriou, "Control Architectures for Autonomous Underwater Vehicles," IEEE Control Systems Magazine, vol. 17, pp. 48-64, Dec. 1997.


CONVOLUTE BINARY WEIGHTED BASED FRONTAL FACE FEATURE EXTRACTION FOR ROBUST PERSON'S IDENTIFICATION
Bima Sena Bayu D.(1), Jun Miura(2)
(1) Dept. of Computer Engineering, Electronic Engineering Polytechnic Institute of Surabaya
(2) Dept. of Computer Science and Engineering, Toyohashi University of Technology
Email: [email protected], [email protected]

Abstract – This paper presents a novel algorithm for person identification based on a Binary Weighted feature of the frontal face. We identify a specific person by using the frontal face only, ignoring the other parts of the body. We use Haar Cascade based face detection to limit our observation area. To handle variable distance, we automatically resize the cropped image to 100 x 100 pixels and create a binary image using an adjustable threshold. We apply weighting values to the binary value of every pixel to obtain vector data for two variables that represent the face's feature in the horizontal and vertical directions. We use convolution to simplify and reduce the dimension of the feature data by combining the horizontal feature and the vertical feature into a single feature vector, and we process it using the Pearson Product Moment Correlation Coefficient (PPMCC) to measure the correlation between a new feature input and the feature database. The experimental results prove that our algorithm is robust enough to identify the desired person under constant lighting conditions.
Keywords: Person Identification, Binary Weighted, Haar Cascade, Face Detection, Convolution, Pearson Product Moment Correlation Coefficient

I. INTRODUCTION
Research in the face recognition field has grown very rapidly in the last few decades. The hardest problem is how to obtain robust face features with respect to several aspects: computation speed, accuracy, and distance variation. Many methods and algorithms have been proposed to solve face recognition problems. Principal Component Analysis (PCA) and its variants are very popular methods used in face recognition [1, 2, 3], and Linear Discriminant Analysis (LDA) is another well-known technique [4, 5, 6]. Although PCA is used as a successful dimensionality reduction technique in face recognition, direct LDA based methods cannot provide good performance when there are large variations and illumination changes in the face images. LDA with some extensions, such as New LDA [4], Direct LDA [5] and Fisher's LDA [6], has been proposed. The other most commonly used algorithms for face recognition are based on frequency: Fourier, wavelet, and spectral analysis. Most of them treat the image data as a collection of 1-D signals. Fei Wang et al. [7] developed a face recognition system based on spectral feature analysis (SFA).


They claimed that (1) SFA does not suffer from the small-sample-size problem; (2) SFA can extract discriminatory information from the data, and they show that linear discriminant analysis can be subsumed under the SFA framework; and (3) SFA can effectively discover the nonlinear structure hidden in the data. However, it still has a disadvantage: SFA is sensitive to outliers. Serdar Cakir et al. [8] proposed a face recognition system based on Mel and Mellin cepstral features. They claimed that the proposed algorithm provides robustness against rotation and scale, and that 2D cepstral domain feature extraction techniques provide not only better recognition rates but also dimensionality reduction of the feature matrix sizes in the face recognition problem. Other researchers [9, 10, 11] have also produced face recognition methods based on binary image data. They call their algorithm Local Binary Pattern (LBP). They claim that by using LBP, the face features can be extracted faster, in a single scan through the raw image, and lie in a lower-dimensional space, while still retaining facial information efficiently. LBP features are effective and efficient for facial expression discrimination. Additionally, experiments on face images with different resolutions show that LBP features are robust to low-resolution images, which is critical in real-world applications where only low-resolution video input is available. In this paper, we propose a face recognition system based on binary images. We combine a binary weighted method, similar to the method used in the DAC process, with convolution to simplify our face feature into a 1-dimensional signal. We use the binary weighted method to express the position of the data with more precision; it can exactly detect the valid position of the data. This paper is organized as follows: Chapter I, Introduction, discusses previous work in the face recognition field; Chapter II, Face Detection, discusses face detection using the Haar Cascade method; Chapter III, Convolute Binary Weighted, discusses face feature extraction using a combination of the binary weighted method and the convolution method; Chapter IV, Face Recognition, discusses the overall face recognition system using the Convolute Binary Weighted feature; Chapter V, Experimental Result, discusses the experimental results and a performance comparison between the Convolute Binary Weighted (CBW) method, the Local Binary Pattern (LBP) method and the Principal Component Analysis (PCA) method; and finally Chapter VI, Conclusion.


II. FACE DETECTION
Face detection is the most rapidly developed topic in the face recognition world. Face detection is absolutely needed for face recognition because the main object to analyze is the human face. The problem is how to extract the human face region from a heterogeneous background. Many researchers have already performed experiments to solve this problem. The biggest and most influential solution for finding the human face region, using the Haar Cascade method, was developed by Viola and Jones [12]. They use a feature derived from the Haar wavelet. A Haar feature is a pair of blocks containing a high interval value and a low interval value in one dimension. In two dimensions, a Haar feature is a rectilinear block containing two pairs of high interval values and low interval values arranged side by side. Figure 1 shows the configuration of Haar features.

Figure 1 Haar Features. Two-rectangle features are shown in (A) and (B), a three-rectangle feature is shown in (C), and a four-rectangle feature is shown in (D)

The combinations of rectangles used for the detection of visual objects are not actual Haar wavelets. However, these rectangular combinations are suitable for visual recognition tasks and give better results. Because of these differences, this feature is referred to as a Haar feature or Haar-like feature, not a Haar wavelet. For detecting objects in a visual image, Viola and Jones combined four main concepts:
- simple rectangle features, called Haar features;
- the integral image for feature detection;
- the AdaBoost machine-learning method;
- a cascade classifier to increase detection performance.
The integral image is used to evaluate hundreds of Haar features in an image, and at multiple scales, efficiently. A Haar feature can be obtained by taking the difference between the average pixel value of the dark region and the average pixel value of the bright region. If the difference is over the threshold value obtained from the learning process, the Haar feature is determined to be present. The AdaBoost method is used to choose several weak classification results from the integral image and give a weight to each classification result; the dominant weighted combination is a strong classifier. Viola and Jones integrate several classifiers as a filter chain, as shown in Figure 2. The filter chain is efficient enough to cluster image regions: each filter is a weak classifier containing a Haar feature, and an image region that can pass through all of the filters in the chain is determined to be a face.

Figure 2 AdaBoost Method as a filter chain

Viola and Jones call this process a Cascade Classifier or Cascade Filter.

148

3.1. Binary Weighted
The binary weighted method is a method usually used in a Digital to Analog Converter (DAC) to differentiate each input according to its position. By identifying each input based on its position, the DAC output can be calculated. Walt Kester [13] describes the DAC process in an electrical circuit. The voltage-mode binary-weighted resistor DAC shown in Figure 3 is usually the simplest textbook example of a DAC. However, this DAC is not inherently monotonic and is actually quite hard to manufacture successfully at high resolutions. In addition, the output impedance of the voltage-mode binary DAC changes with the input code.

Figure 3 Voltage-Mode Binary Weighted Resistor DAC

Current-mode binary DACs (resistor-based) are also described in [13]. An N-bit DAC of this type consists of N weighted current sources (which may simply be resistors and a voltage reference) in the ratio 1:2:4:8:...:2^(N-1). The LSB switches the 2^(N-1) current, the MSB the 1 current, and so on. The binary weighted DAC approach is what we use to express the position of the data with more precision; it can exactly detect the valid position of the data.


When the data are not shifted, it can be guaranteed that the output is stable. An image, as a 2-D object, has two axes, the x-axis and the y-axis; a combination of x and y is the position of a specific color. To use the binary weighted method, the color image must be transformed into a binary image. A binary image is represented using "0" and "1": 0 means a bright color and 1 means a dark color. To separate dark and bright colors, we can use a hard threshold value, e.g. 128. Figure 4 shows the result of transforming a color image into binary image data.

I^*(x,y) = \begin{cases} 1, & I(x,y) < T \\ 0, & I(x,y) \geq T \end{cases}   (1)

where I*(x,y) is the binary value of pixel (x,y), I(x,y) is the color value of pixel (x,y), and T is the threshold value, e.g. 128.

Figure 4 (A) Color Image and (B) Binary Image data

In this research, our object is the human face. A human face contains information from several parts of the face: the eyes, nose, mouth, and other areas. As we know, the global positions of the eyes, nose, and mouth are relatively the same for each person, but the shape and size of the eyes, nose, and mouth vary and are slightly shifted, so they can serve as unique parameters for identifying a specific human face.

Figure 5 Human face's unique parameters: (A) Original image, (B) Binary image data

According to Figure 4 (B), it can be seen that each person has a different model of face features. Under a constant lighting condition, face parameters such as the eyes, nose, and mouth have different shapes, sizes, and positions. If black pixels mean "1" and white pixels mean "0", we can differentiate each feature by applying a specific weight to each pixel based on its position and summing the weights into one value. The equations used for the 2-dimensional image are:

BW(x) = \sum_{y=0}^{N-1} I^*(x,y)\, w(y), \quad w(y) = 2^{(N-y)/N}   (2)

BW(y) = \sum_{x=0}^{N-1} I^*(x,y)\, w(x), \quad w(x) = 2^{(N-x)/N}   (3)

where BW(x) is the binary weighted value for the x-axis, BW(y) is the binary weighted value for the y-axis, I*(x,y) is the binary value of pixel (x,y), w(y) is the weight value for the y-th pixel, w(x) is the weight value for the x-th pixel, y is the y-axis variable, x is the x-axis variable, and N is the number of pixels. Based on equations (2) and (3), the weight value is 2^{(N-i)/N} with i = 0, 1, 2, 3, ..., N-1. This means the MSB has the highest rank, 2^1, and the LSB has the lowest rank, 2^{1/N}. This small range of powers has the advantage of reducing the effect of a change in a particular pixel: if one or more pixels inside a line change, there is no significant change to the overall total value. The stability of the features is shown in Figure 6.

Figure 6 The stability of features
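Equations (1)-(3) translate directly into a few lines of NumPy, as sketched below. The 100 x 100 image size and the threshold of 128 follow the text; the random input image is an assumed placeholder for a cropped face.

```python
# Binary-weighted features per equations (1)-(3): threshold the image, then
# weight each binary pixel by 2^((N-i)/N) along the opposite axis and sum.
import numpy as np


def binary_weighted_features(gray: np.ndarray, threshold: int = 128):
    n_rows, n_cols = gray.shape
    binary = (gray < threshold).astype(np.float64)         # eq. (1): dark -> 1, bright -> 0
    w_y = 2.0 ** ((n_rows - np.arange(n_rows)) / n_rows)   # weights w(y)
    w_x = 2.0 ** ((n_cols - np.arange(n_cols)) / n_cols)   # weights w(x)
    bw_x = binary.T @ w_y   # eq. (2): one value per column (horizontal feature)
    bw_y = binary @ w_x     # eq. (3): one value per row (vertical feature)
    return bw_x, bw_y


if __name__ == "__main__":
    face = np.random.randint(0, 256, size=(100, 100))  # placeholder for a cropped face
    bw_x, bw_y = binary_weighted_features(face)
    print(bw_x.shape, bw_y.shape)  # (100,) horizontal and (100,) vertical features
```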


3.2. Convolution
Convolution is a mathematical way to combine two signals into a signal in another form. Convolution uses formal mathematical operations such as multiplication, addition, and data shifting. Whereas addition or multiplication operates on two numbers to produce a third number, convolution combines two signals and generates a third signal that is typically viewed as a modified version of one of the original signals, giving the overlap between the two signals as a function of the amount by which one of the original signals is translated. Equation (4) below describes the convolution process:

y(n) = \sum_{k=0}^{N} x(k)\, h(n-k), \quad n = 1, 2, 3, \ldots, M   (4)

where y(n) is the output signal, x(k) is the first signal, h(n-k) is the translated second signal, N is the number of translations, and M is the number of data points (the signal length).

By using convolution, we can unify the x-axis data set and the y-axis data set into a single, simpler set of data. Figure 7 shows the result of the convolution between the x-axis data set and the y-axis data set.

Figure 7 Convolution of x-axis and y-axis data set
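Equation (4) is exactly what numpy.convolve computes. The toy example below uses two short stand-in vectors rather than real BW(x) and BW(y) features, just to show how the two axis data sets collapse into one combined sequence.

```python
# Equation (4) in action: np.convolve slides one sequence over the other and
# sums the overlapping products; applied to BW(x) and BW(y) it yields the
# single combined CBW vector.
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # stand-in for the x-axis feature BW(x)
h = np.array([0.5, 1.0])        # stand-in for the y-axis feature BW(y)
print(np.convolve(x, h))        # [0.5 2.  3.5 3. ], length len(x) + len(h) - 1
```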

IV. FACE RECOGNITION
Face recognition is a step-up process after face detection, where a deeper analysis of the human face features is needed. After the targeted person is detected by his/her face, the face is captured and cropped to avoid interference with the background image. The cropped image is pure and guaranteed to be a face region only.

Figure 8 Face Recognition Process

4.1 Grayscale Image
The grayscale image is very important because we can still display an image with a smaller memory footprint or dimension. Equation (5) below describes the grayscale process:

Gray = \frac{Red + Green + Blue}{3}   (5)

Figure 9 shows the conversion from a color image to a grayscale image.

Figure 9 (A) Color Image and (B) Grayscale Image

4.2 Binary Image and Binary Data
A binary image is an image that has only two gray values, black and white. Conversion of a black-and-white image into a binary image is done by a thresholding operation. This operation classifies the gray value of each pixel into two classes, black and white. Figure 10 shows the binary image and binary data.

Figure 10 (A) Binary Image and (B) Binary Data

4.3 Convolute Binary Weighted (CBW) Feature
The Convolute Binary Weighted feature is a combination of the horizontal feature and the vertical feature of the targeted face. The horizontal feature contains the face's feature from each column of the sub-window image, and the vertical feature contains the face's feature from each row of the sub-window image. For the horizontal feature, each pixel value in one column is weighted by 2^n; all columns are processed in the same way. Likewise, for the vertical feature, each pixel value in one row is weighted by 2^n, and all rows are processed in the same way. Figure 11 shows the Convolute Binary Weighted feature results.

Figure 11 Convolute Binary Weighted Result

4.4 Pearson Product Moment Correlation Coefficient (PPMCC)
In statistics, the Pearson Product-Moment Correlation Coefficient (PPMCC, typically denoted by r) is a measure of the correlation (linear dependence) between two variables X and Y. It is widely used in science as a measure of the strength of the linear dependence between two variables. This method was developed by Karl Pearson, and the correlation coefficient is also called "Pearson's r". Pearson's correlation coefficient between two variables is defined as the covariance of the two variables divided by the product of their standard deviations:

\rho_{X,Y} = \frac{cov(X,Y)}{\sigma_X \sigma_Y} = \frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X \sigma_Y}   (6)

The above formula defines the population correlation coefficient, commonly represented by the Greek letter ρ (rho). Substituting estimates of the covariance and variance based on a sample gives the sample correlation coefficient, commonly denoted r:

r = \frac{\sum_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i-\bar{X})^2}\,\sqrt{\sum_{i=1}^{n}(Y_i-\bar{Y})^2}}   (7)

An equivalent expression gives the correlation coefficient as the mean of the products of the standard scores. Based on a sample of paired data (X_i, Y_i), the sample Pearson correlation coefficient is

r = \frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{X_i-\bar{X}}{S_x}\right)\left(\frac{Y_i-\bar{Y}}{S_y}\right)   (8)

where (X_i - \bar{X})/S_x, \bar{X} and S_x are the standard score, sample mean, and sample standard deviation, respectively. In this paper, we apply the PPMCC as a decision maker with an r-score threshold of 0.98: if r < 0.98, the face is unknown, and if r ≥ 0.98, the face is recognized.
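The matching step of equations (7)-(8) with the 0.98 threshold can be sketched as below; np.corrcoef computes exactly the sample coefficient r of equation (7). The feature vectors in the demo are random placeholders standing in for stored CBW features.

```python
# Identification decision with the sample Pearson correlation coefficient
# (eq. (7)) and the 0.98 threshold used in the paper.
import numpy as np


def identify(new_feature: np.ndarray, database: dict, threshold: float = 0.98) -> str:
    best_name, best_r = "unknown", threshold
    for name, stored in database.items():
        r = np.corrcoef(new_feature, stored)[0, 1]   # sample Pearson r
        if r >= best_r:
            best_name, best_r = name, r
    return best_name


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    alice = rng.random(199)                   # placeholder stored CBW feature vectors
    bob = rng.random(199)
    db = {"alice": alice, "bob": bob}
    probe = alice + rng.normal(0, 0.01, 199)  # a slightly noisy re-capture
    print(identify(probe, db))                # prints "alice"
```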

V. EXPERIMENTAL RESULT
5.1. Database
The proposed algorithm was trained and tested using a local face database. We collected frontal face image data from 30 students. Each student's image was captured online under stable lighting conditions at a distance of 0.5 meters in front of the camera. Each cropped face image was converted to grayscale with the same size of 100 x 100 pixels, and each grayscale image was converted to a binary image to obtain the binary image data.
5.2. Procedure and Experimental Work
For the experiments, we used a quad-core personal computer with a 2.4 GHz processor and 4 GB of RAM. We used a Minoru 3D webcam to capture the online images. All of the programs were written using Visual C++ 6 and OpenCV pre-1.1. In order to compare the performance of the proposed Convolute Binary Weighted algorithm, we also created programs for the standard Principal Component Analysis (PCA), as used in OpenCV, and the original Local Binary Pattern (LBP). We compare the speed of the algorithms, their accuracy, and their effectiveness under changes of distance. We limit our experiment to normal and stable indoor lighting conditions. Table 1 shows the comparison result between CBW, LBP and PCA.

Table 1 Comparison result between CBW, LBP and PCA

Based on the results, CBW is fast enough at creating the feature and outperforms both PCA and LBP. CBW can produce the feature in 8.10 ms, while LBP takes 14.24 ms and PCA 24.10 ms. CBW and LBP have the highest accuracy compared with PCA: they can recognize the targeted face 100% of the time from a distance of 0.5 meters, while PCA reaches only 76.67%. When we change the distance, CBW is superior to both LBP and PCA: CBW reaches 90% recognition, while LBP and PCA reach 50% and 70%, respectively.

VI. CONCLUSION
This paper presented a new and simple algorithm for human face recognition based on binary images, called Convolute Binary Weighted. We compared our new algorithm against Local Binary Pattern and Principal Component Analysis from three points of view: speed of the algorithm, accuracy, and changes of distance. Based on our experiments, our method is robust enough compared with the others under constant lighting conditions. For more robustness against changes in lighting conditions and distance, improvements are possible, such as adding adjustable lighting control or combining our method with others to take advantage of their strengths.

REFERENCES
[1] A. Eleyan and H. Demirel, "PCA and LDA Based Face Recognition Using Feedforward Neural Network Classifier," MRCS 2006, LNCS 4105, pp. 199-206, Springer-Verlag Berlin Heidelberg, 2006.
[2] J. Yang, D. Zhang, A. F. Frangi, and J.-y. Yang, "Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, January 2004.
[3] Setiawardhana, Bima Sena Bayu D., F. Ardilla, N. Ramadijanti, S. Wasista, D. Pramadihanto, and S. A. Putra, "Metode Gabungan Viola Jones dan Eigen Principle Component Analysis untuk Pengenalan Wajah Berbasis Kamera Pada Robot iGURO," Seminar on Electrical, Informatics and Its Education, 2011.
[4] L.-F. Chen, H.-Y. M. Liao, M.-T. Ko, J.-C. Lin, and G.-J. Yu, "A New LDA-Based Face Recognition System Which Can Solve the Small Sample Size Problem," Pattern Recognition, pp. 1713-1726, 2000.
[5] H. Yu and J. Yang, "A Direct LDA Algorithm for High-Dimensional Data with Application to Face Recognition," Pattern Recognition, pp. 2067-2070, 2001.
[6] W.-S. Zheng, J.-H. Lai, and P. C. Yuen, "GA-Fisher: A New LDA Based Face Recognition Algorithm With Selection of Principal Components," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 35, no. 6, December 2005.
[7] F. Wang, J. Wang, C. Zhang, and J. Kwok, "Face recognition using spectral features," Pattern Recognition, vol. 40, pp. 2786-2797, 2007.
[8] S. Cakir and A. E. Cetin, "Mel- and Mellin-cepstral Feature Extraction Algorithms for Face Recognition," The Computer Journal, January 17, 2011.
[9] T. Ahonen, A. Hadid, and M. Pietikainen, "Face Recognition with Local Binary Patterns," ECCV 2004, LNCS 3021, pp. 469-481, Springer-Verlag Berlin Heidelberg, 2004.
[10] C. Shan, S. Gong, and P. W. McOwan, "Robust Facial Expression Recognition Using Local Binary Patterns," IEEE, 2005.
[11] F. A. Pujol and J. C. García, "Computing the Principal Local Binary Patterns for face recognition using data mining tools," Expert Systems with Applications, vol. 39, pp. 7165-7172, Elsevier, 2012.
[12] P. Viola and M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features," Computer Vision and Pattern Recognition Conference, 2001.
[13] W. Kester, "Basic DAC Architectures II: Binary DACs," Analog Devices, MT-015 Tutorial.
[14] R. Yount, "Research Design and Statistical Analysis in Christian Ministry: Chapter 22. Correlation Coefficients," 4th ed., 2006.


Evaluation of Speech Recognition Rate on Cochlear Implant
Nuryani, Dhany Arifianto
Department of Engineering Physics, Sepuluh Nopember Institute of Technology, Surabaya 60111, Indonesia
[email protected], [email protected]

Abstract - A cochlear implant (CI) is an implanted prosthesis for people with profound hearing loss. The word-recognition rate of CI users is above 90% in quiet situations. However, in the presence of a fluctuating background noise (masker), such as in a market, intelligibility is degraded significantly. The main problem is that the acoustical cues of the target speech overlap with those of the masker. In this paper we present a technique to improve the intelligibility for CI users with a fluctuating masker. The desired speech (target) and the masker were processed with a channel vocoder to simulate the CI device. The number of activated channels of the target was set differently from that of the masker before the two were mixed as an audio stimulus. We then presented these sound stimuli to normal-hearing listeners through circumaural headphones in a sound-attenuated booth to evaluate the word recognition rate. The listeners were first presented the stimuli monaurally and then diotically, to test the hypothesis that two ears may have access to more acoustical cues. The results of the experiment suggest that the diotic presentation has only a slightly higher recognition rate. This result may imply that implantation in both ears may provide only a little enhancement of word recognition in the presence of a masker.
Keywords: Cochlear Implant, vocoder, speech intelligibility, target-masker, release of masking.

I. INTRODUCTION

Verbal communication is the most natural way to communicate daily with each other in society. However, people with profound hearing loss may have to use other means to convey a message, such as sign language, which limits the flexibility of expressing themselves. A cochlear implant (CI) is a prosthetic device for people with profound hearing loss, implanted into the inner ear to replace the function of the damaged cochlea [1]. With this device, one may partially restore his/her ability to hear an incoming signal and differentiate it from noise [3]. Essentially, a cochlear implant is designed to produce hearing sensations by applying electrical stimulation to the auditory nerve (in place of the outer and inner hair cells). Recently, CI users have been able to achieve more than 90% word recognition in a quiet room. However, the current CI does not employ a sophisticated signal processing technique to reject background noise, which reduces the word-recognition rate significantly. Usually, when a cochlear implant user listens to the voice of the person he or she wants to hear (the target), the sound is mixed with a different voice (the masker), so the sounds overlap. This reduces the level of clarity (intelligibility) experienced by the user.


The authors of [11] studied the effects of three algorithms, namely spectral subtraction (SS), minimum mean squared error spectral estimation (MMSE), and subspace analysis (SA), using two types of noise (car and babble) and a variation of the signal-to-noise ratio (SNR). The results showed that there was no large decrease or increase in the intelligibility score across the different SNRs, so it can be said that there is no interaction between noise suppression and SNR. Goushen Yu et al., 2008, tried to eliminate "musical noise" using threshold block estimation, with objective and subjective evaluation. In a previous study [7], the subjects did not understand the target stimuli (simulating cochlear implant users) in the mixture of stimuli because the target stimuli were less dominant than the masker stimuli (other speech). However, those studies considered only a subjective evaluation using solely the mean opinion score (MOS), without considering a speech intelligibility index. In another study [4], the stimuli used were in English, so there was difficulty in finding subjects. In this study, the stimuli are mixed with the target stimuli dominant, and the stimuli used are in Indonesian. From this study we hope to help cochlear implant users hear the voices of other people by exploiting the release-of-masking phenomenon.

II. RELEASE OF MASKING

When several people talk at the same time, listening to only one of them can be difficult. The level of difficulty depends on several factors, including the position of the speakers, the similarity of their voices, and the message content. Moreover, for cochlear implant patients the ability to listen to a single voice under these conditions is in fact even lower. Some may argue that the speech intelligibility problems of people with cochlear implants when listening in noisy situations can be reduced by amplifying the incoming sound signal; however, this also makes the noise environment louder. From Figure 1 it can be seen that both the target and the masker have varying amplitudes, sometimes large and sometimes small. When the target and masker are mixed, the signals overlap and the intelligibility of the target is reduced. In this process there are moments where the amplitude of the masker is relatively lower than the amplitude of the target (release of masking), as marked in the figure, so there are moments where the target has high intelligibility.

Figure 1 Target (upper), masker (bottom), and RM, the area of release of masking


III. METHODS

A. Stimuli
The sound database used is the result of recordings made in a soundproof room using Indonesian sentences. The database consists of a male voice and a female voice, each reading 500 sentences, sampled at 44 kHz; the two speakers recorded the same sentences at different times. Some examples of the sentences are:
/P-e-r-t-a-n-d-i-n-g-a-n-s-e-p-a-k-b-o-l-a-a-k-a-n-d-i-g-e-l-a-r-d-i-l-a-p-a-n-g-a-n-k-e-c-a-m-a-t-a-n/ ("The football match will be held on the sub-district field")
/P-e-r-a-w-a-t-s-e-d-a-n-g-m-e-m-b-a-n-t-u-d-o-k-t-e-r-m-e-r-a-w-a-t-p-a-s-i-e-n-j-a-n-t-u-n-g-i-t-u/ ("The nurse is helping the doctor treat that heart patient")
/S-i-s-w-a-s-i-s-w-a-s-e-d-a-n-g-k-e-r-j-a-b-a-k-t-i-m-e-m-b-e-r-s-i-h-k-a-n-s-e-k-o-l-a-h-a-n/ ("The students are cleaning the school together")


Therefore, the voices to be used as stimuli must first be processed by a vocoder. Both the target and the masker are processed by a channel vocoder that simulates a cochlear implant device according to the number of activated electrodes, using 2, 4, 6, 8, 12 and 20 channels, all at the same SNR of 5 dB. After the vocoder processing, the target and masker voices are combined at the channel ratios listed in Table I. Both the target and the masker are male voices. The mixed sounds are grouped by channel combination, ten sounds per group, which makes it easy to test each ratio with different stimuli.

B. Apparatus
Several pieces of apparatus were used in this project. For the recording step we used a Shure SM58 microphone connected to an E-MU 0404 audio interface. For the experiments we used Sennheiser HD650 headphones, and the stimuli were played from a laptop connected to the E-MU 0404.

C. Procedures
The mixtures of the target and masker signals at the ratios of Table I were then tested on the subjects. Each subject listened to all the ratios, with ten different sentences per ratio. The test was performed twice: once listening with two ears (diotic headphone presentation) and once with one ear (monaural presentation). Everything was tested on 10 different subjects per test, who were asked to write down what they heard. From the results we calculated the percent correct word, so that we could determine the best target-to-masker channel ratio, decide whether monaural or diotic presentation is more appropriate, and determine the appropriate number of electrodes to activate on the cochlear implant.
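As an illustration of the processing chain described above, the following is a minimal Python sketch of a noise-band channel vocoder and of mixing a target and a masker at a chosen SNR. The logarithmic band spacing, the 160 Hz envelope cutoff and the filter orders are assumptions made for illustration; the paper does not specify these details.

import numpy as np
from scipy.signal import butter, lfilter

def bandpass(x, lo, hi, fs, order=4):
    # Band-pass filter between lo and hi Hz.
    nyq = fs / 2.0
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return lfilter(b, a, x)

def lowpass(x, cutoff, fs, order=2):
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return lfilter(b, a, x)

def noise_vocoder(x, fs, n_channels, f_lo=100.0, f_hi=7000.0, env_cut=160.0):
    # Split speech into n_channels bands, extract each band's envelope,
    # and use it to modulate band-limited noise (CI simulation).
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(x, lo, hi, fs)
        env = lowpass(np.abs(band), env_cut, fs)          # envelope by rectification + LPF
        carrier = bandpass(rng.standard_normal(len(x)), lo, hi, fs)
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)            # normalize peak level

def mix_at_snr(target, masker, snr_db):
    # Scale the masker so the target-to-masker power ratio equals snr_db, then mix.
    n = min(len(target), len(masker))
    t, m = target[:n], masker[:n]
    gain = np.sqrt(np.mean(t ** 2) / (np.mean(m ** 2) * 10 ** (snr_db / 10.0) + 1e-12))
    return t + gain * m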

Figure 1. Target (upper), masker (bottom), and RM, the region of release of masking.

The two speakers could record in a single session, but recording could also be continued on another day, because there are 500 sentences to read; they were paid according to the time they needed. The stimuli tested on the subjects are the recorded voices, downsampled to 16 kHz in MATLAB R2009a. In this step one of the recordings is used as the interfering speech (masker) and the other voice is used as the original sound (target). In this study the stimuli are tested on subjects with normal hearing, so the stimuli must be processed to simulate the sound delivered by a cochlear implant.

D. Subjects
The subjects used for this test were Engineering Physics students with normal hearing: 10 people, male and female, different for each test. This was done to obtain more valid data, because if the same subjects heard the same sentences they could guess the answers easily. The total number of female and male subjects was 20, with an average age of 21 years, all with normal hearing. As in the database creation, the subjects were paid for their participation according to the time they needed.

E. Data Analysis
The data analysis computed the word recognition rate (%), calculated with the following equation:

% correct word = (number of correct words / total number of words) x 100%    (1)
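A simple way to compute Eq. (1) from a listener's written response is sketched below; the whitespace tokenization and the case-insensitive matching are assumptions about how the responses were scored.

def percent_correct_words(response, reference):
    # Eq. (1): number of reference words reproduced by the listener, as a percentage.
    ref_words = reference.lower().split()
    remaining = response.lower().split()
    correct = 0
    for word in ref_words:
        if word in remaining:
            remaining.remove(word)   # count each response word at most once
            correct += 1
    return 100.0 * correct / len(ref_words)

# Example: a response missing one word of a five-word sentence scores 80%.
print(percent_correct_words("perawat sedang membantu dokter",
                            "perawat sedang membantu dokter merawat"))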

From these values we then plotted graphs to compare the monaural and diotic presentations and the different channel ratios. The graphs also include error bars, to indicate the spread around each value.


IV. EXPERIMENT

The experiment we performed was to test the stimuli (target and masker mixed at the ratios shown in Table I) on 10 subjects. Each subject entered the soundproof room with the author and was asked to wear headphones. We first played two practice stimuli, drawn randomly from the dominant-target condition of each ratio, without telling the subject beforehand that they were only practice; this practice was meant to familiarize the subjects with the stimuli. We then presented the test stimuli, without repeating the stimuli that had been used for practice.

TABLE I. NUMBER OF TARGET-TO-MASKER CHANNELS

No.   Target channel   Masker channel
1.        2                 2
2.        4                 2
3.        4                 4
4.        6                 2
5.        6                 4
6.        6                 6
7.        8                 2
8.        8                 4
9.        8                 6
10.       8                 8
11.      12                 2
12.      12                 4
14.      12                 6
15.      12                 8
16.      12                12
17.      20                 2
18.      20                 4
19.      20                 6
20.      20                 8
21.      20                12
22.      20                20

In this project we performed two experiments. In the first, subjects heard the stimuli with two ears (diotic headphone presentation); in the second, with one ear (monaural presentation). In the first experiment we observed the subjects' responses to the stimuli: after one stimulus was played, the subject had to write down, on paper or on a laptop, the sentence he or she heard. Each stimulus was played only once. After writing the response, the subject listened to the next stimulus, up to the tenth stimulus of that channel ratio, and then continued with the stimuli of the next channel ratio. Different subjects were used for each test, and a subject who could not finish the experiment in one session could continue it at another time.

V. DISCUSSION

First, we tested with diotic headphone presentation; the result is shown in Figure 2. The figure shows that the percent correct word for each target-masker ratio is high, although not 100%. In the condition with a 2-channel target and a 2-channel masker we obtained less than 10% correct words, the lowest value in this experiment, while the highest value, about 82%, occurred at the 12-4 channel ratio. From the 20-12 ratio up to the 12-4 ratio the percent correct word is above 50% and keeps increasing; only when the numbers of target and masker channels are both low does the percent correct word fall below 50%, and these low values occur when the target and masker have the same number of channels. The number of channel ratios with more than 50% correct words exceeds the number with less than 50%: only 9 ratios fall below 50%, while 12 ratios are above 50%.

In the second experiment we tested with monaural headphone presentation, enabling only the right side of the headphones (right ear only). This experiment also used 10 subjects, and the result is shown in Figure 2. As in the first experiment, the 2-channel target with 2-channel masker condition gives less than 10% correct words and is the lowest value, while the 12-channel target with 2-channel masker condition is the highest. From the 12-12 ratio onwards the percent correct word rises above 50%, but it does not exceed the diotic results of the first experiment. In the second experiment the number of channel ratios with less than 50% correct words exceeds the number above 50%: 11 ratios are below 50% and 10 ratios are above 50%.

Figure 2. Results of the experiment for varying target-masker channel numbers, with monaural and diotic headphone presentation.


In the second experiment the subjects heard the stimuli through the right ear only, so the percent correct word is worse than in the first experiment, in which the stimuli were heard through both sides of the headphones and could therefore be heard better. Comparing the two experiments in the figure above, the result of the first experiment with diotic presentation is better than that of the second experiment with monaural presentation. For example, at the lowest channel ratio, 2-2, the first experiment gives 3.71% correct words while the second gives only 2.18%. In the next stage of processing we simply exclude the channel ratios with low percent correct word, such as the 2-2, 4-2 and 4-4 conditions. If we compare only the highest scores, however, the monaural condition scores higher than the diotic one: as shown in Figure 2, the monaural condition reaches 83.9% at the 12-4 channel ratio, whereas the diotic condition reaches only 81.9% at the 12-2 channel ratio.


VI. CONCLUSIONS

This work was intended to evaluate whether people with profound deafness in both ears would benefit from bilateral implantation. The results suggest that monaural listening performed worse than diotic presentation of the stimuli when exploiting the release-of-masking phenomenon. This may be because more streams of speech cues reach the brain and form a better perception. In on-going research, we are investigating several speech enhancement techniques to improve the saliency of the speech cues in the presence of fluctuating background noise.

REFERENCES
[1] G. Mueller, Brief Guide to Modern Hearing Aid Technology, Vanderbilt University, Nashville, Tennessee.
[2] G. Yu, "Audio Denoising by Time-Frequency Block Thresholding", IEEE Transactions on Signal Processing, Vol. 56, No. 5, 2008.
[3] A. A. Nugroho, Subjective and Objective Measure on Speech Intelligibility by Release of Masking Phenomenon, Tugas Akhir, ITS, 2011.
[4] L. Yu and P. C. Loizou, "A geometric approach to spectral subtraction", University of Texas at Dallas.
[5] Z. Heng Lu, H. Zong Shao, and T. Liang Ju, "Speech Enhancement Algorithm Based on MMSE Short Time Spectral Amplitude in Whispered Speech", Journal of Electronic Science and Technology of China, Vol. 7, No. 2, June 2009.
[6] D. Zhou and E. Greenbaum, Implantable Neural Prostheses 1, New York: Springer, 2009.
[7] Y. Hu and P. C. Loizou, "Objective Measures for Predicting Speech Intelligibility in Noisy Conditions Based on New Band-Importance Functions", University of Texas at Dallas.
[8] S. D. Kamath and P. C. Loizou, "A Multi-Band Spectral Subtraction Method for Enhancing Speech Corrupted by Colored Noise", University of Texas at Dallas.
[9] J. S. Lim and A. V. Oppenheim, "Enhancement and Bandwidth Compression of Noisy Speech", Proc. IEEE, Vol. 67, No. 12, pp. 1586-1604, Dec. 1979.
[10] I. H. Kurniawan, Pemanfaatan Fenomena Release of Masking untuk Meningkatkan Speech Intelligibility pada Cochlear Implant, ITS, 2010.
[11] G. Hilkhuysen and N. Gaubitch, "Effects of noise suppression on intelligibility: Dependency on signal-to-noise ratios", United Kingdom, 2010.
[12] S. F. Boll, "A Spectral Subtraction Algorithm for Suppression of Acoustic Noise in Speech", Computer Science Department, University of Utah, Salt Lake City, 1979.


Remote Laboratory Over the Internet for DC Motor Experiment Aryuanto Soetedjo, Yusuf Ismail Nakhoda, Ibrahim Ashari Department of Electrical Engineering, ITN Malang Jl. Raya Karanglo KM 2 Malang, Indonesia

[email protected]

Abstract— This paper presents a Remote Laboratory over the Internet for a DC motor experiment. Using a Remote Laboratory, students can access the laboratory resources from anywhere and at any time. The developed Remote Laboratory introduces a reconfigurable experiment, which allows different experiments to be carried out. The experimental results show that the developed Remote Laboratory works properly in controlling and monitoring the laboratory instruments remotely over the Internet almost in real time.

Keywords—Remote Laboratory; DC motor; closed loop control; LabVIEW; IP-Camera.

I. INTRODUCTION

Nowadays, the learning system for university students is based on the Student Centered Learning (SCL) system, which provides students with their competences. In the Electrical Engineering field, theory and practical skills should be developed proportionally, so conducting experiments in the laboratory is an important activity. However, some institutions may have difficulty providing such instruments because of their high cost. To overcome this problem, simulation software is usually used to replace the real experiment. Simulation software helps students to better understand the lecture, but it has some limitations, such as [1]: a) the simulation is just a model and cannot completely substitute the real experiment; b) it is difficult for students to gain live experience.

The rapid growth of information technology has led to the development of Remote Laboratories, which allow students to access the laboratory over the Internet. In a Remote Laboratory the real instruments are still used, but they can be controlled remotely. Students can conduct experiments from anywhere and at any time, so the limited instruments in the laboratory can be utilized efficiently.

Researchers have developed Remote Laboratories for several laboratory experiments in the field of Electrical Engineering [1-6]. In [2,3] the Remote Laboratory allowed students to wire basic electrical components (resistor, capacitor, inductor) remotely. A Remote Laboratory for an industrial process controlled by a PLC (Programmable Logic Controller) was developed in [4]. In [5,6] microcontroller laboratories for embedded applications were controlled and monitored via the Internet. A camera (webcam) is commonly employed in the Remote Laboratory to provide visual feedback from the laboratory. According to the type of server, Remote Laboratories can be divided into two categories: a) those using a computer server and b) those using an embedded server. The first category is commonly adopted, due to its flexibility and the ease of development with available software such as MATLAB, LabVIEW, Java, etc.

In this paper, we develop a Remote Laboratory for teaching the Control System course, particularly the DC motor experiment. In our Remote Laboratory we develop a reconfigurable experiment, which allows students to change the configuration of the experiment kit to perform two different experiments: motor characteristic and position control.

The rest of the paper is organized as follows. Section 2 presents the system architecture, covering the hardware and software architecture. The experimental results are described in Section 3. Finally, Section 4 presents the conclusions.

II. SYSTEM ARCHITECTURE

Fig. 1 illustrates the architecture of our developed Remote Laboratory. It consists of four main components: 1) DC Servo experiment kit; 2) hardware interface; 3) computer server; 4) camera. The DC Servo experiment kit is a module from EDLaboratory for carrying out experiments on control systems, such as the input property of a DC motor and closed-loop control systems (speed and position control). The hardware interface is used for interfacing between the DC Servo module and the computer; it consists of the Configuration Module and a data acquisition device (LabJack U12 [7]). LabVIEW software is installed on the computer server to control and monitor the experiment kit via the Internet; LabVIEW provides a tool to monitor and control remote devices via the Internet from a web browser [8]. An IP-Camera (TP-LINK TL-SC3171) is installed in the laboratory for visual feedback and can be accessed directly from a web browser. Since the computer server and the camera server are located separately, a router (TP-LINK TL-MR3420) is needed to forward them to the network.

Figure 1. System architecture of the proposed Remote Laboratory.

Figure 2. DC Servo Module [9]

A. DC Servo Experiment Kit
The DC Servo Experiment Kit is a reconfigurable module consisting of seven circuits (sub-modules), as illustrated in Fig. 2: 1) Summing Amplifier; 2) Pre-Amplifier; 3) Servo Driver; 4) Input Control; 5) Tacho Amplifier; 6) Attenuator; 7) Motor and Position Sensing. Each circuit is equipped with plug-in terminals for connecting it to the other circuits, so the sub-modules can be configured to perform different experiments. In this research we consider only two experiments: a) DC motor characteristic and b) position control.

Fig. 3 illustrates the configuration of the DC motor characteristic experiment. It uses only four circuits: the Servo Driver, Input Control, Tacho Amplifier, and Motor & Position Sensing circuits. The wiring connections are shown with bold lines in Fig. 3. In addition, an ammeter and a voltmeter are used to measure the current and voltage of the motor, and the motor speed is measured by the RPM meter of Circuit-5 (Tacho Amplifier). In this experiment the input potentiometer of Circuit-4 (Input Control) is used to adjust the input voltage of the motor driver. The objective of the experiment is to find the relationship between motor speed and input voltage.

Fig. 4 illustrates the configuration of the position control experiment. It uses all seven circuits of the module. As shown in Fig. 4, the output position of the motor is fed back to Circuit-1 (Summing Amplifier), forming a closed control loop. The objective of the experiment is to observe the response of the closed-loop position control system.


Figure 3. Configuration of DC motor characteristic experiment [9]

Figure 4. Configuration of position control experiment [9]

Figure 5. Configuration module.

Figure 6. LabVIEW Front Panel.

B. Configuration Module
The configuration module is designed to configure the circuits for a particular experiment, i.e. the DC motor characteristic or the position control experiment. Fig. 5 illustrates the block diagram of the configuration module. Its main components are relays and an analog signal interface. The relays switch the wire connections between the circuit terminals of the DC Servo experiment kit, while the analog signal interface handles the analog signals of the experiment kit. The relays are divided into connection relays and power relays, which switch the DC signals and the DC power of the experiment kit respectively. These relays are driven by the relay control, which is activated by digital signals from the LabJack. The analog signal interface is a signal conditioner that adapts the analog signal levels between the experiment kit and the LabJack; these analog signals correspond to the outputs of the voltmeter and ammeter and to the input of the input potentiometer.

C. LabJack U12 Data Acquisition Device
The LabJack U12 is a data acquisition device that enables communication between the PC and the real world. It communicates with the PC over the USB port and has 2 analog outputs, 20 digital I/O lines and 8 analog inputs. The advantage of using the LabJack is that it is fully compatible with LabVIEW, so it is very easy to install its driver on the computer running the LabVIEW software.
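To illustrate how such a configuration module might be driven from the computer, the sketch below maps each experiment to a set of connection relays and reads the conditioned analog channels. The daq object, its set_digital/read_analog methods, the relay line numbers and the channel assignments are all hypothetical placeholders for illustration only; they are not the actual LabJack U12 driver calls or the wiring used in the paper.

# Hypothetical relay map: which digital output lines must be driven high
# to wire up each experiment (line assignments are illustrative only).
RELAY_MAP = {
    "dc_motor_characteristic": [0, 1, 2, 3],            # four sub-modules used
    "position_control":        [0, 1, 2, 3, 4, 5, 6],   # all seven sub-modules
}

def configure_experiment(daq, experiment, n_lines=8):
    # Drive the connection relays for the chosen experiment.
    # `daq` is assumed to expose set_digital(line, state) and read_analog(channel),
    # e.g. a thin wrapper around the data acquisition driver.
    active = set(RELAY_MAP[experiment])
    for line in range(n_lines):
        daq.set_digital(line, line in active)

def read_motor_signals(daq):
    # Read the conditioned voltmeter, ammeter and tacho signals (channels assumed).
    return {
        "voltage": daq.read_analog(0),
        "current": daq.read_analog(1),
        "speed":   daq.read_analog(2),
    }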


Figure 7. LabVIEW Block Diagram.

D. LabVIEW Virtual Instrument
Fig. 6 illustrates the Front Panel of the LabVIEW Virtual Instrument of the developed Remote Laboratory. The ON/OFF buttons are used to switch on the relays that connect the terminals of the experiment kit. As the figure shows, a voltmeter, an ammeter and an RPM meter appear on the Front Panel; they replace the real instruments used in the laboratory. The components on the Front Panel are configured using the block diagram illustrated in Fig. 7.

III. EXPERIMENTAL RESULTS

To test our design, we conducted several experiments as described in the following. The instruments used in the experiments are illustrated in Fig. 1. The objective of the tests is to show the feasibility of our Remote Laboratory for conducting experiments over the Internet. Fig. 8 shows the picture captured by the IP-Camera during an experiment, as seen from the client web browser (Mozilla Firefox). Fig. 9 and Fig. 10 show the Front Panel of the Remote Laboratory accessed from the web using the Google Chrome browser.


Fig. 9 illustrates the Front Panel when the DC motor characteristic experiment is configured. As shown in the figure, to carry out this experiment the buttons on the bright/gray lines should be switched ON, the ones on the black lines switched OFF, and the main power switch (below the ammeter) switched ON. The voltage and current of the DC motor can be read on the voltmeter and ammeter shown at the top right of the figure. The RPM meter shown at the bottom middle displays the motor's speed. The upper graph on the left is the speed response over time; the figure shows a small oscillation of the speed response in steady state, caused by the inaccuracy of the data acquisition device in converting the analog signal to digital. The lower graph on the left shows the output position of the motor; the waveform on this graph indicates that the motor rotates in the clockwise direction.

Figure 8. Picture captured by IP-Camera on the web browser.

To configure the position control experiment, the buttons on the bright/gray lines should be switched OFF and the ones on the black lines switched ON, as shown in Fig. 10, with the main power switch also ON. The knob indicator at the bottom right (Position Output) shows the position of the motor, while the knob control on the left (Position Input) is used to set the set point. As shown in the figure, the position output of the motor follows the set point. The output response is shown on the lower-left graph: when the position input is changed, the output changes accordingly with a transient response, and the change is also visible in both the input and output knob indicators.

To observe the performance of the Remote Laboratory, we examined the connection reliability over the Internet. When a user accesses the Remote Laboratory, the Front Panel (about 500 Kbytes) is first downloaded to the web browser. After the Front Panel has been downloaded and displayed correctly, the file transfer needed to operate the Remote Laboratory is about 1 Kbyte/s for updating and controlling the Front Panel remotely over the Internet.

Figure 9. Experiment on DC motor characteristic.

In the experiment, we used a server connected to the Internet with a 384 kbps / 96 kbps ADSL line (downstream/upstream) and a user/client connected to the Internet using an HSDPA modem. From the experiment, the monitoring and controlling of the Remote Laboratory could be conducted almost in real time; the delay is lower than one second. This delay is mostly affected by the network traffic and the Internet speed.
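A back-of-envelope check of these figures, assuming the 500-Kbyte Front Panel leaves the laboratory over the server's 96 kbps ADSL upstream and that steady-state operation needs about 1 Kbyte/s:

# Rough estimates only; actual throughput depends on protocol overhead and traffic.
panel_bits = 500 * 1024 * 8          # ~500 KB Front Panel
upstream_bps = 96_000                # server-side ADSL upstream
print(panel_bits / upstream_bps)     # ~43.7 s upper bound for the initial download
print(1024 * 8 / upstream_bps)       # ~0.085 s per 1 KB update, consistent with sub-second delay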

Figure 10. Experiment on position control.

IV. CONCLUSIONS

In this paper, a Remote Laboratory has been developed that allows a laboratory experiment to be controlled and monitored over the Internet. From the experiments, the developed hardware and software perform well in controlling and monitoring the DC motor experiment remotely. There are a few errors in the measurement and position control due to the inaccuracy of the data acquisition device. When controlling and monitoring remotely, the delay is below one second under normal traffic and Internet speed. In future work we will improve the accuracy of the measurement system and extend the system to more complex experiments and configurations.


REFERENCES
[1] S. Gadzhanov, A. Nafalski, and O. Gol, "A remote laboratory for motion control and feedback devices", Procs. of Electrotechnical Institute, Issue 247, 2010.
[2] A. Mohtar, Z. Nedic, and J. Machotka, "Digitally controlled resistors for the remote laboratory NetLab", World Transactions on Engineering and Technology Education, Vol. 6, No. 1, 2007, pp. 67-70.
[3] I. Moon, et al., "A remote laboratory for electric circuit using passive device controlled", Procs. of International Conference on Engineering Education, Budapest, Hungary, 2008.
[4] A. C. Ammari and J. B. H. Slama, "The development of a remote laboratory for internet-based engineering education", Journal for Asynchronous Learning Networks, Vol. 10, No. 4, pp. 3-13.
[5] H. Bahring, J. Keller, and W. Schiffmann, "Remote operation and control of computer engineering laboratory experiments", Procs. of Workshop on Computer Architecture Education, 2006.
[6] H. Cimen, I. Yabanova, M. Nartkaya, and S. M. Cinar, "Web based remote access microcontroller laboratory", World Academy of Science, Engineering and Technology, Vol. 44, 2008.
[7] http://labjack.com/u12
[8] http://zone.ni.com/devzone/cda/tut/p/id/3301
[9] Electric-Electronic Trainer (IV) Additional Circuits Part 20, Experimental Manual ED-1010, EDLaboratory.


COMPARATIVE ANALYSIS OF NEURAL FUZZY AND PI CONTROLLER FOR SPEED CONTROL OF THREE PHASE INDUCTION MOTOR

Ratna Ika Putri, Electronic Department, Malang State Polytechnic, Jl. Soekarno Hatta 9, Malang 65145, [email protected]
Mila Fauziyah, Electronic Department, Malang State Polytechnic, Jl. Soekarno Hatta 9, Malang 65145, [email protected]
Agus Setiawan, Mechanical Department, Malang State Polytechnic, Jl. Soekarno Hatta 9, Malang 65145, agus [email protected]

Abstract- This paper presents the design and application of neural fuzzy and PI controllers for the speed control of a three-phase induction motor. Both controllers are investigated using MATLAB/Simulink. Based on the simulation results and the comparison of the speed responses of both controllers, the neural fuzzy controller shows good performance and settling time at no load, but the PI controller has a smaller steady-state error. Under load changes and reference variations the PI controller shows better performance: it can follow the reference changes with a smaller steady-state error than the neural fuzzy controller.

Keyword: Neural Fuzzy, PI, Controller, Induction Motor

I. Introduction
In industry, motors play an important role as drives. In general, motors are divided into two types: AC motors and DC motors. One type of AC motor, the three-phase induction motor, has started to displace DC motors in industry because it has several advantages: simple and sturdy construction, reduced maintenance cost, the absence of a commutator, lower volume and weight, and lower cost. It also performs better at high speed and produces a greater torque [2]. However, the induction motor also has disadvantages compared to the DC motor: several of its parameters are non-linear, especially the rotor resistance, so controlling an induction motor is more complicated than controlling a DC motor. The speed of an induction motor depends on the applied load. In industrial applications the load changes on an induction motor are very large, and the motor has to work over a wide range of speeds and torques unaffected by these load changes, so a controller is needed that can maintain the speed of the induction motor while the load changes. Automatic control plays an important role in industry; proper control can improve quality and production speed and lower production costs.


Controllers that can be used include the conventional PI (Proportional Integral), PID (Proportional Integral Derivative) and IP (Integral Proportional) controllers. In addition, several studies have used intelligent controllers for the regulation of induction motors, namely fuzzy controllers, neural fuzzy controllers and ANN (Artificial Neural Network) controllers. The PI controller has the advantage that it can bring the steady-state error close to zero because it contains an integrator [2][4][10], but it produces overshoot and undershoot, so an accurate determination of its parameters is required. Several methods can be used to determine these parameters, such as trial and error, evolutionary-technique-based searching and the Ziegler-Nichols method [2][7]. Fuzzy logic and neural networks have been widely studied and used for the speed control of induction motors. The obstacle encountered with a fuzzy controller is that it is very difficult to determine the shape and location of the membership functions and the rules that correspond to the desired control behaviour. An ANN is suitable for the non-linearities caused by load or parameter changes and is therefore suitable for induction motors, as has been shown in [8]. The use of neural fuzzy control for induction motor speed control produces good performance; the number of layers determines the performance of the system, and a three-layer neural fuzzy controller performs better than a two-layer one [5]. Compared with the conventional PI controller, using fuzzy logic for induction motor speed control gives better dynamic performance [1][2][3][4][10]. This paper compares the performance of the neural fuzzy and PI controllers for the speed control of an induction motor. The algorithms are simulated using MATLAB/Simulink. The controllers are tested under no-load conditions and load changes, where the motor speed is expected to remain constant, and under set-point changes, which the controllers are expected to follow.


II. Induction Motor Model
In general, the induction motor has two main parts: the stator and the rotor. The stator is the stationary part, around which the main stator coils are mounted; these produce the rotating field that drags the rotor into rotation. The rotor is the moving part of the induction motor; it also carries magnetic field coils, but fewer in number, connected in parallel to form a cage, usually called the rotor cage. The working principle of an induction motor is similar to that of a transformer, because the transformer works by electromagnetic induction; the induction motor can therefore be considered as a transformer with a rotating secondary circuit. In this research a squirrel-cage induction motor is used. The torque generated in an AC motor is the result of the interaction between current and flux. In the induction motor, power is supplied only through the stator, so the current that produces torque/speed and the current responsible for producing flux cannot be observed as two separate signals. Mathematically, however, with the Field Oriented Control method the changes in flux and torque can be separated into two mutually dependent signals. Furthermore, the alternating current (AC) quantities are vector variables that change with time and can be analyzed more easily using complex numbers with two coordinate axes, the real and the imaginary axis. The dq-axis (direct and quadrature axis) equations are therefore used: the three-phase variables (voltages, currents and fluxes) of the induction motor are transformed into two-axis (dq) variables [5]. The equivalent circuit of the induction motor in the stationary reference frame is shown in Figure 1. The dq equations of the induction motor model are given as follows:

\frac{d\psi_{qs}}{dt} = \omega_b \left[ v_{qs} + \frac{r_s}{x_{ls}}\left(\psi_{mq} - \psi_{qs}\right) \right]    (1)

\frac{d\psi_{ds}}{dt} = \omega_b \left[ v_{ds} + \frac{r_s}{x_{ls}}\left(\psi_{md} - \psi_{ds}\right) \right]    (2)

\frac{d\psi_{qr}}{dt} = \omega_b \left[ v_{qr} + \frac{\omega_r}{\omega_b}\psi_{dr} + \frac{r_r}{x_{lr}}\left(\psi_{mq} - \psi_{qr}\right) \right]    (3)

\frac{d\psi_{dr}}{dt} = \omega_b \left[ v_{dr} - \frac{\omega_r}{\omega_b}\psi_{qr} + \frac{r_r}{x_{lr}}\left(\psi_{md} - \psi_{dr}\right) \right]    (4)

\psi_{mq} = x_M \left( \frac{\psi_{qs}}{x_{ls}} + \frac{\psi_{qr}}{x_{lr}} \right)    (5)

\psi_{md} = x_M \left( \frac{\psi_{ds}}{x_{ls}} + \frac{\psi_{dr}}{x_{lr}} \right)    (6)

i_{qs} = \frac{1}{x_{ls}}\left(\psi_{qs} - \psi_{mq}\right)    (7)

i_{qr} = \frac{1}{x_{lr}}\left(\psi_{qr} - \psi_{mq}\right)    (8)

i_{ds} = \frac{1}{x_{ls}}\left(\psi_{ds} - \psi_{md}\right)    (9)

i_{dr} = \frac{1}{x_{lr}}\left(\psi_{dr} - \psi_{md}\right)    (10)

T_{em} = \frac{3}{2}\,\frac{p}{2}\,\frac{1}{\omega_b}\left(\psi_{ds} i_{qs} - \psi_{qs} i_{ds}\right)    (11)

\frac{2 J \omega_b}{p}\,\frac{d(\omega_r/\omega_b)}{dt} = T_{em} + T_{mech} - T_{damp} \;\; (\mathrm{N\cdot m})    (12)

2H\,\frac{d(\omega_r/\omega_b)}{dt} = T_{em} + T_{mech} - T_{damp}, \qquad H = \frac{J\,\omega_{bm}^{2}}{2 S_b}    (13)

Where:
ψ_ij : flux linkage
v_qs, v_ds : stator voltages on the q and d axes
v_qr, v_dr : rotor voltages on the q and d axes
ψ_mq, ψ_md : magnetizing flux linkages on the q and d axes
r_s, r_r : stator and rotor resistances
x_ls, x_lr : stator and rotor leakage reactances
i_qs, i_ds : stator currents on the q and d axes
i_qr, i_dr : rotor currents on the q and d axes
p : number of poles
J : moment of inertia
T_em : electromagnetic torque
T_mech : mechanical torque
ω_e : stator angular frequency
ω_b : base angular frequency
ω_r : rotor angular speed
ω_bm : base mechanical angular frequency
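For reference, the right-hand side of Eqs. (1)-(13) can be coded directly; a minimal Python sketch with the case-study parameters is given below. The definition of x_M as the parallel combination of x_m, x_ls and x_lr is an assumption (it is the usual one for this flux-linkage model), and the numerical integration (e.g. a small-step Euler or Runge-Kutta loop) is left to the reader.

import numpy as np

# Case-study machine parameters (ohms at the 60 Hz base).
rs, rr = 7.13, 8.18
xls, xlr, xm = 9.45, 9.45, 189.65
wb = 2.0 * np.pi * 60.0                              # base electrical angular frequency
xM = 1.0 / (1.0 / xm + 1.0 / xls + 1.0 / xlr)        # assumed definition of x_M

def dq_derivatives(state, vqs, vds, Tmech, H, vqr=0.0, vdr=0.0, Tdamp=0.0, p=2):
    # state = [psi_qs, psi_ds, psi_qr, psi_dr, wr/wb]
    psi_qs, psi_ds, psi_qr, psi_dr, wr_pu = state
    psi_mq = xM * (psi_qs / xls + psi_qr / xlr)                           # Eq. (5)
    psi_md = xM * (psi_ds / xls + psi_dr / xlr)                           # Eq. (6)
    iqs = (psi_qs - psi_mq) / xls                                         # Eq. (7)
    ids = (psi_ds - psi_md) / xls                                         # Eq. (9)
    dpsi_qs = wb * (vqs + rs / xls * (psi_mq - psi_qs))                   # Eq. (1)
    dpsi_ds = wb * (vds + rs / xls * (psi_md - psi_ds))                   # Eq. (2)
    dpsi_qr = wb * (vqr + wr_pu * psi_dr + rr / xlr * (psi_mq - psi_qr))  # Eq. (3)
    dpsi_dr = wb * (vdr - wr_pu * psi_qr + rr / xlr * (psi_md - psi_dr))  # Eq. (4)
    Tem = 1.5 * (p / 2.0) / wb * (psi_ds * iqs - psi_qs * ids)            # Eq. (11)
    dwr_pu = (Tem + Tmech - Tdamp) / (2.0 * H)                            # Eq. (13)
    return np.array([dpsi_qs, dpsi_ds, dpsi_qr, dpsi_dr, dwr_pu])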


Figure 1. Equivalent circuit of the d-q induction motor model in the stationary reference frame.

The motor used in this case study is rated 1.5 HP, 220 V, two-pole, 60 Hz, with the following parameters: stator resistance Rs = 7.13 Ω, rotor resistance Rr = 8.18 Ω, stator leakage reactance Xls = 9.45 Ω, rotor leakage reactance Xlr = 9.45 Ω and mutual reactance Xlm = 189.65 Ω.

Figure 2. Block diagram of the PI controller.

III. Design of PI Controller
One of the most widely used conventional controllers is the PI controller. It has the advantage that it can bring the steady-state error close to zero because it contains an integrator [2][4]. In addition, because it has no derivative component, the system is more stable against disturbances in steady-state conditions. The weakness of the PI controller is that it produces overshoot and undershoot in the system response; furthermore, compared with a PID controller, a PI controller is slower to reach the set point and responds more slowly when a disturbance occurs [4]. According to [1][2][3][4], fuzzy logic used for induction motor speed control gives better dynamic performance and transient response; compared with a PI controller, the settling time and the efficiency and reliability of the induction motor improve with a fuzzy controller. To overcome the weaknesses of the PI controller and improve its performance, appropriate controller parameters must be determined. The PI controller parameters consist of a proportional constant (Kp) and an integral constant (Ki). The PI controller block diagram is shown in Figure 2, and mathematically the PI controller output can be expressed as [6]:

u(t) = Kp e(t) + Ki ∫ e(t) dt          (14)
u(t) = Kp e(t) + (Kp / Ti) ∫ e(t) dt   (15)
e(t) = SP(t) - RP(t)                   (16)

where SP is the reference speed and RP is the actual speed of the induction motor.

The PI controller parameters can be determined by several methods, such as trial and error, evolutionary-technique-based searching and the Ziegler-Nichols method [2]. This study uses the Ziegler-Nichols method, which proposes rules to determine the values of Kp, Ti and Td based on the transient response characteristics of the given plant. The first Ziegler-Nichols method determines Kp, Ti and Td from a graphical analysis of the S curve, as shown in Figure 3. The intersections of the straight lines are constructed on the linear part of the S-shaped system response; accuracy in constructing these intersections is important because it determines the parameters T and L on which the controller settings are based [7]. Using the Ziegler-Nichols formulas shown in Table 1, the values Kp = 0.6 and Ti = 6 were obtained (a small numerical sketch of these rules is given after Table 1 below).


Figure 3. S curve for the graphical analysis of the Ziegler-Nichols method.

Table 1. Ziegler-Nichols tuning formulas.
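A minimal sketch of the tuning step and of the control law of Eqs. (14)-(16) is given below. The first-method Ziegler-Nichols PI rules (Kp = 0.9T/L, Ti = L/0.3) are the standard textbook rules and the 0-10 saturation of the control signal is taken from the control-signal axes of the result figures; both are assumptions, not values quoted from the paper's Table 1.

def zn_pi_gains(T, L):
    # Standard first-method Ziegler-Nichols rules for a PI controller,
    # given the S-curve parameters T (time constant) and L (delay).
    return 0.9 * T / L, L / 0.3          # Kp, Ti

class PI:
    # Discrete form of Eqs. (14)-(16): u = Kp*e + (Kp/Ti) * integral(e).
    def __init__(self, Kp, Ti, dt, u_min=0.0, u_max=10.0):
        self.Kp, self.Ti, self.dt = Kp, Ti, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, sp, rp):
        e = sp - rp                                            # Eq. (16)
        self.integral += e * self.dt
        u = self.Kp * e + (self.Kp / self.Ti) * self.integral  # Eq. (15)
        return min(max(u, self.u_min), self.u_max)             # clamp to drive range

# Example: the gains reported in the paper, updated every 1 ms.
controller = PI(Kp=0.6, Ti=6.0, dt=0.001)
u = controller.update(sp=800.0, rp=780.0)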

IV. Design of Neural Fuzzy Controller
The neural fuzzy controller is an intelligent controller that has been widely studied and used for the speed control of induction motors. Neural fuzzy control is a combination of a neural network and fuzzy logic. Fuzzy logic uses the physical properties of the plant, but it has no learning process, and the choice of membership functions and rules is critical for the system performance. The neural network, on the other hand, incorporates learning and plant identification, but for control tasks that require a fast process this method needs a longer time [9]. Neural networks are most appropriate for non-linear systems such as induction motors because of their learning capability. According to [8], neural networks and fuzzy logic have several properties that make them an alternative for induction motor control, among them a highly parallel structure when the network uses a larger number of layers, and the computational simplicity of the neurons required in each network layer. The number of layers determines the performance of the neural fuzzy controller: the more layers, the better the performance. According to [5], a three-layer neural fuzzy controller gives better dynamic performance and steady-state error than a two-layer one. This study uses a three-layer neural fuzzy network with 50 neurons. The block diagram of the neural fuzzy controller is shown in Figure 4.


Figure 4. Block diagram of the neural fuzzy controller.

The controller signal that regulates the motor speed is a combination of the output control signals of the fuzzy logic and the neural network. The neural controller has two inputs, the set point and the system response Y(k), and one output control signal, while the fuzzy controller takes the error and the change of error as inputs and also produces one output control signal. The control signal used to drive the induction motor, U(k), is

U(k) = U_neural + U_fuzzy    (18)

The neural network controller is designed as a multilayer perceptron with error back-propagation. The network has an input layer fed by the set point and the system response, an output layer producing the control signal, and one or more hidden layers. The activation function of the input and hidden layers is the logarithmic sigmoid, while the output neuron uses a linear activation function. The fuzzy controller consists of two input variables and one output variable. The input variables are the error (E) and the change of error (CE): the error is the difference between the desired speed reference (set point) and the actual motor speed, converted to a voltage scale, and the change of error is the difference between the current error and the previous error. The membership functions of the input and output variables were determined by trial and error based on experience; the larger the number of membership functions, the finer the control action but the longer the processing time. Each input variable has three membership functions, of trapezoidal and triangular shape. The fuzzy logic rules are shown in Table 2 (a minimal sketch of evaluating this rule base is given after this section).

Table 2. Rules of the fuzzy logic

  e \ Δe     N    Z    P
  N          K    K    K
  Z          M    M    M
  P          P    P    P

V. Simulation Using MATLAB
The simulations were carried out in MATLAB/Simulink. The neural fuzzy controller was designed using the fuzzy toolbox available in Simulink, with the neural network algorithm written in an m-file and embedded into the Simulink model. The Simulink model of the neural fuzzy controller is shown in Figure 5, while the PI controller was built directly in Simulink and is shown in Figure 6.
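The sketch below evaluates the rule base of Table 2 for a given error and change of error. The normalized universes, the membership-function breakpoints and the singleton output centres for the labels K, M and P are assumptions made for illustration; the paper determines its membership functions by trial and error.

import numpy as np

def mf(x, pts):
    # Piecewise-linear membership function defined by (x, mu) breakpoints.
    xs, mus = zip(*pts)
    return float(np.interp(x, xs, mus))

# Assumed N/Z/P membership functions on normalized universes [-1, 1].
E_MF  = {"N": lambda e: mf(e, [(-1, 1), (0, 0), (1, 0)]),
         "Z": lambda e: mf(e, [(-1, 0), (0, 1), (1, 0)]),
         "P": lambda e: mf(e, [(-1, 0), (0, 0), (1, 1)])}
DE_MF = E_MF                                   # same shapes assumed for the change of error

RULES = {("N", "N"): "K", ("N", "Z"): "K", ("N", "P"): "K",
         ("Z", "N"): "M", ("Z", "Z"): "M", ("Z", "P"): "M",
         ("P", "N"): "P", ("P", "Z"): "P", ("P", "P"): "P"}
OUT_CENTRE = {"K": 0.0, "M": 0.5, "P": 1.0}    # assumed singleton output centres

def fuzzy_output(e, de):
    # Weighted-average (Sugeno-style) defuzzification of the Table 2 rule base.
    num = den = 0.0
    for (le, lde), lout in RULES.items():
        w = min(E_MF[le](e), DE_MF[lde](de))   # rule firing strength (min t-norm)
        num += w * OUT_CENTRE[lout]
        den += w
    return num / den if den > 0 else 0.0

print(fuzzy_output(0.4, -0.1))                 # example evaluation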


Figure 5. Design of Simulink Model For Neural Fuzzy


Figure 6. Design of the PI controller Simulink model.

Figure 8. Speed response with the PI controller under random load change ((a) speed response, (b) control signal).

VI. Result Simulation and Discussion
The simulations were performed using MATLAB Simulink under several conditions: the no-load condition, random load changes, and set-point changes. From the simulation results, the system performance using the neural fuzzy and PI controllers is compared. Figures 7 to 9 show the speed response of the induction motor and the control signal generated by the PI controller at no load, under random load and under set-point changes.

Figure 9. Speed response with the PI controller under reference variation ((a) speed response, (b) control signal).


Figure 7 shows the speed response of the induction motor without load: the system response has an overshoot of 0.875% with a steady-state error of 0.6%. With a reference speed of 800 rpm, the response reaches steady state with a settling time of 2.7 s. The PI controller can maintain the speed according to the reference even under random load changes; the controller follows the load changes with the appropriate control signals, as shown in Figure 8. Figure 9 shows the speed response under set-point changes: the controller follows the changes by providing the appropriate control signals, and the steady-state error is the same as in the no-load condition.

Figure 7. Speed response with the PI controller at no load ((a) speed response, (b) control signal).


Figure 10 shows the speed response of the induction motor without load using the three-layer neural fuzzy controller. With a reference speed of 800 rpm, the system response has no overshoot, a steady-state error of 1.9% and a settling time of 1.8 s. The neural fuzzy controller can maintain the speed corresponding to the reference even under random load changes; the controller follows the load changes with the appropriate control signals, as shown in Figure 11. However, the neural fuzzy controller cannot keep up with set-point changes, as shown in Figure 12. Compared with the PI controller, the neural fuzzy controller gives a system response with a better settling time, but the steady-state error of the PI controller is smaller than that of the neural fuzzy controller. Under load changes and set-point changes, the PI controller has better performance, with a smaller steady-state error than the neural fuzzy controller.


Figure 10. Speed response with the neural fuzzy controller at no load ((a) speed response, (b) control signal).


Figure 11. Speed response with the neural fuzzy controller under random load change.


VII. Conclusion

The speed control of an induction motor with the neural fuzzy technique and with a PI controller has been investigated in this paper. Comparing the speed responses of both controllers, the neural fuzzy controller shows good performance and settling time at no load, but the PI controller has a smaller steady-state error. Under load changes and reference variations the PI controller shows better performance: it can follow the reference changes with a smaller steady-state error than the neural fuzzy controller.


Figure 12. Speed response with the neural fuzzy controller under reference variation.


References
[1] P. Tripura and K. Srinivasa, "Fuzzy Logic of Three Phase Induction Motor Drive", World Academy of Science, Engineering and Technology, Vol. 60, 2011.
[2] H. E. Kalhoodashti and M. Shahbazian, "Hybrid Speed Control of Induction Motor Using PI and Fuzzy Controller", International Journal of Computer Applications, Vol. 30, 2011.
[3] A. M. R. Pahlavani and A. Alshehabi, "Comparison Between Fuzzy and PI Controller Applied to Six Phase Induction Motor", International Power System Conference, Iran, 2011.
[4] B. H. Pundaleek, G. R. Manish and Vijay Kumar, "Speed Control of Induction Motor: Fuzzy Logic Controller v/s PI Controller", International Journal of Computer Science and Network Security, Vol. 10, No. 10, October 2010.
[5] I. P. Ratna, F. Mila and S. Agus, "Neural Fuzzy for Speed Control of Three Phase Induction Motor", International Journal of Computer Science and Network Security, Vol. 10, No. 10, October 2010.
[6] J. C. Basilio and S. R. Matos, "Design of PI and PID Controllers With Transient Performance Specification", IEEE Transactions on Education, Vol. 45, 2002.
[7] Ratna Ika Putri, "Perbandingan Kinerja Kontroler Adaptif Fuzzy dan PID pada Pengaturan Kecepatan Motor Induksi Tiga Fasa", Prosiding Seminar Nasional Teknologi Informasi dan Aplikasinya, Politeknik Negeri Malang, 2011.
[8] Zerikat, Benjebbar and Benouzza, "Dynamic Fuzzy-Neural Controller for Induction Motor Drive", Proceedings of World Academy of Science, Engineering and Technology, Vol. 10, December 2005.
[9] Vasudevan and Arumugam, "High Performance Adaptive Intelligent Direct Torque Control Scheme for Induction Motor Drive", KMITL Sci. Tech. J., Vol. 5, No. 3, Jul.-Dec. 2005.
[10] Z. Ibrahim and E. Levi, "Comparative Analysis of Fuzzy Logic and PI Speed Control in High Performance AC Drives Using Experimental Approach", IEEE Transactions on Industry Applications, Vol. 38, No. 5, September 2002.


Programmable Potentiostat Based ATMEL Microcontroller for Biosensors Application Erry Dwi Kurniawan and Robeth V. Manurung Research Center for Electronics and Telecommunication – Indonesian Institute of Sciences Address: Kampus LIPI, Jl. Sangkuriang Gd 20 Lt 4, Bandung, Indonesia, 40135

Email: [email protected]

Abstract—This paper presents the design and implementation of a programmable potentiostat for biosensor applications. Potentiostats are used in electrochemical techniques to identify, quantify and characterize chemical reactions, such as in biosensor applications. Potentiostats usually tend to be large and expensive; therefore it was desirable to design and build a small and inexpensive potentiostat that could be controlled by a computer. The result of this research is a prototype potentiostat whose reference electrical potential for the biosensor can be controlled and whose biosensor output currents can be recorded using a computer. The resolution of the reference electrical potential is 0.022 volt per bit. The output signal is a linear voltage, proportional to the changes of current over the range 0-100 uA.

Keywords-potentiostat, microcontroller, biosensor

I. INTRODUCTION

A potentiostat is the electronic hardware required to control a three-electrode cell and run most electro-analytical experiments [1]. This equipment is fundamental to modern electrochemical studies using three-electrode systems for investigations of reaction mechanisms related to redox chemistry and other chemical phenomena. Potentiostats are used in electrochemical techniques to identify, quantify and characterize chemical reactions, such as in biosensor applications. Potentiostats usually tend to be large and expensive. Most early potentiostats could function independently, providing data output through a physical data trace. Modern potentiostats are designed to interface with a personal computer and operate through a dedicated software package. The automated software allows the user to shift rapidly between experiments and experimental conditions; just as importantly, the computer allows data to be stored and analyzed more effectively, rapidly and accurately than with historic methods. Therefore, it was desirable to design and build a small and inexpensive potentiostat that could be controlled using a computer.

II. BASIC CONCEPT OF POTENTIOSTAT

Most biosensor electrode configurations use the three-electrode technique. This technique is used to avoid voltage drops at the electrodes and is more stable. In a three-electrode biosensor, each electrode has a specific use. The working electrode (WE) responds to the target, creating a current flow that is proportional to the chemical concentration; this current must be supplied to the sensor through the counter electrode. The reference electrode (RE) is used by the potentiostatic circuit to maintain a fixed potential at the working electrode: the working electrode potential must be held at the same potential as the reference electrode for unbiased sensors, or with an offset for sensors that require biasing. The auxiliary/counter electrode (CE) completes the circuit with the working electrode, reducing some chemical species (normally oxygen) if the working electrode is oxidizing; the potential on this electrode is not important, as long as the potentiostat circuit can provide sufficient voltage and current to maintain the working electrode at the same potential as the reference electrode [2]. The potentiostat has two tasks: to measure the potential difference between the working electrode and the reference electrode without polarizing the reference electrode, and to compare this potential difference with a preset voltage and force a current through the counter electrode towards the working electrode in order to counteract the difference between the preset voltage and the existing working electrode potential. Fig. 1 shows the basic concept of a three-electrode amperometric electrochemical sensor and a potentiostat. A potentiostat is therefore capable of imposing an electrical potential waveform across a working electrode relative to a reference electrode and measuring the resulting current through the cell at a third electrode [3].

Fig. 1 Basic concept of a three-electrode amperometric electrochemical sensor and a potentiostat [4]

In commercially available potentiostats, the potential difference between the reference electrode and the counter electrode can only be set manually. Moreover, they tend to be large and expensive (prices over several hundred million rupiahs), and they also tend to be laboratory-based rather than field-portable for on-site measurement [3]. For this reason, it was desirable to design and build a small and inexpensive programmable potentiostat.

III. DESIGN

In this research, we present a design for this potentiostat using an ATMEL microcontroller (ATmega32), an external digital-to-analog converter (DAC), and a series of operational amplifiers (op-amps). We utilize the USART (Universal Synchronous/Asynchronous Receiver Transmitter) for communication between the microcontroller and the computer. A block diagram of the programmable potentiostat system is presented in Fig. 2.

Fig. 2 Block diagram of the system of programmable potentiostat

A. Hardware Design
The following is a brief description of the major components used in the hardware design of the system.

1. Biosensor
A biosensor can be generally defined as a device that consists of a biological recognition system, often called a bio-receptor, and a transducer. The interaction of the analyte with the bio-receptor is designed to produce an effect measured by the transducer, which converts the information into a measurable effect such as an electrical signal. The illustration in Fig. 3 describes the basic concept of a biosensor.

Fig. 3 Basic concept of biosensor

2. D/A Converter
The purpose of the D/A Converter is mainly to convert the digital output of the microcontroller into an analog voltage. The D/A Converter used in this research is the DAC0808, an 8-bit converter. The output of the DAC is a current, so an amplifier must be connected to convert the current into a voltage. The schematic of the D/A Converter is shown in Fig. 3.

Fig. 3 Schematic of D/A Converter using DAC0808

3. Signal Processing
The output of the D/A Converter is connected to a level-shifter circuit, shown in Fig. 4. The output voltage, Vref, of the level-shifter circuit is a function of the input voltages at its negative and positive terminals (i.e. the DC offset voltage and the D/A Converter output).

Fig. 4 Schematic of level-shifter amplifier

The voltage range and the polarity of the output can be changed by adjusting the potentiometer settings (VR2D and VR3D) connected to the amplifier. An illustration of the output voltage of the level shifter is shown in Fig. 5.

Fig. 5 Output voltage of DAC0808 and level-shifter amplifier

The output of the level shifter is the reference voltage for the biosensor. Because of the chemical reaction at the biosensor, an output current is generated; this current has to be converted into a voltage as an input to the microcontroller. The input voltage range of the microcontroller ADC is 0-5 volt (using Vcc as the ADC reference). The potentiostat was specified for a 0-100 microampere input from the biosensor, so it was designed with a gain of 10000 for the current-to-voltage (I/V) converter and a gain of 5 for the inverting amplifier. The signal processing circuit for this purpose is shown in Fig. 6.
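A quick full-scale check of the stated gains, restating the numbers given above:

# 100 uA through the 10 000 V/A I/V stage gives 1 V, and the gain-of-5
# inverting stage brings this to the 5 V ADC full-scale input.
i_max = 100e-6             # A, stated sensor range
v_iv  = i_max * 10_000     # 1.0 V at the I/V converter output
v_adc = v_iv * 5           # 5.0 V at the ADC input (0-5 V range)
print(v_iv, v_adc)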


Fig. 6 Signal processing circuit

4. Microcontroller Unit
The microcontroller unit (MCU) in the system is a miniaturized computer that is programmed to execute the behavior of the potentiostat. An ATMEL ATmega32 microcontroller was used in this research; it has a built-in 10-bit analog-to-digital converter (ADC) for measuring the analog output voltage of the signal processing circuit. The algorithm of the microcontroller program is described in the Software Design section.

5. Power Supply
The system is designed with a 5 volt DC supply for the microcontroller and +12/-12 volt supplies for the operational amplifiers. The AC mains power is converted to DC using a diode rectifier and regulated with an LM7805 for the 5 volt rail and an LM7812/LM7912 pair for the +/-12 volt dual supply.

B. Software Design
The following is a brief description of the algorithm of the microcontroller program. The ATMEL ATmega32 microcontroller was programmed in the C language. Fig. 7 shows the flowchart of the microcontroller program. The parameters that can be controlled from the computer are the reference voltage of the potentiostat and the period of time. The microcontroller therefore first initializes all the needed functions, such as the USART for serial communication, the ADC for reading the analog voltage from the sensor, the timer, and the PORT register that drives the input of the D/A converter.

The microcontroller then waits for input parameters from the user. Once all parameters have been entered from the computer, the potentiostat begins to operate. The applied voltage and the measured current are recorded and displayed by the software on the computer. Upon completion, the system remains idle and waits for another command from the user.

Fig. 7 Flowchart of microcontroller program

IV. TEST AND RESULTS

The results of the system testing are described below.

A. Digital-to-Analog Testing
The testing was done by applying 8-bit digital inputs from the microcontroller port to the DAC circuit. With 8 bits, the microcontroller output varies from 0 to 255. The maximum output voltage of the DAC circuit was set to 5 volt. The result is shown in Fig. 8.

Fig. 8 Result of DAC circuit test

From Fig. 8, the graph can be fitted with the equation Y = 0.022X + 0.0015. It shows a proportional relationship between the DAC code from the microcontroller and the output of the DAC circuit. The output is linear and has a resolution of 0.022 volt per bit with an offset of approximately 0.0015 volt. For a finer resolution, a DAC with more bits, such as 14 bits or more, is needed.

170

B. Signal Processing Circuit Testing The testing was conducted by giving input current from current source to the signal processing circuit. The result of this testing was shown in Fig 9 and Fig 10. From graph in Fig 9, it shows a proportional relationship between input current and output voltage. The output signal of I/V Converter was linear and proportional with changes of current. The output of inverting-amplifier was linear and proportional with the input from output of I/V converter.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

0

-0.2

Voltage (V)

-0.4

-0.6

-0.8

-1

-1.2

-1.4

0

10

20

30

40

50 60 Current (uA)

70

80

90

100

Fig. 8 Characteristic of I/V Converter circuit 6

Output Voltage (V)

5

ISSN: 2088-6578

appear and if it wasn’t connected, we have to check connection and select suitable port. In this software, we could select type of techniques that we will apply. There were two types: cyclic voltametry and amperometry. In amperometry, reference voltage that was applied to biosensor was constant. We can set this voltage from -2.5 to 2.5 volt for a period of time. In cyclic voltametry, reference voltage that was applied to biosensor was a triangular waveform. We can set the initial voltage, the end voltage, and the period of time. The output of biosensor can be read by microcontroller then the data will be sent to the computer. The data will be displayed on data logger and recorded in computer.

4

V.

3

2

1

0 -1

-0.9

-0.8

-0.7

-0.6 -0.5 -0.4 Input Voltage (V)

-0.3

-0.2

-0.1

0

Fig. 9 Characteristic of Inverting Amplifier circuit C. Software A screenshot of software potentiostat is shown in Fig. 10. Software of potentiostat was build using Java NetBeans IDE 6.9.1.

CONCLUSION

The programmable potentiostat developed in this paper have shown the desired analytical range and satisfactory precision in measurements. Electrical potential of potentiostat have resolution 0.022 volt per bit. The output signal was linear and proportional with changes of current with range 0-100uA. It is easy to use. It provide innovative and inexpensive solution in biosensor application which reducing the size and cost of potentiostat. It was portable and suitable for use in laboratory-basedin universities or field research.Beside for biosensor measurement, it could be used for electro polymerization process with cyclic voltametry techniques.

ACKNOWLEDGMENT The authors gratefully acknowledge the receipt of a grant from Program Kompetitif Lembaga Ilmu Pengetahuan Indonesia (LIPI) 2012 which enabled us to carry out this work. REFERENCES [1] [2] [3]

[4]

[5]

[6]

Fig.10 Display of software of programmable potentiostat First, we start communication between potentiostat and personal computer with connecting port COM at personal computer. If it was successfully connected, dialog message (“Connected Successfully”) will be

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

[7]

A.J. Bard and L.R. Faulkner, “Electrochemical Methods, Fundamental and Applications”, John Wiley & Sons, 1980. Alphasense Application Note 105-03, “Designing a Potentiostatic Circuit”, Alphasense Limited, 2009. Gopinath A.V. and Russell D., “An Inexpensive Field-Portable Programmable Potentiostat”, The Chemical Educator, vol. 11, no.1, 2006. “Current-Mirror-Based Potentiostats for Three-Electrode Amperometric Electrochemical Sensors”, IEEE Transaction on Circuits and Systems-1:Regular Papers, vol. 56, no.7, July 2009. V. G. Sangam and B.M. Patre, “Performance Evaluation of an Amperometric Biosensor using a Simple Microcontroller based Data Acquisition System”, International Journal of Biological and Life Sciences 2:1 2006. P.R.H. Rodriguez, C.A.G. Vidal, A.M. Perez, S.A. Sanroman, “Measuring System for Amperometric Chemical Sensors Using the Three-Electrode Technique for Field Application”, Journal of Applied Research and Technology, vol.1, no. 002, 2003, pp.107113. Friedman, E. and Hartoto A., “Low-Cost Portable Potentiostat for Biosensing Applications ”. Cornell University.

171

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Dummy State: An improvement model for hybrid system Sumadi1,2), Kuspriyanto1), Iyas Munawar1) 1)

2)

School of Electrical Engineering and Informatics (STEI – ITB) Ganesha 10 St., Bandung 40142 Email: [email protected]; [email protected];

Department of Electrical Engineering, University of Lampung (Unila) Prof. Sumantri Brojonegoro No. 1 St., Bandarlampung 35145 Email: [email protected]

Abstract—In the paper we proposed an improvement model for hybrid system in Hybrid Automata formalism. In Hybrid system, dynamical system is represented by several operational modes. Dummy state is inserted to normally mode in order to make the hybrid system reliable whether some typical faults took place. Dummy state represents the hybrid system in the present of fault. The implementation uses Mixed Logical Dynamical (MLD) model to represent the hybrid system behavior. Keywords-component; hybrid system; operational mode; dummy state; reliable; MLD.

I.

INTRODUCTION

Hybrid system is interpreted as a system that contains interaction between continuous subsystem and discrete event subsystem[1]. In this terminology the model of hybrid system should contain two subsystems. Some models have been introduced by many researchers. The first approach is Hybrid Automata (HA). In this approach hybrid system is considered like finite state machines by associating with each discrete state a continuous state model[2]. The second one is the Switched System in which hybrid system look likely a group of controllers activated in certain time in term of supervisory control system. It is extended to Piecewise linear (PWL) or Piecewise Affine (PWA) Model where there is jumping condition in a certain time in which some parts of system considered as linear systems [3]. Time Automata[4][6] and Time Petri Nets[5][6] are introduced by finite state machines approach that they are focused on jumping condition and reachability analysis. Discretely Controlled Continuous System also is used to express the controlling continuous system via a purely discrete event controller[7]. Mixed Logical Dynamical (MLD) Model used inequalities in proportional calculus logic to define the transition between states[8]. The latest model proposed is Complementarity Model, in which based on the complementarily between output vector (y) and input vector (u) considered are equal in length [9]. Finally, the latest model is Hybrid Inclusions (HI). HI is a result of natural extension of differential inclusion in the sense that invariants, guards, and reset are important properties

172

in hybrid phenomena[10]. For the time being, close to the application of hybrid system, Mixed Logical Dynamical (MLD) model is the most appear model used by researchers. In 2001, the conversion technique between some models was available in [11]. This assists us to select suitable model related to hybrid system application. We organize the paper in the following sequences. Part I is an introduction to models for hybrid system. In part II, we extend hybrid system in their properties both in control theory and computer science perspective. Our propose model for hybrid system with dummy state is explained in part III. Finally in part IV, we present the discussion and future work which support this research. II. HYBRID SYSTEM PROPERTIES Hybrid system designed requires the understanding the complex interaction between discrete dynamic and continuous dynamic[1]. A proper modeling format must involve (at least) the abstraction of the changing between continuous values (temperatures, positions, velocities, currents, voltages, etc.) and discrete-signals (operation mode, position of switch, alarm on or off, etc.).

Figure 1. Hybrid dynamical system, source: [1]

There are six types of signals in Figure 1., i.e.: y(t) is a continuous output signal; w(t) is discrete output signal; x(t) is a continuous (n-dimensional) state vector; q(t) is discrete state; u(t) is a continuous input signal; and v(t) is discrete input signal. There are four points to characterize hybrid system[12] i.e.: (1) discrete event make system in continuous subsystem stages and intermitted leap change. However, the continuous subsystem takes evolution in a period of time; (2) when the continuous variable subsystem close

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

to threshold, the system jump to other mode. The threshold is defined as boundary surface in continuous variable subsystem; (3) the characteristic of continuous variable in every mode is different; (4) the moving between mode can be sequence, selection, and concurrency. Bracnikey et al., 1994 in [13] had classified the hybrid system. The classification is based on the cause transition between the discrete states in hybrid system which made the difference behavior. TABLE 1. HYBRID SYSTEM CLASSIFICATION Category Autonomous Switching Autonomous Jump (reset) Controlled Switching Controlled jump (reset)

Explanation Flow condition change on hitting specified region border Continuous state change on hitting specified region border Follow condition change in response to control command Continuous state change in response to a control command Source: [13]

In order to get fully understanding the hybrid system behavior, we inform hybrid system properties. One property comes from control theory perspectives in part A and the other one comes from computer science perspective in part B. A. Control Theory Perspective 1) Dynamical system The dynamic of hybrid system can be frequently found in state space representation. The system behavior is represented by a set of differential or difference equations and it depends on the time of observation. The system representations such as Differential or Difference Equations (DE), Signal Flow Graph (SFG), Transfer Function (TF), Block Diagram (BD) are changeable to state space representation by using specific techniques and vice versa[14]. Recently other representation, e.g., Bond Graph (BG), is also interested representation to describe the system behavior in the presence of fault [15]. 2) Tool and Analysis Control Tool and analysis which are available for control system design can also be implemented for hybrid system. The selected control technique, e.g.: PID control, adaptive control, optimal control, non-linear control, intelligence control (fuzzy control, artificial neural nets, genetic algorithm), can be used depends on the wanted behavior in certain mode of hybrid system. As early expressed, hybrid system has multi modes, a control scheme in each operational mode could be different. Controllability [16] and Observability [17][18] analysis are important aspect to hybrid system. These analyses give a chance and the possibilities to control hybrid system. Meanwhile, the discontinuity and stability analysis are also still crucial and important issues in

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

ISSN: 2088-6578

hybrid system. Interested reader can read two papers in [19] and [20].

3) Mixed Logical Dynamical (MLD) Model Hybrid system in Hybrid Automata class can be transform to Mixed Logical Dynamical (MLD) Model[8]. MLD Model is produced by translating propositional logic into mixed integer inequalities. MLD is a framework for modeling and control system described by interacting physical laws, logical rules, and operating constraints. It was proposed by Bemporad, A. in 1999. The hybrid dynamical system represented as + + 1 = = +

(1a)

where: q (integer number) represents an operational mode or discrete state; x(k) is a continuous evolution of system. It can be expressed as: +1 = =

+

+ +

+

+ +

δ δ

+

+ +



(2a) (2b) (2c)

where x(k) can contain both real and Boolean (i.e. 1 or 0) components (y(k) and u(k) have a similar structure), and where z(k) and δ(k) are respectively real-valued and Boolean auxiliary variables. The inequalities in (2c) have to be interpreted component-wise. B. Computer Science Perspective There are four properties in Computer Science Perspective to explain the hybrid system behavior, i.e.: sequence, safety, liveness, and ensemble [21][22]. 1) Sequence Properties (Signal) ≡set of all piecewise continuous signals : 0, # → %& , # ∈ 0, ∞) * ≡set of all piecewise continuous signals +: 0, # → *, # ∈ 0, ∞) a) Sequence property ≡ ,: * → -./ 0, 123405 e.g.: ./ 0, + . ∈ -1,35, . ≥ . + 3 , ∀. , +, = 6 12340, :.ℎ0/<=40 A pair of signal: +, ∈ * satisfies p if p(q,x)=true A hybrid automaton H satisfies p if p(q,x)=true, for every solution (q,x) of H b) Sequence analysis ≡Given a hybrid automaton H and a sequence property p Show that H satisfies p, when this is not the case, find a witness such that p(q,x)=false (q,x)∈ *

173

ISSN: 2088-6578

Yogyakarta, 12 July 2012

In general for solution starting on given set of initial states, >? ⊂* %& 2) Safety properties Given a signal, : 0, # → %& , ∗ : 0, # ∗ → %& is called a prefix to x if ∗ # ≤ #& ∗ . = . ∀. ∈ 0, # ∗ Safety properties ≡a sequence properties p that is a) Nonempty,i.e., ∃ +, such that p(q,x) = true b) Prefix closed, i.e. given signals (q,x) , +, → , + ∗ , ∗ For every prefix (q*,x*) to (q,x) c) Limit closed, i.e. given an infinite sequence of signal, (q1,x1),(q2,x2), (q3,x3), etc. Each element satisfying p such that (qk,xk) is a prefix to (qk+1,xk+1), ∀k e.g.: “Something bad never happens.” a) Non trivial b) a prefix to a good signal is always good c) If something bad happens, it will happen in finite time 3) Liveness Properties Given a signal, : 0, # → %& , ∗ : 0, # ∗ → %& is called a prefix to x if ∗ # ≤ #& ∗ . = . ∀. ∈ 0, # ∗ Liveness properties ≡ a sequence property p with the property that for every finite (q*, x*), there is some , such that: +, ∈ * * * a) (q ,x ) is prefix to (q,x) b) (q,x) satisfies p e.g.: ”something good can eventually happen” for any sequence there is a good continuation.

CITEE 2012

A Hybrid automaton H is an 7-tuple > = *, , 1, CD=., CDE, ε, F (3) where * = -+ , … , +H 5 is a finite set of discrete state (control locations); =the continuous state; 1: * %& → %& is a vector field; CD=.⊂ * %& is the set of initial states; K CDE: * → 2J describe the invariants of the locations; L⊆ * * is the transition relations; and K F: L → 2J is the guard condition. The hybrid state of system H is given by +, ∈ * As previously mention in part I, continues subsystem represents the dynamical system in differential/difference equation can be enlarged to hybrid system by extending its continuous dynamics as (1) autonomous switching of dynamics; (2) autonomous state jumps; (3) controlled switching of the dynamics; and (4) controlled state jumps. Continuous dynamics extended is required to define the guard property (G) in Hybrid Automaton H.

B. Dummy State Dummy State at first is found in Finite State Machine (FSM) Design[23], Modulo 6 Counter, for example. The unused states, 110 and 111, included in FSM design to avoid the Modulo 6 Counter enters illegal states. Here, we define the unused state as dummy state, because they are not main states in Modulo 6 Counter. Dummies state are included in the design to avoid the possible fault take place during its operation. The 110 and 111 are set to not to jump to next state if the event is 0. They goes to 000 if the event is 1, therefore it works on the correct sequence.

4) Ensemble Properties If we are able to verify safety and liveness properties, we are able to verify any sequence property. Ensemble properties ≡ property of the whole family of solutions. e.g.: Stability (continuity with respect to initial conditions) is not a sequence property because by looking each solution (q, x) individually we cannot decide the system is stable.

Figure 2. Module 6 Counter

III. HYBRID AUTOMATA AND DUMMY STATE A. Hybrid Automata Dummy state in hybrid system that we propose can be considered as Hybrid Automaton H. For fault application, a hybrid automaton H is 7-tuple. We have not yet used reset-map property because it is not suitable for MLD implementation.

174

Figure 3. Module 6 Counter with dummies state (110,111)

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

C. Dummy State in Hybrid System Model Hybrid system considers dynamical system is combination of several modes in dynamical system. Mode in Hybrid System is equivalent to state in FSM. Mode in Hybrid system represents the system behavior during period of time observed. The system behavior is defined by differential or difference equation. The Jumping between modes is also equivalent to state transition in FSM. We managed to implement dummy state in FSM design to Hybrid System is to avoid the possible faults in Hybrid System.

ISSN: 2088-6578

known in advance and can be modeled by Fault Detection Identification (FDI) technique. In general, the system behavior in faulty mode can be identified using input-output data during previous fault in hybrid system. This method is nearly equivalent to identification process of system. y = f (x f ,u f )

uf

x& f = x f (u, t )

We considered there are two types of fault happened in hybrid system model. Figure 6. Identification system in faulty mode

1) Type I Init ∈ (q0 , x0 )

G (q o , q1 ) G (q1 , q 0 )

q0

q1

γ 0d

λ0 d

x& = f (q o , x) x ∈ Inv(q 0 )

qd

γ 0d

λ1d

x& = f (q d , x)

G (q 0 , q n )

x& = f (q1 , x) x ∈ Inv( q1 ) G (q1 , q n )

x ∈ Inv(q d )

γ nd

λnd

G (q n , q1 )

qn

x& = f (q n , x)

G (q n , q1 )

x ∈ Inv(q n )

Figure 4. Fault in every modes of Hybrid system

In Type I, we consider that fault can happened in every mode of hybrid system, because of incomplete information about the system. The parameters with subscribe in Figure 4., M N and λ N , is considered as guard parameter to make transition occurred from normally mode to faulty mode, and vice versa. 2) Type II Init ∈ (q0 , x0 )

G (q1 , q0 ) q1

γ 1d

x& = f (q o , x) x ∈ Inv (q 0 )

qd

x& = f (q d , x)

G (q 0 , q n )

λ1d

The residual signal has been implemented in Hybrid Observer Design to fault detection system [17]. However, in [15] state that “For hybrid system, inconsistently detected by residual is not necessarily an indication of a fault but may imply a mismatched mode between the system and the model. Mode tracking is essential for health monitoring a hybrid system”. ARR define relation between consistency (c) and threshold as:

1, r (t) < threshold ARR i _c =  i 0, ri (t) ≥ threshold (4) where

ri (t ) = the residual associated with ARR

In fault diagnosis, Certain ARRs provide certain information and not sensitive to the fault. Meanwhile, Uncertain ARRs provide uncertain information and sensitive to the fault.

G (q o , q1 )

q0

Lately, the global ARR (GARRs) has been proposed in [15] to improved ARRs (Analytical Redundancy Relations) as part of FDI technique. ARRs method is based on threshold detection of residual in signal generator technique.

x& = f (q1 , x ) x ∈ Inv(q1 )

The ARRs can be illustrated as follow G (q1 , q n )

x ∈ Inv( q d )

qn

G (q n , q1 )

x& = f ( q n , x )

G (q n , q1 )

x ∈ Inv (q n )

Figure 5. Fault in certain mode of Hybrid system

In Type II, the fault considered happened in a certain mode of hybrid system. D. System Behavior Construction in Faulty Mode There two types to fault known [15] in hybrid system, i.e.: (1) parametric faults, where one or more model parameter are deviating from their nominal value to uncertain value;(2) fault mode, where the faulty state is

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Figure 7. ARR Contribution. Source: [15]

175

ISSN: 2088-6578

Yogyakarta, 12 July 2012

IV. DISCUSSION AND FUTURE WORKS

A. Discussion In the paper, we presented models that have been developed in hybrid system area. We also told some aspects from control theory and computer science perspectives needed to improve a model for hybrid system. Mainly we propose set of dummies state in part III to hybrid system model as important part of system. We consider that inserting the dummy state is one way of solutions to make hybrid system reliable during a normal and at a presence of faults.

[7]

[8]

[9]

[10]

[11]

B. Future Works We manage to formalize a novel model based on Hybrid Automata (HA) perspective with dummy state for dynamical system considered as type I and type II . ARR model based on Bond Graph (BG) offers another opportunity to the dummy state representation. This paper is part of my work to find out “The improvement Technique in Hybrid Observer Design to diagnose the faults in Hybrid System”.

[12]

[13] [14] [15]

[16]

ACKNOWLEDGMENT The writer would like to say thanks to Directorate General of Higher Education (DGHE) – The Ministry of Education and Cultural – Republic of Indonesia (RI) because of the 2011 BPPS scholarship for continuing my study in School of Electrical Engineering and Informatics (STEI-ITB), Bandung Institute of Technology, Indonesia.

[17]

[18]

[19]

REFERENCES [1] [2]

[3] [4] [5]

[6]

176

J. Lunze, F. L. Lagarrigue, Handbook of Hybrid System Control – Theory, Tools, Application, Cambridge University Press, 2009. T.A. Herzinger,”The theory of Hybrid Automata”, 11th Annual IEEE Symposium on Logic in Computer Science, pp. 278 – 292, July 1996. Z. Sun, S.S. Ge. Switched linear system, Springer-Verlag, Berlin, 2005. R. Alur, T. A. Herzinger, A theory of time automata, Theoritical of Computer Science, pp. 183-235, 1994. C. Xia,” Translation methods from timed automata to time petri nets”, IEEE Int. Conf. on Computational Intelligence and Software Engineering, pp. 1-6, December 2009. C.G. Cassandras, S. Lafortune. Introduction to Discrete Event System, 2nd edition, Springer, 2008.

[20]

[21]

[22]

[23]

CITEE 2012

C. Chase, J. Serrano, and P.j. Ramadge, “Periodicity and chaos from Switched Flow System: Contrasting example of Discretely Controlled System”, IEEE Trans. Automatic Control, vol. 38(1), pp. 70-83, 1993. A. Bemporad, M. Morari, ”Control of System Integrating Logic, Dynamic, and Constraints”, Automatica, vol. 35(3), pp. 407-427, 1999. A. J. vatarn der Schaft, J. M. Schumacher, “Complementarity Modelling of Hybrid System”, IEEE Trans. Automatic Control, vol. 43(4), pp. 483-490, 1990. J. P. Aubin, J. Lygeros, M. Quincampoix, S.Sastry, and N. Seube, “Impulse Differenetial Inclussion: a Viability Approach to Hybrid System”, IEEE Trans. Automatic Control, vol. 47(1): pp. 2-20, 2002. W. P. M. H. Heemels, B. De Schutter, and A. Bemporad, “Equivalence of Hybrid Dynamical Model”, Automatica, vol. 37(7), pp.1085-1091, 2001. X. Wang, J. Guo, “The method of system reliability modeling based on Hybrid theory”, 2011 9th IEEE Int. Conf. on Reliability, Maintainability and Safety (ICRMS), pp. 199 – 207, June 2011. T. Krilavicius, Hybrid technique for hybrid system, Ph.D Thesis, The University of Twente, 2006, published. K. Ogata, Modern Control Engineering, 5th Edition, Prentice Hall, 2010. ISBN 13: 978-0-13-615673-4 S.A. Arogeti, D. Wang, and C.B. Low, “Mode idenfication of hybrid system in the present of fault”, IEEE Trans. on Industrial Electronics, vol. 57, No. 4, pp. 1452-1467, April 2010. Z. Yang, “A sufficient and necessary condition for the controllability of linear hybrid systems”, IEEE Int. Symposium on Intelligent Control, October 2003. A. Balluchi, L. Benvenuti, M.D. Di Benedetto, A. L. SangiovanniVincentelli,”Observability for hybrid system”, 42nd IEEE Conf. on Decision and Control, pp. 1159-1164, December 2003. Sumadi, Kuspriyanto, I. Munawar, “Observability for Linear Hybrid System”, National Conference on SATEK IV, University of Lampung, pp. 933-936, November 2011. J. Cortez, ”Discontinuous Dynamical system: a tutorial on solutions, Nonsmooth analysis, and stability”, IEEE Control Systems Magazine, pp. 36-73, June 2008. R. Goebel, R. G. Sanfelice, and A. R. Teel, “ Hybrid Dynamical System: Robust stability and control for Systems that combine Continuous-time and Discrete-time dynamics”, IEEE Control Systems Magazine, pp. 28-93, April 2009. J. P. Hespansha, Hybrid Control and Switched Systems: Lecture 5, Properties of hybrid systems, Lecture Note, University of California, 2005. T. Stauner, Formal Method in System Design: Properties of Hybrid System- A Computer Science Perspective, vol. 24, Kluwer Academic , pp. 223-259, 2004. J.D. Carpinelli, Computer Systems Organization & Architecture, Addsison-Wesley, 2000, ISBN: 0-201-61253-4 -o-

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

FUZZY C-MEANS ALGORITHM FOR ADAPTIVE THRESHOLD ON ALPHA MATTING R. Suko Basuki1)

Moch. Hariadi2)

Dept. of Computer Science Dian Nuswantoro University Semarang, Indonesia

Dept. of Electrical Engineering, Tenth of November Institute of Technology Surabaya, Indonesia [email protected]

[email protected]

Abstract— Image matting is a process of extracting the foreground object from an image that plays an important role in image editing. In this paper, FCM and Otsu algorithm, is used to generate the threshold value as an input of the alpha value. Mean Square Error (MSE) is used to measure the performance of both algorithms. The eexperimental results shows that the FCM algorithm produces a smaller threshold value so that the number of the error pixels was less than the Otsu algorithm. Keywords-component; Adaptive Threshold;

Fuzzy

C-Means;

Alpha

Matting;

I. INTRODUCTION Extraction the foreground object from the whole images is important in editing images. Separation accuracy of the foreground object from background is determined by part or all of an image pixel that is called pulling matte or digital matting. In order to formulate a digital matting method, input image (I) is assumed as a combination of a foreground image (F) and a background image (B). The color of pixel i-th, is assumed as a linear combination that correspondent of a foreground and background colors [1], where  is the pixel’s opacity component used to blend linearly between foreground and background. (1) Several methods are used in recent years, giving trimap as a starting point. Trimap is a rough drawing (usually hand-drawn) of the image input segmented into three regions; foreground (drawn in white), background (drawn in black) and unknown region (drawn in gray). Those three regions are typically used to solve the F and B simultaneously by iterative nonlinear optimization, which alternately is done by estimating F and B with α. In order to generate a good result of the unknown regions of the trimap should be made as small as possible. However, the weakness of this approach is difficulty in handling the images that have a pixel mixture or the foreground that has many holes [2]. Extracting the alpha matte from a natural image is used "closed-form" which is closely related to the colorization method [3]. The cost function is obtained from the assumption of local smoothness in the F and B, which are possible to be eliminated so that generates the quadratic cost function in α.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

R. Anggi Pramunendar 3) Dept. of Computer Science Dian Nuswantoro University Semarang, Indonesia [email protected]

The global optimum is used to this cost function to generate alpha matte that can be obtained by solving a sparse linear system. In order to differentiate F and B (see in fig. 6.b), scribble color (white for foreground and black for background) is used to calculate the alpha value of the closed-form method. This differentiation will be examined by eigenvectors of a sparse matrix that has close relationship with matrices used in spectral image segmentation algorithms. This approach also provides solid theory in instance analysis that provides valuable hints to recognize where the image scribbles should be placed. As an added value of the previous research, this paper proposed an alpha value generated by the adaptive threshold using Fuzzy C-Means (FCM ) algorithm for pulling matte. II. RELATED WORK The existance of image matting has a goal to solve compositing (eq. 1) for the unknown pixels. Most methods of image matting use trimap [4], [5], [6], [7], [8] as a companion to the image input for labeling foreground, background, or unknown pixels. This method usually carries out by utilizing some local regularity assumptions on F and B to estimate the value of each pixel in the unknown region. According to Bremen at al (2000) state that Knockout algorithm is segmented between F and B, following step is estimating the color probability of F and B into unknown region. The point is put in the unknown region, foreground F is counted from the pixel in perimeter of known foreground. The weight of the nearest pixel that known is set 1, and then this weight decreases directly proportional to the distance, for reaching the weight 0 is needed twice as distant as the nearest pixel. The same procedure is used to the beginning of calculating background based on the pixel from nearby known background. The assumption of some algorithms [7], [8] said that local foreground and background are come from the simple relatively color distribution. The Bayesian matting algorithm is the successful factor for Knockout Algorithm, where a mixture of oriented Gaussians is used to learn the local distribution and then α, F, and B are estimated as the most probably ones given in that distribution. Meanwhile, in the Poisson matting method is performed by optimizing the pixel alpha, foreground as well as background colors statistically. To reduce errors that caused

177

ISSN: 2088-6578

Yogyakarta, 12 July 2012

by misclassification of the complex color displaye in color sample [5], matte operations performed on the gradient directly. A smooth change intensity of the F and B are the basis for the formulation in Poisson Matting. Sun et al used global Poison matting as semi-automatic approach to estimate matte from an image gradient that given a user-supplied trimap. Robust calculation of the foreground and background has been performed by matting failure caused by a complex background that often cannot be solved by global Poisson matting. This robust calculation is also used to solve local Poisson matting, which manipulates a continuous gradient field in a local region. Several approaches were successfully used to extract the foreground from background object that have been proposed [9], [10], [11]. Several approaches have been performed to translate user-defined simple constraints (such as scribble or a bounding rectangle) to a min-cut problem. The completion of min-cut problem using binary segmentation which is then transformed into a trimap by erosion, however, the results are still fuzzy. Border matting [10] used a parametric alpha that put in a narrow strip around the hard boundary, cannot be performed for the similar cases to the hairs object, since the extent of the fuzzy region cannot be performed in this manner. The proposed method by [3] and [12] used boundary scribble for all images by reducing a quadratic cost function so that better suit for matting problem. Wang and Cohen [13] propose a scribble-based interface for interactive matting. The scribble is used as parameter a foreground and background pixel. This approach has produced some impressive results, however the approach has expensive process. Guan et al [13] proposed another scribble-based matting approach by adding the random walk approach [5] with an iterative estimation of color models. III. DESIGN OF ALGORTIHM Fig 1. shows the framework of the proposed image matting.

Image Input

Image Matting

CITEE 2012

A. FCM (Fuzzy C-Means) Algorithm FCM algorithm proposed by Bezdek in 1973 is the clustering methods of the "hard c-means" algorithm then improved by [15], and defined as follow. Let X={x1,x2, ...... xn}, where X is data set, and xi includes the number of s attributes. Fuzzy clustering of X divided into classes (2 ≤ c ≤ n), and V = {v1, v2, ..., vc} is the c centers. Respectively the sample cannot be strictly divided into certain types, however a certain degree of membership can be a member of the other categories. the criteria of the FCM clustering is defined as the objective function as shown in (equation . 2).

(2)

J (U, V) denotes the various types of the sample by the number of the cluster centers weighted is a weighted distance squared; (uik) shows the membership category of the i sample to xk. (3) where dik is Euclidean Distance of k sequences to the center I; m is a weighted index which is an important parameter in regulating the degree of fuzzy clustering, and m  (1, ∞).

(4) cluster center V defined as

Convert to grayscale

Determine threshold

(5) Threshold value obtained from the average of the maximum in the class with the smallest center and minimum at the class with the middle center, where the cluster pixel values using a 3-class concept in the FCM [16]. Fig. 2. Shows the thresholds comparison by FCM and Otsu .

Performance Measure

Fig. 1.Proposed Framework Overview In general, the content of the image covers foreground, background, boundary areas and noise. Gray images can be used as an important feature to distinguish between the contents so that the value of the gray pixels is usually chosen as the clustering feature [14].

178

Fig. 2. Threshold Results

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

B. Mean Square Error (MSE) MSE was used in order to measure the performance of image matting. Input images were comparing with the image after combined with the alpha parameter that obtained from threshold process. IV.

EXPERIMENTS AND RESULTS

The results show that the FCM algorithm was performed to generate a threshold value. The results of threshold values obtained from both algorithms were shown in the Table.1 and the fig. 3

ISSN: 2088-6578

Table 2.MSE of FCM and Otsu Algorithm Input Image

Mean Square Error FCM

OTSU

teddy.bmp

2841,424126

5669,397258

hair.bmp

1689,305723

2697,183302

bird.bmp

1785,980596

3751,389982

horse.bmp

2487,232621

5015,051846

2043,379

5055,08185

lion.bmp

Table 1. Threshold Result Input Image

Threshold FCM

OTSU

teddy.bmp

0,381355932

0,539215686

hair.bmp

0,335294118

0,423529412

bird.bmp

0,32745098

0,474509804

horse.bmp

0,331372549

0,470588235

lion.bmp

0,331372549

0,521568627

Fig. 4.Comparasion of Measurement Result In addition, this paper also measured the processing time to evaluate the performance of image matting. Comparison of the time elapsed between the FCM algorithm and Otsu algorithm is shown as in table 3 and fig. 5 below. Table 3. The Average Time Elapsed Results Input Image

The Average Time Elapsed FCM

OTSU

teddy.bmp

13,59544297

10,59543472

hair.bmp

15,85372108

11,24068757

Fig. 3. Threshold Comparison

bird.bmp

15,61203625

11,88594042

Next, threshold value was inserted into the alpha value and used for the purpose of matting [3], therefore the foreground object could be separated as illustrated in fig. 6.c. Furthermore, the foreground object that had been separated was blended with the input image as shown in fig. 6.e. This paper used MSE to measure the performance of image matting by comparing the input image as described in fig. 6.a, and the composite image as drawn in fig. 6.d. The measurement results were revealed in Table. 2 and fig. 4

horse.bmp

14,17363956

12,53119328

lion.bmp

15,02731435

13,17644613

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

179

ISSN: 2088-6578

Yogyakarta, 12 July 2012 [10].

[11].

[12].

[13].

[14].

[15]. [16].

CITEE 2012 C. Rother, V. Kolmogorov, A. Blake, “Grabcut : Interactive Foreground Extraction Using Iterated Graph Cuts,” ACM Transactions Graphics, Vol. 23, No. 3, August 2004, pp: 309314. Y. Boykov, M.P. Jolly, “Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images,” Proc. Eighth International Conference Computer Vision. 2001. A. Levin, D. Lischinski, Y. Weiss, “Colorization Using Optimization,” ACM Transactions Graphics, Vol. 23, No. 3, August 2004, pp: 689-694. Y. Guan, W. Chen, X. Liang, Z. Ding, and Q. Peng, “Easy Matting,” Proc. Ann. Conf. European Assoc. for Computer Graphics, 2006. C. Xiao Li, Z. Ying, S. Jun Tao, S. Ji Qing, “Method of Image Segmentation Based on Fuzzy C-Means Clustering Algorithm and Artificial Fish Swarm Algorithm,” Proc. IEEE Conference Intelligent Computing and Integrated System. 2010, pp:254-257. L. Bonian,”Fuzzy Mathematics and Its Applications,” Hefei University of Technology Press 2007, pp: 61 -67 G. Xiong, X. Zhou, L. Ji, “Automated Segmentation of Drosophila RNAi Fluorescence Cellular Images Using Deformable Models,” IEEE Transactions on Circuits and Systems, Vol. 15, No. 11, November 2006, pp: 2415-2424.

Fig. 5. Comparison of Processing Time V.

CONCLUSION

This paper was presented adaptive threshold value by using FCM for alpha matting and comparing with the Otsu algorithm . MSE was used to measure the performance of both algorithms. Experimental results showed that the FCM algorithm produced a smaller threshold value so that the number of the wrong pixels was less than the Otsu algorithm.

REFERENCES [1].

[2].

[3].

[4].

[5].

[6].

[7].

[8].

[9].

180

A. Levin, D. Lischinski, Y. Weiss, “A Closed-Form Solution to Natural Image Matting,” IEEE Transactions on Pattern Analysis And Machine Intelligence, Vol. 30, No. 2, February 2008, pp: 115. J. Wang, M. Cohen, “An Iterative Optimization Approach for Unified Image Segmentation and Matting,” Proc. 10th IEEE International Conference Computer Vision. 2005. L. Grady, T. Schiwietz, S. Aharon, R. Westermann, “Random Walks for Interactive Alpha-Matting,” Proc. Fifth IASTED International Conference Visualization, Imaging, and Image Processing. 2005. N.E. Apostoloff, A.W. Fitzgibbon, “Bayesian Video Matting Using Learnt Image Priors,” Proc. IEEE Conference Computer Vision and Pattern Recognition. 2004. J. Sun, J. Jia, C. Tang, H. Shum, “Poisson Matting. Image,” ACM Transactions on Graphics (TOG), Vol. 23, No. 3, August 2004, pp:315-321. Y.Y. Chuang, A. Agarwala, B. Curless, D.H. Salesin, R. Szeliski, “Video Matting of Complex Scenes,” ACM Transactions Graphics, Vol. 21, No. 3, July 2001, pp: 243-248. M.A. Ruzon, C. Tomasi, “Alpha Estimation in Natural Images,” Proc. IEEE Conference Computer Vision and Pattern Recognition. 2000. Y. Chuang, B. Curless, D.H. Salesin, R. Szeliski, “A Bayesian Approach to Digital Matting,” Proc. IEEE Conference Computer Vision and Patter Recognition. 2001 Y. Li, J. Sun, C. Tang, H. Shum, “Lazy Snapping,“ ACM Transactions Graphics, Vol. 23, No. 3, August 2004, pp: 303308.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Yogyakarta, 12 July 2012

ISSN: 2088-6578

181

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Analysis Of Performance Simulator Water Level Control At Fresh Water Tank In PLTU With Microcontroller M Denny Surindra Teknik Mesin Politeknik Negeri Semarang Jl. Prof. H. Sudarto, S.H., Tembalang, Semarang, 50329, Indonesia

[email protected] Abstract—Simulator designed to control the system using a microcontroller ATMega8535 and as screen media using computer, so that the water level can be monitored. The function of microcontroller is as a regulator of the performance of simulator, such as opening and closing the valve on minimum and maximum level. Simulator testing using a variety of openings valve as disorders leaked with turning 300 and 600. The results of simulator test fresh water tank system works well when the minimum water level, 1, 2, 3, and 4 and can be monitored using a computer in accordance with a pre-programmed level. If comparing of error system between computer with lamp indicator, so computer to a lower error system than the indicator lights. Keywords-water; level; control; microcontroller.

I.

INTRODUCTION

Control systems in the industrialized world increasingly lead to industrial automation. The purpose of automation is not only to increase productivity and reduce costs, but also in production quality and flexibility. With advances in technology are expanding rapidly in all fields, then the electronic equipment was created which has a higher speed, smaller dimensions, and has high reliability. There are several kinds of controls that can be used in various industries, such as PLC, op-amp, inverter, microcontroller, and others. From a variety of existing controls we choose a microcontroller as the reservoir water level control device because a microcontroller is cheaper, more compact, and more readily available. On the fresh water tank in the plant, the water must be as required by mixed-bed demine plant, so the water level maintained constant. If the water level in the fresh water tank is too high will cause the processing of fresh water in the mixed-bed is not in accordance with the standards because the dose for the treatment of rinse water is adjusted to the average flow would be excessive so that the mineral content of water is not produced in accordance with the standards. Meanwhile, when the water level in the fresh water tank is too low, the water produced from the water purification process in the mixed-bed will not be maximized or there are minerals in the water content to be fed to the boiler. To secure the fresh water tank mounted sensor for the minimum level and maximum level. At the minimum level of water hit the valve automatically opens and at the maximum level then the valve will close. Control of the fresh water tanks most still use manual systems. In this journal, fresh water tank is designing and implementing in the simulator water level control that can be monitored by a microcontroller-based.

182

Figure 1. Fresh water tank

II.

BASIC CONCEPTS

Simulation system on a fresh water tank with a technique of controlling and monitoring system is concentrated on a few basic parts. A. Water Level Sensor To find out the level of water in the fresh water tank required a water level sensor using a copper electrode length. The length of each electrode is different for every level that show water level has been determined. If the water level at the maximum conditions of the pump motor will soon off, and if the water level at a minimum, the pump motor will be on to maintained level water. B. Water Level Indicator For level indicator using LED that works when copper electrode touching the water surface which varies in length. C. Water Pump Controlling System The pump can be controlled by connecting the output pin microcontroller through control on / off. When microcontroller sends a positive signal (+5v) or ground signal (0v) to the motor driver circuit, the motor of pump work for start and stop working. D. Microcontroller AVR ATMega853 Microcontroller is the development of the microprocessor chip in the form of a large-scale VLSI (Very Large Scale IC). General features of the microprocessor and microcontroller is reprogrammable, meaning that the function of the microprocessor and microcontroller can be altered by changing the program, without changing the hardware. ATMega8535 has 64 bytes of register I/O that can be accessed as part of RAM

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

memory, or can also be accessed as I/O. As mentioned previously, if the register I/O access such as accessing data on the RAM memory register I/O address to register occupies the I/O, but if you register I/O access such as access I/O registers the I/O occupied memory address associated with the hardware I/O. E. Other equipment To control other equipment such as lights, solenoids, and motors with the microcontroller interface is needed between microcontroller pin with equipment that has a great power. Contactor that acts as a mechanical relay is used to drain current is controlled by a microcontroller that has only milliamps current. III.

ISSN: 2088-6578

Tests on ATMega8535 microcontroller circuit can be done by connecting it with the power supply circuit as a voltage source. Leg 40 is connected to the source voltage of 5 volts, while the leg 20 is connected to ground. Then the voltage measured at 40 feet using a voltmeter. From the test results obtained on the legs 40 voltage of 4,9 volts. The next step is to provide a simple program on the microcontroller ATMega8535.

RESEARCH METHODS

The first step in designing a water level control remedy is the use of a microcontroller to control the entire system that automatically reduces the complexity of design and control. Microcontroller get input level signal from the water level sensor. After variable input are processed, the output signal to decide whether to give the water pump on or off according to the status of the water level in the tank. Work system design flow chart shown in Figure 2. Start

Initial Serial

N

Empty tank? Y Pump On

Sensor Level 1

Lamp 1 on

Sensor Level 2

Lamp 2 on

Sensor Level 3

Lamp 3 on

Sensor Level 4

N

Lamp 4 on

Full Tank? Y Pump Off

Finish

Figure 2. Flow chart system

The series serves as a circuit diagram of the control center for the entire system. The main components of circuit are a microcontroller ATMega8535. In the IC is a program is loaded, so the circuit can be run in accordance with the desired. This microcontroller is a microprocessor chip and IC where there is a program memory (ROM) as well as a versatile memory (RAM), even some kind of microcontroller that has ADC facilities, PLL, EEPROM in one package. DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Figure 3. Design Circuit Diagram

IV.

RESULTS AND DISCUSSION

Fresh water tank simulators have the lower and upper reservoir, lower reservoir used to hold water as a source of water and to remove water from the reservoir after the water is used. Lower reservoir is filled with water prior to the specified limits. When the pump works on whistleblower must fill reservoir in a closed condition, when water began filling the reservoir above the electrodes have to read water level predetermined limits so that when the water touched one of the electrode signal lights will be lit in accordance with the current water level and if the water level touches the electrode at the maximum level in a matter of millisecond then turns off the pump. When a whistleblower opened the lights one by one will sign off on the water touching the electrode with water down the conditions to be discharged into the reservoir bottom, and at the electrode touching the water level 1 then the pump will work to fill the reservoir again.

183

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

Water Level (second) Computer

Lamp

Figure 5. Error system with the valve closed

System is given in the form of valvular disorders which opened with rounds of 300. Figure 4. Fresh water tank simulator TABLE II.

From the water level sensor is then received by a microcontroller which then ruled microcontroller relay for on-off signal lights in accordance with the conditions governing the water level and pump to work or not through an electronic AC switch which is connected to the pump. Prior to the relay and electronic AC switch, a voltage difference of water level sensor is received by the microcontroller is set in the comparator so as to adjust the relay according to a predetermined water level. Microcontroller output of one panel are output to connect to a computer that previously had made an appearance for the program may be associated with a microcontroller, so that the display on a computer in accordance with the working water level control that is controlled using a microcontroller.

Water Level Level 1 Level 2 Level 3 Level 4

DATA ERROR SYSTEM BY OPENING VALVE 300 probe (second) 31,12 46,60 55,24 62,96

∆t (second) Lamp computer 0,58 0,09 0,49 0,05 0,25 0,08 0,22 0,06

Error (%) Lamp computer 1,88 0,30 1,06 0,23 0,46 0,15 0,36 0,10

At the time of filling the reservoir with an open valve 300, the system error indicator on is 1,88% level 1, level 2 is 1,51%, 1,39% are level 3 and level 4 of 0,36%. While the error display on a computer system at is 0,30% level 1, level 2 is 0,23%, 0,15% are level 3 and level 4 by 0,10%.

Simulator testing fresh water tank system the data obtained the average time it takes to fill the reservoir and the reservoir emptied. A. Filling Reservoir TABLE I. Water Level Level 1 Level 2 Level 3 Level 4

DATA ERROR SYSTEM WITH THE VALVE CLOSED probe (second) 21,13 24,72 24,91 25,12

∆t (second) Lamp computer 0,48 0,19 0,44 0,18 0,4 0,14 0,33 0,1

Error (%) Lamp computer 2,27 0,89 1,78 0,73 1,61 0,56 1,31 0,40

Testing the charging system error reservoir with a closed valve on the indicator light is 2,27% level 1, level 2 is 1,78%, 1,61% for level 3 and level 4 is 1,31%. While the error display on a computer system at level 1 is 0,89%, is 0,73% level 2, level 3 is 0,56% and 0,40% for level 4. Thus forming a graph of the time difference and time difference indicator lights on the display where the time taken to touch the water surface of the probe as a justification.

Lamp

Water Level (second) Computer

Figure 6. Error system opening the valve with 300

System is given in the form of valvular disorders which opened with rounds of 600. TABLE III. Water Level Level 1 Level 2 Level 3 Level 4

DATA ERROR SYSTEM BY OPENING VALVE 600 probe (second) 39,5 73,83 116,55 213,39

∆t (second) Lamp computer 1,19 1,13 0,87 0,54 0,6 0,41 0,78 0,51

Error (%) Lamp computer 3,01 2,87 1,18 0,73 0,51 0,35 0,36 0,24

At the time of filling the reservoir with an open valve 600, the error indicator light system on is 3,25% level 1, level 2 is 0,85%, 0,09% is level 3 and level 4 is 0,96%. While the error display on a computer system at is

184

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

3,2864% level 1, level 2 is 1,3448%, 0,37% is level 3 and level 4 is 1,5727%.

Lamp

ISSN: 2088-6578

At the time of leakage of the reservoir with the valve open 600, the system error indicator lamp is 2,866% level 4, level 3 is 1,242%, level 2, namely 1,089%, and 1 level of 0,98%. While the error display on a computer system at level 4 is 1,57%, 0,83% are level 3, level 2, namely 0,62%, and 1 level of 0,57%.

Water Level (second) Computer

Figure 7. Error system opening the valve with 600

B. Disclosure of Reservoir System is given in the form of valve disorders which opened with rounds of 300. TABLE IV. Water Level Level 4 Level 3 Level 2 Level 1

DATA ERROR SYSTEM BY OPENING VALVE 30 probe (second) 62,29 71,63 216,22 259,97

∆t (second) Lamp computer 0,64 0,50 0,59 0,34 0,95 0,57 0,49 0,27

0

Error (%) Lamp computer 1,02 0,81 0,81 0,47 0,44 0,26 0,18 0,10

The response of the indicator lights and the display has a time lapse of time with the probe touching the surface of each level. At the time of leakage of the reservoir with the valve open 300, system error indicator on is 1,02% level 4, level 3 is 0,81%, is 0,44% level 2 and level 1 of 0,18%. While the error display on a computer system at level 4 is 0,81%, 0,47% is level 3, level 2 is 0,26%, and 1 level of 0,10%.

Lamp

Water Level (second) Computer

Lamp

Figure 9. Error system opening the valve with 600

System is given in the form of valvular disorders which opened with rounds of 900 or full opened. TABLE VI. Water Level Level 4 Level 3 Level 2 Level 1

DATA ERROR SYSTEM BY OPENING VALVE 900 probe (second) 17,01 19,20 22,03 45,50

Water Level (second) Computer

Lamp

System is given in the form of valvular disorders which opened with rounds of 600. TABLE V. Water Level Level 4 Level 3 Level 2 Level 1

DATA ERROR SYSTEM BY OPENING VALVE 600 probe (second) 42,67 64,65 92,72 132,83

∆t (second) Lamp computer 1,22 0,67 0,80 0,53 1,01 0,57 1,30 0,76

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

Error (%) Lamp computer 2,86 1,57 1,24 0,83 1,08 0,62 0,98 0,57

Error (%) Lamp computer 1,10 0,64 0,88 0,59 0,82 0,52 0,76 0,47

At the time of leakage of the reservoir with the valve open 900 or fully open, the system error level indicator lamp 4 is 1,10%, 0,88% are level 3, level 2, namely 0,82%, and 1 level of 0,76%. While the error display on a computer system at level 4 is 0,641%, level 3 is 0,599%, level 2 is 0,526% and 0,470% for level 1.

Water Level (second) Computer

Figure 8. Error system opening the valve with 300

∆t (second) Lamp computer 0,18 0,10 0,17 0,11 0,18 0,11 0,35 0,21

Figure 10. Error system opening the valve with 900

V.

CONCLUSION

After testing the system simulator fresh water tank and retrieval of data it can be concluded as follows: 1. Fresh water tank simulator systems can work well when the minimum water level measurements, level 1, level 2, level 3, and level 4 and can be monitored using a computer in accordance with a programmed. 2. Error on the computer system is lower compared to the system error indicator lights.

185

ISSN: 2088-6578

Yogyakarta, 12 July 2012

REFERENCES [1]

[2]

[3]

186

Agus Margiantono and Andi Kurniawan Nugroho, “Pengendalian Tinggi Muka Cairan Berbasis fuzzy (fuzzy Based Liquid Height Controlling)”, Fakultas Teknik, Universitas Semarang. Antoni Susiono, Handy Wicaksono, and Hany Ferdinando, “Aplikasi Scada System pada Miniatur Water Level Control”, Jurusan Teknik Elektro, Fakultas Teknologi, Universitas Kristen Petra Surabaya, 2006. Candra, “Pembuatan Level Kontrol Menggunakan Variasi Putaran Motor”, Prodi Konversi Energi, Jurusan Teknik Mesin, Politeknik Negeri Semarang, 2010.

[4]

[5]

[6]

[7]

CITEE 2012

Danang, “Pembuatan Alat Pengontrol Tekanan Dengan Pemrograman PLC”, Program Studi Konversi Energy, Jurusan Teknik Mesin, Politeknik Negeri Semarang, 2010. Eko Ari Bowo, Mohamad, “Mata Kuliah Mikrokontroler”, Program Studi D3 Ilmu Komputer, Jurusan Matematika dan Ilmu Pengetahuan Alam, Universitas Sebelas Maret Surakarta, 2009. Hidayat, Wahyu, “Sistem Pengontrol Level Ketinggian Air Dengan Menggunakan Tampilan Visual Basic Pada PC Berbasis Mikrokontroler AT89S52”, Prodi Teknik Elektronika, Jurusan Teknik Elektro, Politeknik Negeri Sriwijaya, 2010. S.M. Khaled Reza, Shah Ahsanuzzaman, and S.M. Mohsin Reza, “Microcontroller Based Automated Water Level Sensing and Controlling: Design and Implementation Issue”, Proceedings of the World on Engineering and Computer Science vol.1, WCECS, San Francisco, 2010.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

An FPGA Implementation of Automatic Censoring Algorithms for Radar Target Detection Imron Rosyadi Electrical Engineering Department, Jenderal Soedirman University Purbalingga, Jawa Tengah, Indonesia 53371 [email protected]

Abstract— We design and test a novel FPGA-based systemon-chip radar detector that uses ACOSD algorithms for detecting targets in clutter with a log-normal distribution. The detector is implemented on an FPGA Altera Stratix II using parallelism and pipelining. For a reference window of 16 cells, the processor works properly with a processing speed up of to 115.13 MHz and processing time of only 0.26 µs. This processor uses much less of the computational resources than DSP. Keywords: CFAR; FPGA; Radar; SoC

I.

INTRODUCTION

Radar is an important apparatus in both military and civil systems. It is an electromagnetic system that detects, locates, and recognizes target objects such as airplanes and missiles. Radar transmits electromagnetic signals and then receives echoes from target objects to get their locations or other information. The received signal in a radar system is always accompanied by noise and clutter such as echoes from the ground, sea, rain, birds, insects, chaff and atmospheric turbulence. These disturbances can cause serious performance degradation for radar systems by making them conclude these echoes to be targets (false alarms). To overcome this problem and make a good decision, the receiver in a radar system should achieve a constant false alarm rate (CFAR) and maximum probability of target detection. Modern radars usually make the detection decision automatically by using an adaptive threshold based on a CFAR architecture, where the threshold is determined dynamically based on the local background noise/clutter power, not by a brute-force constant value. Though the theoretical aspect of CFAR detection is well developed [1–4], its practical application is still behind. The computing requirements in radar signal processing for a high data rate and real-time consideration should be met by advanced technologies that use high parallel computing techniques associated with pipeline organization. In addition, because of the different types of clutter in a real environment [5], the radar signal processing system must be designed in a very flexible way with an easy interface. Due to all the above requirements, system-on-chip (SoC) architecture becomes an attractive solution for the real-time CFAR processor. In an SoC, all components of a computer such as the processor, memory, or other electronic systems are presented in a single integrated circuit, where they operate in a specified manner to build

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

the required system architecture. In addition to their flexibility, SoC designs consume less power and have a lower cost and higher reliability than multi-chip systems. Recent advances in field programmable gate array (FPGA) technology have made SoC fabrication much faster and easier than before. In this paper, we present the FPGA-based design and realization of a recently published CFAR detector [6], the automatic censored ordered statistics detector (ACOSD). ACOSD has two sub-algorithms: the Backward BACOSD and the Forward F-ACOSD. These algorithms have been developed to achieve reliable target detection in the presence of severe clutter modeled by a log-normal distribution. This distribution is characterized by very long tails and is, therefore, well suited to represent spiky behavior in high-resolution radars. The SoC architecture of the ACOSD is implemented on an Altera Stratix II EP2S60 FPGA chip. This paper is organized as follows. Section II introduces the basic principles of CFAR detection and the related research done on hardware realization of CFAR algorithms. Section III describes both the B/F-ACOSD algorithms. The SoC architecture for the two detectors is given in Section IV. Section V presents the simulation and realization of the system. In Section VI, conclusions and future research plans are presented. II.

ADAPTIVE CFAR DETECTION AND RELATED HARDWARE WORK

In the past, a target is detected when the output of the radar receiver crosses a predetermined fixed threshold set to achieve a specified probability of false alarm. If this threshold is too low, then more targets will be detected, but the number of false alarms will be high. Conversely, if the threshold is too high, then fewer targets will be detected and the number of false alarms will also be low. In modern radar systems, the threshold is set to reach a required CFAR. If the background noise, clutter, and interference are constant with time and space, then a fixed threshold can be chosen to provide a specified probability of false alarm. In natural conditions, unwanted clutter and interference sources affect the noise level and make it change spatially and also temporally. In this case, an adaptive threshold can be used, where the threshold is raised and lowered to maintain CFAR detection. A typical CFAR processor is shown in Fig. 1. The input signals are set serially in a shift register. The content of the cells, commonly called reference cells or the

187

ISSN: 2088-6578

Yogyakarta, 12 July 2012

reference window, surrounding the cell under test (X0) is processed using a CFAR processor to get the adaptive threshold T. Then, the value of X0 is compared with the threshold for making the decision. The cell under test X0 is declared as a target if its value exceeds the threshold value T. The first and simplest CFAR detector is the cellaveraging CFAR (CA-CFAR) proposed by Finn [7].

Figure 1. Block diagram of a typical CFAR algorithm.

In this detector, the adaptive threshold is obtained from the arithmetic mean of the reference cells. The CACFAR is optimal in the sense that it maximizes detection probability in a homogeneous background when the reference cells contain independent and identically distributed (IID) observations governed by an exponential distribution. However, the CA processor performance suffers significantly in a non-homogeneous background; the false alarm rate increases considerably at the clutter edges, and target masking occurs in multiple situations. Therefore, a different processing scheme, known as the ordered statistics (OS) CFAR processor, has been introduced to alleviate these problems to some degree [8,9]. By virtue of Fig. 1, OS-CFAR processors estimate the clutter power from one of the N ordered statistics of the observations in the reference window. The main idea behind the OS-CFAR processor is that by using a clutter power estimate that is a function of only one of the sorted observations in the reference window, the detector performance can be made robust in situations where CACFAR falls short due to its reliance on arithmetic averaging of the observations. Adaptive threshold techniques used in radar detection to maintain CFAR, when the total background noise/clutter level is unknown, are usually addressed under certain assumptions. For many years, clutter echoes in radars with low resolution capabilities were considered to have a Gaussian probability density function (PDF), which results in Rayleigh-distributed amplitude. Many algorithms have been proposed to regulate the false alarm in different signal processing scenarios when the clutter is Rayleigh distributed [10]. Adjusting the detection threshold in these algorithms involves estimating a single scale parameter; that is, the mean square value of the Rayleigh PDF or any related parameter. The detection loss due to the a priori uncertainty of the clutter power is obviously dependent on the efficiency of the adopted estimator. In several applications of radar detection, the clutter amplitude may not be Rayleigh distributed. This is true when working with high-resolution radar, low grazing

188

CITEE 2012

angles, and horizontal polarization at high frequencies. In particular, data collected by high-resolution radars [5,11] have confirmed the non-Rayleigh nature of certain land and sea clutter, which is often described by a multiparameter distribution to achieve a satisfactory fit to experimental data. Distributions most commonly adopted to model non-Rayleigh clutter include the log-normal, Weibull, and K distributions [12,13,14,15,16]. These distributions are characterized by a shape parameter to rule the actual behavior of their tails, in addition to a scale parameter that is related to the mean square value of the distribution. We note that the theoretical development of CFAR detection is not followed by hardware implementation. However, a few attempts at hardware implementation of CFAR processors have been reported in the literature. In particular, a parallel-pipelined hardware implementation of a CA-CFAR-based target detection system in a noisy environment was presented using the TMS320C6203 DSP and FPGA devices [17]. The processing time achieved for this implementation was about 420 ms using 32 reference cells with 8 guarded cells. Another example of OS-CFAR, using the Virtex-II-V2MB100 development kit, leads to an execution time of the detection algorithms within 0.48 ms (0.6 µs) for a data set of 800 samples but using only 16 reference cells [18]. These delays are not suitable for high-resolution detection requiring less than 0.5 µs per cell. Other work has presented a versatile hardware architecture for CFAR processors based on a linear insertion sorter that implements CA, OS, SO and GO variants of the CFAR algorithm [19,20]. Even if the proposed architecture was configurable within one clock cycle to switch between all variants, no information related to the real time aspect was presented. Other implementations of CA-CFAR and OS-CFAR using parallel structure have been presented using DSP and FPGA [21], but the execution delays are not optimized. As mentioned, many parts of the target detector are implemented using DSP and hardware technologies, but the performance of new CFAR techniques in the presence of noise with a log-normal distribution operating in realtime is unknown to date. Alsuwailem et al. [22] implemented an automatic censoring CFAR detectors called TM-CFAR and Automatic Censored Cell Averaging (ACCA) ODV CFAR. However, the implementation does not consider the real-time aspects where an offline validation is done without allowing interaction with the architectural environment. Furthermore, no standard interface is given to facilitate communication with the radar system environment. Winkler et al. [23] used SoC with a reconfigurable processor inside for an automotive radar sensor. The processor is responsible for controlling the custom logic and IO tasks. Automatic censoring detectors called ACOSD CFAR have been recently introduced by Almarshad et al. [6]. The algorithm achieves automatic censoring of an unknown number of interfering targets and performs adaptive thresholding for actual target detection in lognormal clutter. Because of an increase in radar resolution, the log-normal distribution becomes more reliable than the Rayleigh distribution for representing the amplitude of

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

clutter. Meanwhile, the automatic censoring algorithms developed for Rayleigh clutter presented in [24] cannot straightforwardly be extended to the case where clutter samples are drawn from a log-normal distribution. Recently, it has been reported that hardware implementation of the Forward A-COSD target detector over the Stratix II development board can operate up to 115 Mhz with a delay of 0.29 µs for a log-normal distribution [25]. Therefore, it is necessary to investigate the performance of the system after including the appropriate interface delays related to interfaces and wrappers. In the rest of this paper, we will consider a new implementation of the ACOSD-CFAR proposed in [6,26], algorithms for SoC implementation using the target detector as a hard IP and NIOS II core processors embedded within the FPGA to facilitate the interfacing of the hard IP with the external devices. We consider the ACOSD and its real-time implementation. III.

ACOSD ALGORITHMS

The ACOSD and its sub-algorithms consist of two steps: removing the interfering reference cells (censoring step) and the actual detection step. Both steps are performed dynamically by using a suitable set of ranked cells to estimate the unknown background level and set the adaptive thresholds accordingly. This detector does not require any prior information about the clutter parameters, nor does it require the number of interfering targets.

ISSN: 2088-6578

to determine whether it corresponds to an interfering target or a clutter sample without interference. At the (k+1)th step, the sample X(N–k) is compared with the threshold Tck and a decision is made according to the test,

(4) where: Tck=X(1)

1-αk

X(p)

αk

(5)

Hypothesis H1 represents the case where X(N – k), and thus the subsequent samples X(N – k + 1), X(N – k + 2), … , X(N) correspond to clutter samples with interference, while H0 denotes the case where X(N – k) is a clutter sample without interference. The successive tests are repeated as long as the hypothesis H1 is declared true. The algorithm stops when the cell under investigation is declared homogeneous (clutter sample only) or, in the extreme case, when all the N – p highest cells are tested; that is, k = N – p. Fig. 2 shows the block diagram of the BACOSD algorithm.

In a CFAR processor, the input samples {Xi: i=0,1, …,N} are stored in a tapped delay line. The cell with the subscript i=0 is the cell under test that contains the signal to be determined as an actual target or not. The last N surrounding cells are the auxiliary cells used to construct the CFAR procedure. In the ACOSD, the N surrounding cells are ranked in ascending order according to their magnitudes to yield X(1)≤ X(2) ≤ X(1)≤ ...≤ X(p)≤... ≤. X(N)

(1)

The sorted cells are then sent to the detection component, which has two different algorithms: Backward ACOSD (BACOSD) and Forward ACOSD (FACOSD). A. The B-ACOSD Algorithm In the B-ACOSD algorithm, sample X(N) is compared with the adaptive threshold Tc0 defined as: Tc0=X(1)1-α0X(p)α0

(2)

th

where X(p) is the p largest sample and α0 is a constant chosen to achieve the desired probability of false censoring (Pfc). It is found that values of P>N/2 yield reasonable good performance in detection [6]. If X(N)Tco, the algorithm decides that the sample X(N) is a return echo from an interfering target. In this case, X(N) is censored and the algorithm proceeds to compare the sample X(N–1) with the threshold Tc1=X(1)1-α1X(p)α1

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

(3)

Figure 2. Block diagram of the B-ACOSD algorithm.

In a detection step, the cell under test X0 is compared with the threshold Tak to decide whether a target is present or not according to:

(6) Hypothesis H1 denotes the presence of a target in the test cell, while hypothesis H0 denotes the absence of a target. In B-ACOSD CFAR, the threshold Tak is defined as Tak=X(1)1-βkX(N-k) βk

(7)

189

ISSN: 2088-6578

Yogyakarta, 12 July 2012

where the value of βk is selected according to the design probability of false alarm (Pfα) for k interfering targets found in the censoring step. B. The F-ACOSD Algorithm The F-ACOSD algorithm starts by comparing sample X(p +1) with the threshold Tc0 given by Tc0=X(1)1-α0X(p)α0

(8)

where α0 is a constant chosen to achieve the desired (Pfc) for the F-ACOSD algorithm. In contrast with the BACOSD algorithm, if X(p+1) > Tc0, the algorithm decides that X(p+1) is a return echo from an interfering target, and it terminates. If, on the other hand, X(p+1) < Tc0 , the algorithm decides that the sample X(p+1) corresponds to a clutter sample without interference. In this case, the algorithm compares the sample X(p+2) with the threshold given by Tc1=X(1)1-α1X(p)α1

(9)

to determine whether it corresponds to an interfering target or a clutter sample without interference.

CITEE 2012

repeated as long as the hypothesis H0 is declared true. The algorithm stops when the cell under investigation is declared nonhomogeneous (i.e., clutter plus interference sample) or, in the extreme case, when all the N – p highest cells are tested; that is, k = N – p. Fig. 3 shows the block diagram of the F-ACOSD algorithm. In detection step, the cell under test X0 is compared with the threshold Tc0 to decide whether a target is present or not according to:

(12) Hypothesis H1 denotes the presence of a target in the test cell, while hypothesis H0 denotes the absence of a target. In F-ACOSD CFAR, the threshold is defined as Tak=X(1)1-βkX(p+k) βk

(13)

where the value of βk is selected according to the design probability of false alarm (Pfα) for k interfering targets found in the censoring step. C. Threshold Values The threshold selection is a key element in ACOSD algorithms [6]. These thresholds should be selected to reach low probability of hypothesis test error in a homogeneous environment. We used a Monte Carlo simulation with 500,000 independent runs to obtain the threshold values by maintaining desired values of Pfα and Pfc. Table I gives the threshold parameters αk and βk obtained using the B-ACOSD with Pfα = 0.001 and Pfc = 0.01. Table II shows values of αk and βk for the F-ACOSD with the same probabilities. TABLE I.

(N,p) (16,12) (36,24) (N,p) Figure 3. Block diagram of the F-ACOSD algorithm.

At the (k+1)th step, the sample X(p+k+1) is compared with the threshold Tc0 and a decision is made according to the test,

(16,12) (36,24)

TABLE II.

where

Tck=X(1)

1-αk

X(p+k)

αk

(10)

(N,p)

(11)

(16,12)

Hypothesis H1 represents the case where X(p+k+1), and thus the subsequent samples X(p+k+2), X(p+k+3), …, X(N),correspond to clutter samples with interference, while H0 denotes the case where X(p+k+1) is a clutter sample without interference. The successive tests are

190

(36,24) (N,p) (16,12)

THRESHOLD PARAMETERS FOR B-ACOSD (PFΑ = 0.001 AND PFC = 0.01). 1 2.59

2 2.03

3 1.70

k 4 1.44

5 -

6 -

7 -

1.63

1.88

2.12

2.37

2.64

-

-

2.53

2.15

1.95

1.812

1.7

1.60

1.52

1.35

1.46

1.56

1.65 k

1.73

1.8

1.87

8 -

9 -

10 -

11 -

12 -

13 -

-

-

-

-

-

-

1.44

1.36

1.3

1.22

1.15

-

1.94

2.02

2.1

2.18

2.26

2.34

THRESHOLD PARAMETERS FOR F-ACOSD (PFΑ = 0.001 AND PFC = 0.01). 1 1.44

2 1.46

3 1.53

k 4 1.74

5 -

6 -

7 -

2.64

2.37

2.12

1.88

1.63

-

-

1.15

1.15

1.15

1.15

1.16

1.16

1.17

2.34

2.26

2.18

2.1 k

2.02

1.94

1.87

8

9

10

11

12

13

-

-

-

-

-

-

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

(36,24)

IV.

-

-

-

-

-

-

1.19

1.21

1.26

1.31

1.46

-

1.8

1.73

1.65

1.56

1.46

1.35

ACOSD SYSTEM ARCHITECTURE

A. Design Flow The design flow for the system architecture development consists of eight steps, starting from a Monte Carlo simulation using Matlab® to software development using NIOS-II IDE®. The Monte Carlo simulations define the thresholds for some values of N and p. The B-ACOSD and F-ACOSD CFAR algorithms for specific values of N and p are executed under a Matlab® fixed-point simulation to improve performance and maintain accuracy when implemented in a digital system. HDL coding and test-bench simulation are done with ModelSim® from Mentor Graphics. Integration of all components in an SoC is done using SOPC-Builder®. The software provides useful and easy tools to connect many logic elements ranging from processor, memory, to user-logic elements. We used Quartus II® from Altera for synthesis, timing simulation and downloading the design into the FPGA chip. The driver and software needed by the system were coded, compiled, and downloaded using NIOS-II IDE®. We used C-language at this stage. B. System Overview The overall SoC design consists of five main modules: processor, on-chip ROM, ACOSD CFAR detector, input/ROM interface, and output/RAM interface, as shown in Fig. 4. All of the blocks are connected by bus interface. The processor mastered all communications between the hardware.

ISSN: 2088-6578

stored first in a 16-MB flash ROM provided by a development board and connected to the system through a flash ROM driver. The censoring results are stored in a 2MB external SSRAM controlled by an SSRAM driver. In terms of hardware, computation of the exponential equations (5) and (7) is hard and at high cost, especially when performed using floating-point format. To reduce the hardware complexity and computational time, equations (5) and (7) are converted, respectively, into logarithmic forms as follows: log Tck=(1-αk).logX(1)+αk.logX(p)

(14)

log Tak=(1-βk).logX(1)+βk.logX(N-k)

(15)

In this form, power computation becomes a simple multiplication, and the multiplication becomes an addition. Because logarithmic computation in hardware is a complex and slow task, the logarithmic computation is simplified using a look-up table. The look-up table contains a range of a log-normal distribution with μ = 1 and σ = 1.1, as suggested in [6], based on real radar input data. Following the number representation change to logarithmic form, the test cell value was also converted accordingly. This logarithmic conversion was performed using the same look-up table mentioned above. The look-up table resides on a 32K on-chip ROM inside an FPGA. The data distribution resolution in the 32K on-chip ROM is 0.0610. MATLAB fixed-point ACOSD simulation with this resolution gives censoring results as good as its real-number simulation. All of the above modules along with the user-defined ACOSD censoring were integrated using SOPC-Builder®. The Avalon switch fabric interconnects all of these peripherals in master/slave configuration. In this design, a single Nios II processor acts as a master and the other peripherals include the ACOSD IP core as the slaves. The addressing configuration can be done by SOPC-Builder® either manually or automatically. C. General ACOSD Architecture Organization The CFAR detector comprises five main modules: I/O interface, shift register, sorting module, censoring module, and comparator as shown in Fig. 5. The shift register consists of N reference cells, G guard cells, and the test cell X0 . The test cell is surrounded symmetrically by its reference cells and guard cells. The length of the shift register is given as: L=N+G+1

Figure 4. Block diagram of the system-on-chip.

The SoC uses a Nios II soft-core as the processor. Nios II is a 32-bit embedded-processor architecture designed specifically for the Altera family of FPGAs. Nios II has key features such as custom instructions and easy custom peripherals management. Nios II is offered in 3 different configurations: Nios II/f (fast), Nios II/s (standard), and Nios II/e (economy). We use Nios II/s core because it is suitable to maintain a balance between performance and cost. Input signal from an envelope detector can be sent directly to the Avalon bus through an input interface or

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

(16)

In a register with length L, each datum streamed serially into the input will need (L + 1) /2 clocks to be a cell test. The shift register contents, the N reference cells, are sent in parallel manner to the sorting module. The test cell from the shift register is delayed for some clocks until the associated threshold value is computed. After the N reference cells get sorted, some of them are subjected to an automatic censoring mechanism. Both censoring algorithms of the B-ACOSD and F-ACOSD reside in the censoring module. These algorithms will define the threshold value needed to decide whether the cell under test is a target or not. The decision is made by comparing

191

ISSN: 2088-6578

Yogyakarta, 12 July 2012

the values of threshold and the cell under test where the censoring output can be obtained either from the BACOSD or F-ACOSD, or both of them.

CITEE 2012

/2. The advantage of this circuit is twofold: The logic size is extremely small and the maximum value can be found with only N stages [27]. Furthermore, the architecture is fully pipelined; after each clock cycle, the cells get sorted when all stages are occupied. E. B-ACOSD Censoring module The B-ACOSD algorithm is described as follow. From the sorted reference cells, the number of interfering cells should be decided first by comparing Tck with X(N – k) for k < (N – p). The number of interfering targets will define the threshold value Tαk to be compared with the cell under test X0.

Figure 5. ACOSD-CFAR detector architecture.

D. Sorting module The sorting algorithm is the operation that puts elements of a list in a certain order. This operation plays a crucial role since it consumes long computation time, and constitutes a bottleneck for the real-time processing of the target detector. In this work, the sorting circuit is based on a parallel bubble-sort that is implemented over an array of compare-swap units.

Since there is no dependency between the input/output data of Tck and Tαk, both computations are performed simultaneously in a parallel way, as shown in Fig. 6. The Tck value depends on X(1), X(p), and αk, while Tαk depends on X(1), X(N – k), and βk . This parallelism technique will accelerate the computational time, but affect the logical elements required since all values of Tck and Tαk should be computed for all possible numbers of interfering targets. Tck and Tαk in the censoring module are computed in logarithmic form based on (14) and (15). More detailed representation of the censoring block is shown in Fig. 7. Equations (14) and (15) need a multiplication stage followed by an addition stage. Multiplication and addition for all values of k are performed in a parallel way. For N = 16 and p = 12, the censoring module needs 23 multipliers and 9 adders in parallel. To maintain pipelining strategy in this parallel process, some delay blocks are added to the module.

Figure 6. Parallelism in the B-ACOSD architecture.

Figure 7. Block diagram of B-ACOSD censoring algorithm.

The bubble-sorting algorithm compares every two elements, and then decides which one is the greater. The compare-swap circuit task is to compare the two input elements: If the first element is greater than the second, the two elements are swapped, and no change is performed otherwise. This operation is repeated for each pair of adjacent elements till the end of the entire data array. The process can be performed simultaneously for every adjacent pair, so as to speed up the processing time through what is called parallel bubble sorting.

Given Tck for all possible values of k, the censoring module uses a masking method to determine the number of interfering targets. All the threshold values of Tck are fed to N – p parallel comparators. The output of these comparators is a binary word of length N – p. This binary code is applied to a mask circuit, which determines the number of interfering targets. For example, if N = 16 and p = 12, the values of Tc0, Tc1, Tc2, and Tc3 are compared with X(16), X(15), X(14), and X(13) to get binary decisions d0, d1, d2, and d3, respectively. If X(N – k) is greater than Tck, then dk is logic 1, otherwise it is logic 0. The binary decision d (d = d0d1d2d3) decides which

Note that if the number of elements to be sorted is N, the number of the stages will be N; hence, the number of the compare-swap circuits units will be given by N(N – 1)

192

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

threshold Tαk needs to be compared with the cell under test X0. The masking look-up table is shown in Table III.

ISSN: 2088-6578

are compared with X(13), X(14), X(15), and X(16) to get the binary decisions d0, d1, d2, and d3, respectively.

The threshold (Tαk) that results from the masking process is finally compared with the cell X0. In this single comparator, if the value of X0 is greater than the threshold Tαk, then the CFAR processor output is 1, which means a target is present, otherwise the output is 0, which means there is no target in the cell under test. TABLE III.

MASKING LOOK-UP TABLE FOR N = 16 AND P = 12.

Binary decision d0 d1 d2 0 x x 1 0 x 1 1 0 1 1 1 1 1 1

d3 x x X 0 1

No. of interfering target (k) 0 1 2 3 4

Threshold (Tαk) Tα0 Tα1 Tα2 Tα3 Tα4

x , don’t care F. F-ACOSD Censoring module The F-ACOSD algorithm flow chart is shown in Fig. 10. In this algorithm, the number of interfering cells is decided by comparing Tck with X(p+k+1) for k < (N–p). If the number of interfering cells is determined, then the corresponding threshold value Tαk is compared with the cell under test X0. Parallelism has been also used in the F-ACOSD censoring module, as shown in Figs. 8 and 9. With such a setting, the number of multipliers and adders of the FACOSD censoring module are exactly the same as those of the B-ACOSD censoring module.

Figure 9. The F-ACOSD censoring block.

V.

SIMULATION AND REALIZATION

The overall SoC consists of an NIOS-II processor, onchip ROM, ACOSD processor, input/ROM interface, and output/RAM interface that have been implemented on a Stratix II EP2S60F672C3N FPGA chip. Using an Altera development board, the configured FPGA chip is connected to an SSRAM, Flash ROM, and other I/O’s. Since the SoC design is modular, each module can be tested individually. To verify the design, we performed four types of simulations: MATLAB® simulation, MATLAB® fixed-point simulation, Modelsim® functional simulation, and Quartus II® timing simulation. An input should take (L + 1) /2 clocks to be positioned in the center of the tapped delay line surrounded by N reference cells and G guard cells. The sorting module needs N clocks to sort N reference cells. The clock needed by the censoring module is independent of the number of reference cells. The pipeline architecture in the censoring module only needs one clock for the parallel multiplier, one clock for the parallel adder, one clock for the first comparator, one clock for masking, and one clock for the final comparator. TABLE IV. Module Shift Register Sorting Censoring Module TOTAL

Figure 8. Parallelism in F-ACOSD architecture.

In the masking stage, the F-ACOSD censoring module also uses Table III as the look-up table. Unlike the B-ACOSD module, the comparator inputs producing the binary mask codes in the F-ACOSD module are reversed. For N = 16 and p = 12, the values of Tc0, Tc1 , Tc2 ,and Tc3

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CLOCK NEEDED FOR EACH MODULE Clock needed (L+1)/2 N 5 (L+1)/2+N+5

Clock needed for (N,p)=(16,12) 10 16 5 31

The FPGA implementation results for SoC with N = 16 and p = 12 shows that the processor can achieve a maximum operating frequency of 115.13 MHz. Because 31 clocks are needed for one CFAR detection, this implies that the processing time to perform a single run is 0.26 μs. This processing time capability is well suited for a wide class of high-resolution radars [6]. The ACOSD SoC also efficiently uses the FPGA hardware resources. The SoC only needs 9% of the combinational ALUTs, 15% of the dedicated logic registers, 12% of the memory elements, and 35% of the DSP elements, as shown in Table V.

193

ISSN: 2088-6578

TABLE V.

Yogyakarta, 12 July 2012

HARDWARE IMPLEMENTATION RESULTS

Elements Combinational ALUT’s Dedicated Logic Registers Total Register Total memory bits DSP block 9-bit elements Total PLL’s

VI.

Number of Elements 4,312/48,352 (9%) 7,077/48,352 (15%) 7251 313,600/2,544,192 (12%) 100/288 (35%) 1/6 (17%)

CONCLUSION

In this work, we presented a novel hardware implementation of a CFAR processor for radar target detection. To the best of our knowledge, this work is the first to address real-time implementation of CFAR processors for high-resolution radars well adapted for desert environments. The proposed architecture is implemented as an SOC integrating a Nios II Core processor, on-chip ROM, and ACOSD IP core using Avalon Switch Fabric. Our hardware architecture implemented with VHDL language exploits inherent parallelism and pipelining techniques that are well-suited for the real time character of the radar detection process. The system consumes only little FPGA resources in terms of on-chip memories, registers and less than 10 % of LUTs. The NIOS-II processor is implemented with a simplified version because it is dedicated only to managing the communication between the target detector IP and external memories, with the flash ROM and SSRAM working as an input to and output from the system, respectively, for test and validation. Only 3% of the overall FPGA resources are consumed by the Nios-II processor, whereas the DSP component requires about 35% of the FPGA resources. For timing features, the ACOSD architecture allows detection of each cell under test within a delay of 0.26 μs for each cell and a total delay of 0.26 ms for a data set of 1000 samples, which meets the real-time requirement of a wide range of highresolution radars. REFERENCES [1]

[2]

[3]

[4]

[5] [6]

[7]

[8]

194

F. Gini, A. Farina, and M. Greco, "Selected list of references on radar signal processing," IEEE Trans. Aerospace and Electronic Systems, vol. AES-37, no. 1, pp. 329-360, Jan. 2001 F. Pascal,J.P. Ovarlez, M. Lesturgie C. Y. Chong, "MIMO radar detection in non-Gaussian and heterogeneous clutter," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 1, pp. 115-126, Feb. 2010. D. Orlando, and G. Ricci F. Bandiera, "CFAR detection strategies for distributed targets under conic constraints," IEEE Trans. Signal Processing, vol. 57, no. 9, pp. 3305-3316, Sept. 2009. A. De Maio, S. De Nicola, and A. Farina, "GLRT versus MFLRT for adaptive CFAR radar detection with conic uncertainty," IEEE Signal Processing Letters, vol. 16, no. 8, pp. 707-710, Aug. 2009. M. Barkat, Signal detection and estimation. MA: Artech House, 2005. M. N. Almarshad, M. Barkat, and S. A. Alshebeili, "A Monte Carlo simulation for two novel automatic censoring techniques of radar interfering targets in log-normal clutter," Signal Processing, vol. 88, no. 3, pp. 719-732, Mar. 2008. H. M. Finn and R. S. Johnson, "Adaptive detection mode with threshold control as a function of satially sampled clutter-level estimates," RCA Review, vol. 29, no. 3, pp. 414-464, Sep. 1968. J. T. Rickard and G. M. Dillard, "Adaptive detection algorithms for multiple target situations," IEEE Trans. Aerospace and Electronic Systems, vol. AES-13, no. 4, pp. 338-343, Jul 1977.

[9]

[10]

[11]

[12]

[13]

[14]

[15]

[16]

[17]

[18]

[19]

[20]

[21]

[22]

[23]

[24]

[25]

[26]

[27]

CITEE 2012

H. Rohling, "Radar CFAR thresholding in clutter and multiple target situations," IEEE Trans. Aerospace and Electronic Systems, vol. AES-19, no. 4, pp. 608-621, Jul. 1983. P. P. Gandhi and S. A. Kassam, "Analysis of CFAR processors in nonhomogeneous background," IEEE Trans. Aerospace and Electronic Systems, vol. AES-24, no. 4, pp. 427-445, Jul. 1988. J. B. Billingsley, A. Farina, F. Gini, Greco M. V., and Verrazzani, "Statistical analysis of measured radar ground clutter data," IEEE Trans. Aerospace and Electronic Systems, vol. 35, no. 2, pp. 579592, Apr. 1999. S. Watts, "Cell-averaging CFAR gain in spatially correlated Kdistributed clutter," IEE Proceedings, Radar, Sonar, and Navigation, vol. 143, no. 55, pp. 321-327, Oct. 1996. G. Davidson, H. D. Griffiths, and S. Ablett, "Analysis of highresolution land clutter," IEE Proceedings, Vision, Image and Signal Processing, vol. 151, no. 1, pp. 86-91, Feb. 2004. D. A. Shnidman, "Generalized radar clutter model," IEEE Trans. Aerospace and Electronic Systems, vol. 35, no. 33, pp. 857-865, Jul. 1999. E. Conte, A. De Maio, and C. Galdi, "Statistical analysis of real clutter at different range resolutions," IEEE Trans. Aerospace and Electronic Systems, vol. 40, no. 3, pp. 903-918, Jul. 2004. M. Guida, M. Longo, and M. Lops, "Biparametric CFAR procedures for lognormal clutter," IEEE Trans. Aerospace and Electronic Systems, vol. 29, no. 33, pp. 789-809, Jul. 1993. R. Cumplido, C. Torres, and Lopez. S., "A configurable FPGAbased hardware architecture for adaptive processing of noisy signals for target detection based on Constant False Alarm Rate (CFAR) algorithms," in Global Signal Processing Conference, Santa Clara, CA, 2004, pp. 214-218. M.L.Bencheikh B. Magaz, "An Efficient FPGA Implementation of the OS-CFAR Processor," in International Radar Symposium, Wroclaw, 2008, pp. 1-4. R. Cumplido, C. Torres, and S. Lopez, "On the implementation of an efficient FPGA-based CFAR processor for target detection," in 1st International Conference on Electrical and Electronics Engineering, Acapulco, Mexico, 2004, pp. 214-218. R. Cumplido, C. Uribe and F. Del Campo R. Perez, "A versatile hardware architecture for a constant false alarm rat processor based on alinear insertion sorter," Digital Signal Processing, vol. 20, pp. 1733-1747, 2010. J. K. Ali, and Z. T. Yassen T. R. Saed, "An FPGA-based implementation of CA-CFAR processor," Asian Journal of Information Technology, vol. 6, no. 4, pp. 511-514, 2007. S. A. Alshebeili, and M. Alamar A. M. Alsuwailem, "Design and implementation of a configurable real-time FPGA-based TMCFAR processor for radar target detection," Journal of Active and Passive Electronic Devices, vol. 3, no. 3-4, pp. 241-256, 2008. V. Winkler, J. Detlefsen, U. Siart, J. Buchlert, and M. Wagner, "FPGA-based signal processing of an automotive radar sensor," in First European Radar Conference, Amsterdam, 2004, pp. 245-248. S. A. Alshebeili, M.H. Alhowaish, and S.M Qasim A. M. Alsuwailem, "Field programmable gate array-based design and realization of automatic censored cell averaging constant false alarm rate detector based on ordered data variability," IET Circuits, Devices & Systems, vol. 3, no. 1, pp. 12-21, Feb. 2009. R. Djemal, "A real-time FPGA-based implementation of target detection technique in non-homogenous environement," in Design and Technology of Integrated System in Nanoscale Era (DTIS), Hammamet- Tunisia, 2010, pp. 1-6. R. Djemal and S. Alshebeili I. 
Rosyadi, "Design and Implementation of Real-time Automatic Censoring Systen on Chip for Radar Detection," in World Academic of Science, Engineering and Technology (WASET), Penang - Malaysia, 2009, pp. 318-324. G. M. Blair, "Low cost sorting circuit for VLSI," IEEE Trans. Circit System, vol. 43, no. 6, pp. 515-516, 1996.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

On The Influence of Random Seeds Evaluation Range in Generating a Combination of Backpropagation Neural Networks Linggo Sumarno Faculty of Science and Engineering Sanata Dharma University, Yogyakarta 55282, Indonesia Email: [email protected] Abstract—A classifier combination, which makes use of several backpropagation neural networks can be generated by using the same architecture but different weight sets. Those neural networks were generated by means of random seeds evaluation when they were trained. An experiment has been carried out by generating a number of backpropagation neural networks by means of random seeds evaluation. Based on the experiment it was shown that, in general, if the number of neural networks were getting bigger, the wider range of random seeds evaluation was needed. As a lower limit, a random seeds evaluation range 1 to 20 is needed, if 3 or 4 neural networks were chosen. Keywords: classifiers combination, backpropagation neural network, random seed, evaluation range

I.

INTRODUCTION

Before 1990, in pattern recognition studies, only one classifier was usually used to solve a classification problem. Since early 1990’s, an idea emerged that in a pattern recognition not only one classifier but also several classifiers could be used together. In accordance with it, the idea to use the classifier combination methods has been expanding. The research domain of the classifiers combination methods examine how several classifiers can be applied together in order to obtain the better classification systems. It was shown that the classifier combination methods might improve the recognition performance in difficult pattern recognition problems [1], [2]. The classifier combination methods may also be used to increase the speed of the systems [3], [4] or to reduce the time taken for the design of classification system [5].

II.

THEORY

A. Classifier Combination In order to improve the performance of pattern recognition applications, classifier combination methods have proved to be an effective tool. In terms of classifier combination members, theoretical research by Hansen and Salomon [7] and also Krogh and Vedelsby [8], as well as empirical research by Hashem [9] and Optiz [10] have demonstrated that the best combination is combination of several different classifiers. There were no advantages in combining several identical classifiers. In order to generate several different classifiers, it can be carried out by only based on a base classifier. This generation can be carried out by changing the training sets [11], changing the input features [12], [10], or changing the parameters and architecture of a base classifier [13]. B. Backpropagation Neural Network A neural network is as a computational structure that consists of parallel interconnections of neural processors, which have adaptation ability. Backpropagation neural network is a neural network that commonly used. Fig. 1 shows an example of a backpropagation neural network that used in this research. It consists of C0 input unit, C1 and C2 neurons in the hidden layer 1 and 2 respectively, and also C3 neurons in the output layer.

In the classifier combination methods, there are two issues need to be considered. The first one is how the classifiers are generated, and the second one is how they are combined. Sumarno [6] has addressed the first issue, i.e. how a combination of classifiers which made use of backpropagation neural networks are generated by means of random seeds evaluation. In this paper, it will be discussed further by exploring the influence of random seeds evaluation range in generating a combination of backpropagation neural networks.

Figure. 1. An example of a backpropagation neural network with two hidden layers.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

195

ISSN: 2088-6578

Yogyakarta, 12 July 2012

CITEE 2012

III. 1) Training: Neural network in Fig. 1 needs to be trained first, before it can be used in recognizing the input. Resilient propagation [14] is one of many training algorithms to train it. 2) Initial weights: One thing that carried out during the early step neural network training is assigning the initial weights of the neurons. The choice of the weights will influence the convergence rate or even the convergence failure of the neural network training [16]. a)

If the initial weights are set at the same values, the resulted error values will be constant over all training period. This situation will cause the neural network training trapped in the saturation that resist weights changing. Therefore, it can be judged that a convergence failure has been happened.

b) If the initial weights are set at the different values (however they are inappropriate), it will cause a phenomenon called premature saturation [15]. This phenomenon refers to a situation where the resulted error values almost constant over a training period. This phenomenon cannot be judged as a local minimum, since error value will be reduced after that almost constant period over. This premature saturation phenomenon will slow down the convergence rate. In order to avoid the two things above, in general practice, researchers used initial weights from random numbers that uniformly distributed in the small range [16]. 3) Random Numbers and Random Seeds: One of computer property is deterministic property. Therefore, it cannot generate the real random numbers. Computer uses a pseudorandom generator, in order to mimic the real random number generator. By using this kind of generator, it can be generated a series of exact pseudorandom numbers, as long as the generator is initialized using the same initial number. This initial number called random seed. When a process that make use a series of pseudorandom numbers is executed, it is possible to get an identical track record of the process. The neural network also makes use a series of pseudorandom numbers in the training process. Therefore, it is possible to get an identical track record of the training process, even though the training process is repeated again. On the other hand, by using a different series of pseudorandom numbers in the training process, it is possible to get a different track record of the training process. In the neural network training, a different track record means a different neural network, since it will has a different performance.

RESEARCH METHODOLOGY

A. Materials and Equipments Materials in this research are isolated handwritten characters in binary format. These materials came from data acquisition sheets scanned at resolution of 300 dpi. The data were taken from 100 writers, from various levels of age (10 to 70 years) and sex. From 100 writers, each of them wrote 78 characters, which divided into three groups where each group consists of ‘a’ to ‘z’ characters. Therefore, there were 7,800 isolated character images. Equipments in this research was a set of computer based on processor Intel Core2Duo E7500 (2,93GHz) and 4GB RAM, that equipped with MATLAB software. B. System Development By using materials and equipments above, a system of handwritten character recognition has been developed (see Fig. 2). In that system, the input is an isolated character image in binary format, whereas the output is a character in the text format. 1) Character Normalization: Character normalization in Fig. 2 is carried out in order to correct problems of slant, size, shift, and stroke-width. In this research character normalization from Sumarno [17] was used. Fig. 3 shows some steps in character normalization. In Fig. 3, the input is an isolated handwritten character in binary format, whereas the output is normalized handwritten character in binary format which has 64x64 pixels in size. Sumarno [17] suggested the following parameters. a)

Slant correction was carried out by using evaluation of vertical projection histogram of handwritten character that had been undergone shearing operation by using shearing coefficients {-0.4, -0.35, … , 0.4}.

b) Character scaling was set to 48x48 pixels. c)

The template size was set to 64x64 pixels.

d) Thinning operation used thinning algorithm from Zhang-Suen [18]. e)

Dilation operation used square structure-element 3x3.

2) Feature Extraction: Feature extraction is a process to extract features that exist in each of the character image. In this research feature extraction from Sumarno [17] was used. Fig. 4 shows some steps in feature extraction.

Figure. 2. A character recognition system.

196

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Figure. 4. Feature extraction steps.

Patterns in training and testing Patterns that used in training and testing the neural network were images of isolated handwritten character, which come from 100 persons that further processed into three pattern sets as follows [17]. a) Figure. 3. A character normalization steps.

In Fig. 4, the input is normalized character in binary format that has 64x64 pixels in size. The output is a set of values that represents the input image that has 64 elements. Sumarno [17] suggested the following parameters for the feature extraction steps. a)

Low-pass filtering used 2D Gaussian filter 14x14 with standard deviation 10.

b) Block partition used 8x8 pixels block size. 3) Character Recognition: Character recognition is a process to recognize a character. In order to recognize it, a recognition method based on a backpropagation neural network was used. Backpropagation neural networks in this research are described in detail as follow [17]. a)

Neural network with 2 hidden layers was chosen. It was chosen because after the evaluation of neural network with 1 and 2 hidden layers, the neural network with 2 hidden layers gave the highest recognition rate.

Remarks i) Every group consists of 2,600 patterns. ii) Corrected patterns are original patterns that have undergone slant, size, shift, and stroke width corrections.

iii)

Output layer has 26 neurons that correspond with the number of alphabet characters ‘a’ to ‘z’. Transfer function in this layer is unipolar sigmoid, that correspond with the network output i.e. in the range of 0 to 1.

d) The hidden layers 1 and 2 have 64 and 312 neurons respectively. The number of neurons in each hidden layer were found from an evaluation procedure, where by using 64 and 312 neurons in hidden layer 1 and 2 respectively, it gave the highest recognition rate. e)

Transfer functions in each hidden layer is bipolar sigmoid, that correspond with internal data processing in neural network which in the range -1 to 1.

Remarks a) Sigmoid function is a function that commonly used in a backpropagation neural network [20]. b) Training of a backpropagation neural network can be more effective by using bipolar data processing in the range of -1 to 1 [19].

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

It was assumed that the rotation in input patters is in the range of -100 to 100.

b)

Validation set This set also used in training (in stopping the training process). They consist of 2,600 corrected patterns from group 2.

c)

Test set This set used in testing the trained neural network. They consist of 2,600 corrected patterns from group 3.

b) Input layer has 64 neurons that correspond with the number of feature extraction elements. c)

Training set This set used in training (in updating the neuron’s weights). This set consists of 13,000 patterns as follows. i) There are 2,600 corrected patterns from group 1. ii) There are 5,200 corrected patterns from group 2. They come from corrected patterns from group 2 that rotated -50 and 50. iii) There are 5,200 corrected patterns from group 3. They come from corrected patterns from group 3 that rotated -100 and 100.

Training algorithm The neural network was trained by using resilient backpropagation algorithm [14]. This algorithm is the fastest algorithm for pattern recognition [21]. Stopping criterion in training was made use of validation, in order to avoid under-training or over-training. Pseudorandom and random seed Neural network needs random initial weights in order to start the training. The computer generates these initial weights. However, since the computer cannot generate the real random numbers, therefore the pseudorandom numbers were used as initial weights. In this research, that pseudorandom numbers have the following properties.

197

ISSN: 2088-6578

Yogyakarta, 12 July 2012

a) Distribution : uniform [16]. b) Range : -1 to 1 (since bipolar sigmoid function has limit numbers -1 and 1). c) Repeatability : 21492 (built-in in the MATLAB software). Initial weights in each layer of neural network should be different, in order to remove correlation between layers. Therefore, random seed values that used in generating pseudorandom numbers in each layer should also be different. In this research, random seed values that used in hidden layers 1, 2 and output layer were i (i are integer numbers), i+1 and i+2 respectively [6]. IV.

TESTING AND DISCUSSSIONS

Experiments below were carried out by using a number of backpropagation neural networks, which have the same architecture and also the same training sets. The difference in each experiment only in terms of random seeds evaluation range. In the first experiment, there were 5 neural networks that trained using random seeds evaluation range 1 to 5. Table I shows the results of the first experiment. Based on the Table I(B), if we choose top 3, 4, and 5 neural networks, they will give average recognition rate 87.68, 86.56, and 86.25 respectively. In the second experiment, Table I was expanded by training 10 neural networks using random seeds evaluation range 1 to 10. Table II shows the results of the second experiment. Based on the Table II(B), if top 3, 4, and 5 neural networks are chosen, they will give average recognition rate 87.08, 86.98, and 86.88 respectively. In the third experiment, Table II was expanded by training 15 neural networks using random seeds evaluation range 1 to 15. Table III shows the results of the third experiment. Based on the Table III(B), if top 3, 4, and 5 neural networks are chosen, they will give average recognition rate 87.26, 87.12, and 87.03 respectively. TABLE I.

(A) TRAINING RESULTS OF 5 NEURAL NETWORKS THAT TRAINED USING RANDOM SEEDS EVALUATION RANGE 1 TO 5; (B) SORTING OF (A) BASED ON RECOGNITION RATE IN DESCENDING MANNER

198

Neural network number 1 2 3 4 5

(A) Random seed 1 2 3 4 5

Neural network number 1 2 3 4 5

(B) Recognition rate (%) 87.39 87.15 86.69 85.00 85.00

Recognition rate (%) 87.39 87.15 85.00 86.69 85.00

Random seed 1 2 4 3 5

CITEE 2012

In the fourth to sixth experiments there were 15, 20, 25, and 30 neural networks that trained using different random seeds evaluation ranges. They were trained using random seeds evaluation ranges of 1 to 20, 1 to 25, and 1 to 30 respectively. The results of top 3, 4, and 5 chosen neural networks in terms of average recognition rate are shown in the Table IV. The results of random seed evaluation ranges 1 to 5 and 1 to 10 are shown also in the Table IV. TABLE II.

(A) TRAINING RESULTS OF 10 NEURAL NETWORKS THAT TRAINED USING RANDOM SEEDS EVALUATION RANGE 1 TO 10; (B) SORTING OF (A) BASED ON RECOGNITION RATE IN DESCENDING MANNER. Neural network number

(A) Random seed

Recognition rate (%)

1 2 3 4 5 6 7 8 9 10

1 2 3 4 5 6 7 8 9 10

87.39 87.15 85.00 86.69 85.00 86.38 85.62 86.69 86.31 86.46

Neural network number

(B) Recognition rate (%)

Random seed

1 2 3 4 5 6 7 8 9 10

87.39 87.15 86.69 86.69 86.46 86.38 86.31 85.62 85.00 85.00

1 2 4 8 10 6 9 7 3 5

TABLE III. (A) TRAINING RESULTS OF 15 NEURAL NETWORKS THAT TRAINED USING RANDOM SEEDS EVALUATION RANGE 1 TO 15. Neural network number

(A) Random seed

Recognition rate (%)

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

87.39 87.15 85.00 86.69 85.00 86.38 85.62 86.69 86.31 86.46 82.08 85.62 85.62 86.00 87.23

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter

CITEE 2012

Yogyakarta, 12 July 2012

TABLE IV. AVERAGE RECOGNITION RATE OF CHOSEN NEURAL NETWORKS (IN %)

TABLE III. (CONTINUED) (B) SORTING OF (A) BASED ON RECOGNITION RATE IN DESCENDING MANNER. Neural network number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15

(B) Recognition rate (%) 87.39 87.23 87.15 86.69 86.69 86.46 86.38 86.31 86.00 85.62 85.62 85.62 85.00 85.00 82.08

Random seed 1 15 2 4 8 10 6 9 14 7 12 13 3 5 11

Based on the Table IV, it can be seen that if 3 or 4 neural networks are chosen, at least a random seeds evaluation range 1 to 20 is needed, whereas if 5 to 7 neural networks are chosen, at least a random seeds evaluation range 1 to 25 is needed. These cases are happened because certain random seed values that give high recognition rates are randomly distributed. However, there are limits in its distribution range. For example, for 3 and 4 neural networks, the limit is at 20 random seeds (for random seeds evaluation range 1 to 20), whereas for 5 to 7 neural networks, the limit is at 25 random seeds (for random seeds evaluation range 1 to 25). Based on the Table IV, in general, it can be said that if more neural networks are chosen, the wider range of random seeds evaluation is needed. As a lower limit, a random seeds evaluation range 1 to 20 is needed, if 3 or 4 neural networks are chosen. V.

CONCLUSION

Based on the above discussions, there are two conclusions as follow. a)

A new study about the influence of random seeds evaluation range in choosing a number of backpropagation neural networks as classifier combination members has been carried out. In this case, these neural networks have the same architecture but different weight sets.

b) In general, it can be said that if more neural networks are chosen, the wider range of random seeds evaluation is needed. As a lower limit, a random seeds evaluation range 1 to 20 is needed, if 3 or 4 neural networks are chosen.

ISSN: 2088-6578

Number of chosen neural networks 3 4 5 6 7 8 9 10

Random seeds evaluation range 1 to 5

1 to 10 1 to 15 1 to 20 1 to 25 1 to 30

87.08 86.56 86.25 -

87.08 86.98 86.88

87.26 87.12 87.03

87.28 87.25 87.22

87.28 87.25 87.23

87.28 87.25 87.23

86.79 86.72 86.59 86.41 86.27

86.94 86.86 86.79 86.70 86.59

87.13 87.07 87.01 86.95 86.89

87.21 87.16 87.11 87.06 87.02

87.21 87.16 87.13 87.08 87.05

Remarks: random seed evaluation range 1 to 5, 1 to 10, 1 to 15, 1 to 20, 1 to 25, and 1 to 30 are correspond with the number of evaluated neural networks 5, 10, 15, 20, 20, and 30 respectively.



EEG Analysis Using the Short-Time Fourier Transform and the Continuous Wavelet Transform: A Case Study of the Influence of Al Quran Recitation

Agfianto Eko Putra¹, Putrisia Hendra Ningrum Adiaty²

¹,² Electronics and Instrumentation Study Program, Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada

ABSTRACT - The influence of Al Quran recitation on EEG recordings has been analyzed using the Short-Time Fourier Transform (STFT) and the Continuous Wavelet Transform (CWT). The analyzed data are EEG recordings of 5 men aged 20-30 years. Each subject underwent three stages of treatment: a silent condition, listening to a recitation of the Al Quran, and then a silent condition again. The EEG recording for each subject is 30 minutes long; however, the analyzed data cover the transition between the silent condition and listening to the Al Quran recitation, at about minutes 8 to 10. The data were then analyzed using an STFT based on a Hamming window of length 128 and a CWT based on the Coiflet-5 wavelet. For all subjects the results indicate right-brain activity dominated by the delta wave, and after listening to the Al Quran recitation the power of the delta wave increases. Besides the delta wave, theta and alpha waves also appear. The verses of the Al Quran read during the treatment are Al A'raaf 40-47, Al Baqarah 255-257 and 285-286.


Keywords: EEG, STFT, CWT, Al Qur’an

I. INTRODUCTION

EEG, or electroencephalography, is a picture of the brain at work while a person performs a task. It makes it possible to detect the location and magnitude of the brain activity involved in the various kinds of functions being studied. EEG allows us to observe and record changes in brain activity while a task is being performed. The measurement is made with electrodes that monitor the amount of electrical activity at various points on the scalp, as shown in Fig. 1. EEG measurement is non-invasive and involves no X-rays, radiation or injections. EEG has been used for many years and is considered very safe, and the electrodes produce no sensation; slight redness may appear where the electrodes were placed, but it disappears after a few hours. There may, however, be risks depending on certain medical conditions. In Islam, listening to the recitation of the Al Quran is a form of relaxation and meditation, which relates to the brain-wave (EEG) patterns described above. Newberg and Iversen (2003) state that "Meditation is a complex mental process involving changes in cognition, sensory perception, affect, hormones, and autonomic activity. Meditation has also become widely used in psychological and medical practices for stress management as well as a variety of physical and mental disorders". Research is therefore needed to strengthen the hypothesis that listening to the recitation of the Al Qur'an is not merely an act of worship, but can also bring the listener into relaxation and, at the same time, meditation.


Figure 1. EEG measurement using electrodes

An EEG data-processing system is used to process and analyze the recorded data with the STFT and the CWT. The STFT is used to obtain the signal spectrum (spectrogram), which contains information about which frequencies occur in the signal and when. The CWT provides a picture of specific patterns in the EEG data.

II. SYSTEM DESIGN

A. EEG Analysis Flowchart
The flowchart of the EEG analysis system serves as a guide for implementing the design and development of the associated program. Fig. 2 shows the flowchart of the EEG analysis system based on the STFT and the Continuous Wavelet Transform.


As stated by dr. Yudianta, a neurologist at Dr. Sardjito Hospital, non-medical research can be carried out by analyzing only 8 channels; moreover, 8 channels are already sufficient for analyzing a person's sleep disorders.

Figure 2. Flowchart of the EEG analysis system

The analyzed data were obtained from an EEG recorder with a sampling frequency of 250 Hz (250 samples per second). An FFT is applied to the EEG recordings to inspect their frequency content, and an STFT is applied to see more clearly when each frequency appears. The signal power is then computed so that all the information obtained from the FFT and the STFT can be combined into the desired information. The wavelet transform is applied to reinforce the findings of the FFT and STFT processes.

B. EEG Data
The signals are classified according to their frequency, as explained earlier: frequencies above 12 Hz are beta waves, frequencies of 8-12 Hz are alpha waves, frequencies of 4-8 Hz are theta waves, and frequencies below 4 Hz are delta waves (Jahankhani et al., 2006). The patterns obtained from the STFT and the CWT are then compared so that they reinforce each other. This study uses data from only 8 channels, namely C3, C4, P3, P4, O1, O2, F3 and F4, as shown in Fig. 3, because the international standard that uses 21 channels for EEG recording is essentially a medical standard for determining whether or not a person suffers from epilepsy.
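As an illustration of this processing chain, the following is a minimal Python sketch, under parameters taken from the abstract and the text (250 Hz sampling, Hamming window of length 128), that computes an STFT spectrogram and the power in the delta, theta, alpha and beta bands for one channel; the synthetic input signal, the upper beta edge and the band-power summation rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 250                      # sampling frequency of the EEG recorder (Hz)
BANDS = {"delta": (0.0, 4),   # frequency bands as defined in the text
         "theta": (4, 8),
         "alpha": (8, 12),
         "beta":  (12, 30)}   # upper beta edge of 30 Hz is an assumed cutoff

def band_powers(x, fs=FS):
    """STFT with a Hamming window of length 128, then power summed per band."""
    f, t, Sxx = spectrogram(x, fs=fs, window="hamming", nperseg=128, noverlap=64)
    return {name: Sxx[(f >= lo) & (f < hi)].sum() for name, (lo, hi) in BANDS.items()}

# Synthetic 2-minute "channel" for illustration: a 2 Hz (delta-range) tone plus noise.
t = np.arange(0, 120, 1 / FS)
x = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.randn(t.size)

for name, p in band_powers(x).items():
    print(f"{name:5s} power: {p:.1f}")
```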


Figure 3. Channels used in the EEG recording

During data acquisition the subject lies on a bed with closed eyes. Subjects 1 to 5 were fitted with EEG electrodes at 21 positions on the head. Data were recorded for 30 minutes per subject, as follows:
1. Minutes 0-8: the subject is at rest and receives no treatment;
2. Minutes 8-23: the subject receives the treatment, namely listening to murattal Al-Qur'an, consisting of Surah Al-A'raaf verses 40-47 and Surah Al-Baqarah verses 255-257 and 285-286;
3. Minutes 23-30: the recitation is switched off and the subject again receives no treatment until the EEG electrodes are removed from the subject's head.

The input data used in this analysis have a duration of 2 minutes. According to dr. Yudianta, an EEG recording of only 10 seconds is already sufficient for analysis, provided that the basic rhythm of the recorded EEG is known beforehand; a 2-minute recording is therefore certainly sufficient, since the minimum standard is 10 seconds. Of the 30-minute EEG recording, only minutes 8-10 are examined, because this interval is


the moment of transition of the subject from the silent phase to the phase of listening to the Al Qur'an recitation.

III. RESULTS AND DISCUSSION

Tables I, II, III and IV summarize the power of each wave type on each channel. These tables give, respectively, the grouping and recapitulation of the delta, theta, beta and alpha waves for the five subjects and the eight analyzed channels. The delta wave is the wave with the largest power for all subjects; in other words, for all five subjects the delta wave is the first dominant wave appearing on the eight analyzed channels.

The grouping and recapitulation of the theta wave in Table II shows that subject 3 has the largest theta-wave power, while subject 5 has the lowest theta-wave power compared with the other subjects. For the theta wave, the right hemisphere dominates in 80% of the subjects in the frontal region, 80% in the central region, 60% in the occipital region, and 100% in the parietal region.

Table III shows the grouping and recapitulation of the alpha wave. It shows that subject 3 has the largest alpha power, whereas subjects 1 and 5 have the smallest alpha power compared with the other subjects. For the alpha wave, the right hemisphere dominates in 80% of the subjects in the frontal region, 80% in the central region, 60% in the occipital region, and 80% in the parietal region.

Table IV contains the grouping and recapitulation of the beta wave for the five subjects. It shows that subject 2 has the largest beta-wave power, while subject 5 has the smallest. For the beta wave, the right hemisphere dominates in 100% of the subjects in the frontal region, 100% in the central region, 60% in the occipital region, and 60% in the parietal region.

A. Subject 1
Figures 4 to 13 show the EEG spectrograms and the Continuous Wavelet Transform results of each subject for selected channels. A distinctive feature occurs on channel C4 of subject 1 (Figs. 4 and 5):


the delta wave of this subject already appears at second 0 (weakly), even though at that moment the subject had not yet received any treatment, which means that at second 0 the subject already felt comfortable. The delta wave reappears with greater power at second 40, after the subject receives the treatment. Although the alpha-wave power of subject 1 is very weak compared with its delta wave, this shows that the recitation of Al-Qur'an verses can trigger the appearance of the alpha wave in subject 1.

B. Subject 2
The distinctive feature of subject 2 appears in the spectrogram of channel C4, shown in Figs. 6 and 7. On this channel, subject 2 has the strongest delta-wave dominance compared with the other channels. According to Abdurrochman et al. (2007), C4 lies in the central region of the right hemisphere, which controls movement; this indicates that subject 2 experienced relaxation in the central region. Conversely, in the left parietal region (P3) the resulting power is very small. The spectrogram in Fig. 6 shows that the delta wave at second 30 has greater power than before subject 2 received the treatment at second 20. Although the delta wave had already appeared at second 0, its appearance after the treatment has greater power than before. The alpha wave also appeared at second 0; as seen in Figs. 6 and 7, this early alpha wave has relatively small power, but at second 30, after subject 2 received the treatment, the alpha wave appears with higher power. The strengthening of the delta wave after the treatment at second 30 indicates that subject 2 experienced calm and comfort, while the appearance of the alpha wave indicates that subject 2 experienced relaxation while listening to the recitation of Al-Qur'an verses.

C. Subject 3
For subject 3, the spectrogram in Fig. 8 and the Continuous Wavelet Transform result in Fig. 9 show that when the subject receives the treatment at second 20, the delta wave that appears afterwards has greater power than before. The spectrogram in Fig. 8 also shows that, on both channels, the alpha wave appears after the treatment at second 20.


The spectrogram of channel P3 shows the alpha wave appearing from second 20 until second 120, whereas on channel P4 the alpha wave appears only briefly after second 20 and does not reappear until the end of the recording. The presence of the delta and alpha waves in subject 3 after the treatment indicates that subject 3 experienced relaxation in the parietal region, which serves as the center of sensory input.

D. Subject 4
Figures 10 and 11 show that the delta wave is strengthened at second 20 compared with before. The spectrogram also shows the appearance of the alpha wave after second 20, with a stronger power dominance than before the subject received the treatment. The increase of the delta- and alpha-wave power after the treatment, shown in Fig. 10, indicates that the subject was fully relaxed while listening to the recitation of Al-Qur'an verses.

E. Subject 5
Figures 12 and 13 show that the alpha wave appearing on channel P3 has lower power than the delta wave. Channel O2 lies in the occipital region, which processes vision. The spectrogram of channel O2 in Fig. 12.a shows that the delta-wave power on this channel increases at second 40, although the increase is not stable. The alpha wave also appears at second 40, after the subject receives the treatment. The increase of the delta-wave power, albeit unstable, and the appearance of the alpha wave at second 40 after the treatment, shown in Fig. 12, indicate that the subject was fully relaxed while listening to the murattal Al-Qur'an.

IV. CONCLUSION

The results show that the delta wave dominates in all five subjects listening to the recitation of Al-Qur'an verses; this can be observed at the transition between the silent condition and the recitation. The treatment of listening to the recitation of Al-Qur'an verses increases the delta-wave power of all five subjects. This indicates that the recitation of Al-Qur'an verses can be used as a relaxation therapy that gives calm, peace and comfort to the subjects who listen to it.


The alpha wave appears after the subjects receive the treatment. The recitation of Al-Qur'an verses can also activate the right hemisphere of the five subjects. The order of dominance of the EEG waves appearing during the treatment, on the eight channels of the five subjects, is delta, theta, alpha and then beta, except for subject 4, who shows theta-wave dominance on channel F3. The left parietal region has the lowest delta-wave power for all subjects.

ACKNOWLEDGMENT
The authors thank the Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, for funding this research in 2011, and Dr. Sardjito Hospital Yogyakarta for permission to use the EEG recorder during the recording sessions.

REFERENCES
[1] Abdurrochman, A., Perdana, S., and Andhika, S., 2008, "Murattal Al-Qur'an: Alternatif Terapi Suara Baru", Prosiding Seminar Nasional Sains dan Teknologi II UNILA, 17-18 November 2008, pp. V41-V48.
[2] Anant, K.S. and Dowla, F.U., 1997, "Wavelet Transform Methods for Phase Identification in Three-Component Seismograms", Bulletin of the Seismological Society of America, vol. 87, no. 6, pp. 1598-1612.
[3] Hadikesuma, F., 2009, "Studi Pengaruh Beberapa Aktivitas Harian terhadap Aktivitas Kelistrikan Otak pada Hasil Rekam EEG menggunakan FFT", undergraduate thesis, Electronics and Instrumentation, Department of Physics, FMIPA UGM, Yogyakarta.
[4] Jahankhani, P., Kodogiannis, V. and Revett, K., 2006, "EEG Signal Classification Using Wavelet Feature Extraction and Neural Networks", IEEE John Vincent Atanasoff 2006 International Symposium on Modern Computing (JVA'06).
[5] Newberg, A.B. and Iversen, J., 2003, "The Neural Basis for the Complex Mental Task of Meditation: Neurotransmitter and Neurochemical Considerations", Medical Hypotheses, Elsevier Science, pp. 282-291.
[6] Polikar, R., 1996, The Wavelet Tutorial Part I-IV, http://www.public.iastate.edu/~rpolikar/WAVELETS/WTtutorial.html.
[7] Putra, A.E. and Atmaji, C., 2010, "Analisis Data EEG pada Beberapa Kondisi menggunakan Metode Dekomposisi dan Korelasi berbasis Wavelet (Dekorlet)", Proceedings of the Conference on Information Technology and Electrical Engineering (CITEE) 2010, Faculty of Engineering, Electrical Engineering, UGM, Yogyakarta.
[8] Reza, A.M., 1999, Wavelet Characteristics, What Wavelet Should I Use?, Xilinx Inc.


[9] Sanei, S. and Chambers, J.A., 2007, EEG Signal Processing, John Wiley & Sons Ltd., West Sussex, UK.


[10] Williston, K., 2009, Digital Signal Processing: World Class Designs, Newnes/Elsevier Inc., Oxford, UK.

Table I. Grouping and recapitulation of the delta wave - power on the analyzed channels

Subject      F3        F4        C3        C4        O1        O2        P3        P4
Subject 1    1846.21   2873.84   2418.92   2808.32   2508.58   2628.51   2628.51   2296.93
Subject 2    7967.12   10683.5   6327.34   22961.7   16564.1   16849.8   12468.3   14193.4
Subject 3    8910.1    80660     68188     84118.5   95922.6   103005    88572.7   96642.6
Subject 4    535.1     4565.12   4613.28   7627.87   11473.6   463328    6604.32   5432.03
Subject 5    4811.36   1996.01   4613.3    4650.24   4118.76   5033.4    2142.58   5035.34

Table II. Grouping and recapitulation of the theta wave - power on the analyzed channels

Subject      F3        F4        C3        C4        O1        O2        P3        P4
Subject 1    391.96    783.4     884.29    984.74    643.85    738.53    738.53    796.82
Subject 2    1813.57   1739.1    1996.16   2059.4    2540.52   2287.23   2264.6    2402.67
Subject 3    1994.4    7959.66   7662.19   8209.51   8603.53   8999.59   8302.92   8949.5
Subject 4    1119.08   1325.55   1000.73   723.76    1006.29   1186.09   1094.05   1107.39
Subject 5    246.51    287.49    241.96    385.46    443.61    360.2     249.37    442.4

Table III. Grouping and recapitulation of the alpha wave - power on the analyzed channels

Subject      F3        F4        C3        C4        O1        O2        P3        P4
Subject 1    149.18    276.8     246.38    378.75    240.82    223.18    223.18    237.86
Subject 2    987.91    971.89    962.48    739.86    958.09    996.77    986.29    1205.66
Subject 3    1165.78   1363.87   1083.7    1303.87   1386.12   1401.76   1499.88   1312.6
Subject 4    357.24    402.76    476.5     611.19    413.1     525.82    415.87    695.72
Subject 5    142.79    287.49    150.07    360.59    397.56    301.17    192.45    294.74

Table IV. Grouping and recapitulation of the beta wave - power on the analyzed channels

Subject      F3        F4        C3        C4        O1        O2        P3        P4
Subject 1    88.21     144.37    151.4     181.78    121.72    164.04    164.04    129.87
Subject 2    1066.58   1135.48   876.85    1158.25   1168.6    1370.67   1277.96   1481.3
Subject 3    293.51    556.43    544.02    593.44    585.08    600.71    589.13    575.37
Subject 4    281.43    384.86    391.87    458.89    525.66    512.25    477.16    702.29
Subject 5    69.43     94.08     69.16     124.45    118.46    100.46    78.44     100.38
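The right-hemisphere dominance percentages quoted in the discussion can be reproduced from these tables by pairwise comparison of the left and right channels of each region. Below is a small sketch of that count, using the theta-wave frontal values (F3 vs. F4) from Table II; the pairing of channels into regions follows the text, while the dictionary layout is only an illustrative choice.

```python
# Theta-wave power on the frontal channels, taken from Table II (F3 = left, F4 = right).
theta_frontal = {
    "Subject 1": (391.96, 783.4),
    "Subject 2": (1813.57, 1739.1),
    "Subject 3": (1994.4, 7959.66),
    "Subject 4": (1119.08, 1325.55),
    "Subject 5": (246.51, 287.49),
}

# A region counts as "right-dominant" for a subject when the right channel
# carries more power than the left one.
right_dominant = sum(right > left for left, right in theta_frontal.values())
share = 100 * right_dominant / len(theta_frontal)
print(f"theta, frontal region: right hemisphere dominant in {share:.0f}% of subjects")
# -> 80%, matching the figure quoted in the discussion.
```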


Figure 4. Spectrogram of channel C4, subject 1

Figure 5. Continuous Wavelet Transform result for channel C4, subject 1 (absolute values of the Ca,b coefficients; scales a versus time b)

Figure 6. Spectrogram of channel C4, subject 2


Figure 7. Continuous Wavelet Transform result for channel C4, subject 2 (absolute values of the Ca,b coefficients; scales a versus time b)

Figure 8. Spectrograms of channel P3 (a) and channel P4 (b), subject 3


Figure 9. Continuous Wavelet Transform result for channel P3, subject 3 (absolute values of the Ca,b coefficients; scales a versus time b)

Figure 10. Spectrograms of channel O2 (a) and channel P3 (b), subject 4


Figure 11. Continuous Wavelet Transform result for channel O2, subject 4 (absolute values of the Ca,b coefficients; scales a versus time b)

Figure 12. Spectrograms of channel O2 (a) and channel P3 (b), subject 5


Figure 13. Continuous Wavelet Transform result for channel O2, subject 5 (absolute values of the Ca,b coefficients; scales a versus time b)


Interference Mitigation for Self-Organized LTE-Femtocells Network

Nancy Diaa El-Din, Karim G. Seddik, Ibrahim A. Ghaleb, and Essam A. Sourour
Department of Electrical Engineering, Alexandria University, Alexandria, 21544 Egypt
E-mails: [email protected], [email protected], [email protected], [email protected]

Abstract—A femtocell is a low-cost, low-power base station installed by the end user for better indoor coverage. Since the number and locations of femtocells are unknown and inconsistent, the operator network cannot efficiently handle interference control. Consequently, femtocells have to be plug-and-play devices that exhibit a high degree of self-organization for autonomous interference mitigation. In this paper, we propose a method to achieve this self-organization, in which each femtocell uses a unique identification sequence to declare its occupied frequency resources. Our method allows the femtocells in the network to identify their active neighbor femtocells and their allocated frequency resources. This can be used to avoid interference between neighboring femtocells and provides a means for self-organization in the network.

Keywords: femtocell; interference; self-organization.

I. INTRODUCTION

Wireless capacity has doubled every 30 months since 1950. Much of this gain has come from better coding, better modulation, and the use of more frequencies, but the most effective factor has been the use of smaller cell sizes such as microcells [1]. This solution is effective: it shortens the distance between transmitter and receiver, leading to higher throughput and a better chance for frequency reuse. On the other hand, this approach is expensive for operators, as they have to install more sites, which unfortunately increases the maintenance cost. As a result, femtocells appear to be a more attractive solution to this problem. The femtocell [2] is defined as a low-cost, low-power base station installed by the end user at his or her home for indoor coverage. The home Internet connection, such as a digital subscriber line (DSL), is used to connect femtocells to the operator network. Femtocells are cost effective since they are paid for and maintained by the end users. By using a femtocell, the user enjoys good indoor coverage due to the proximity of transmitter and receiver, higher throughput and battery saving. However, the implementation of femtocells in the real world faces several challenges that threaten the expected benefits. Interference between the macrocell and femtocell networks, as well as between femtocells, is the most critical problem facing the deployment of femtocells in the limited licensed spectrum. Interference could prevent the realization of practical femtocells and degrade their benefits. Since the number and locations of femtocells are unknown, the operator cannot efficiently handle interference control using traditional network planning. Self-organizing deployment is therefore essential: self-organization allows femtocells to integrate themselves


into the network of the operator, learn about their environment (neighboring cells, interference) and tune their parameters (power, frequency) accordingly. One of the self-organization methods is cooperation between autonomous femtocells to mitigate femto-to-femto interference.

This paper focuses on cooperation among femtocells to reduce the interference between them. We present and analyze a method that enables a femtocell to immediately avoid using the occupied spectrum of nearby femtocells. Each femtocell has a unique identification sequence. In this paper an interference-avoidance framework between nearby femtocells is introduced using these unique femtocell identification sequences. The frequency combination of the identification sequence is used to indicate the occupied resource blocks of the femtocell. There are particular non-overlapping Resource Block (RB) combinations available and known for the femtocells' use. Each femtocell selects one of these known combinations of RBs to transmit its identification sequence, and each combination maps to a unique set of RBs that will be used by this femtocell in the next subframe. When a femtocell needs to transmit, it has to check the identification sequence combinations of other femtocells to deduce their occupied RBs. The advantage of this approach, compared with the demodulation/decoding of all control channels of all neighbors used in LTE, is that the femtocell obtains the information about the occupied RBs much faster (directly over the air); the upper layers therefore have time to allocate resources for the next subframe. This method allows a femtocell to adapt its selection of RBs and quickly inform other femtocells, instead of using a fixed access assignment for femtocells. Note that the identification sequence of a femtocell sent in the present subframe is mapped to the RBs that will be occupied by this femtocell in the next subframe. In this paper we study the performance of this system in several fading channels in terms of the probability of detection with 1, 2 or 4 receiving antennas.

The rest of this paper is organized as follows. Section II shows the deployment of femtocells in the licensed spectrum bands. Section III illustrates the proposed self-organized technique to avoid femtocell interference, the system model, and the design of the femtocell identification sequence. Section IV provides the simulation results for evaluating the proposed approach.

FEMTOCELLS DEPLOYMENT

In this paper the femtocells network is based on the Long Term Evolution (LTE) [3] femtocells, also known as Home


eNodeB (HeNB) in 3GPP [4]. The LTE femtocell uses Orthogonal Frequency Division Multiple Access (OFDMA) in the downlink as the multiple-access technique, to achieve good system performance in the dispersive mobile radio channel. LTE uses the Resource Block (RB) as the smallest time-frequency resource that can be allocated to a user (12 subcarriers contiguous in frequency and 7 OFDM symbols in time) [5]. For example, a channel bandwidth of 10 MHz corresponds to 50 RBs. The trend in LTE is a Frequency Reuse Factor (FRF) of 1 for maximal resource use. However, if different users in adjacent cells use the same set of subcarriers, a Co-Channel Interference (CCI) problem occurs, especially for cell-edge users, so an appropriate inter-cell interference coordination technique should be applied to enhance the system capacity. In this paper, to avoid femto-macro CCI, partial co-channel deployment is applied, which limits the frequencies shared between the macrocell and the femtocells. To mitigate interference with the LTE macrocell, part of the Fractional Frequency Reuse (FFR) idea in [6] is applied. The macrocell divides the total cellular licensed spectrum into three equal parts (A, B, C), as shown in Fig. 1. The macrocell itself is divided into three equal sectors, and a macro-sector uses one of the three parts of the spectrum (A, B or C). The femtocells within that sector can use only the RBs in the remaining part of the spectrum; as a result, for femtocells in a certain sector only two thirds of the system bandwidth is available. Note that the macrocell still uses the whole available spectrum, but with some coordination. To mitigate interference among femtocells in the same sector, a self-organized technique for femtocell base stations is proposed in the following section.

Figure 1: The frequency allocation of macrocell and femtocells to avoid interference

III. FEMTOCELL BASE STATIONS COOPERATION TECHNIQUE

In LTE, the control region typically occupies up to 3 OFDM symbols. Femtocells are supposed to serve a small number of users (4 or 5), so a femtocell will not need all of the LTE control symbols. Femtocell base stations can use one of these control symbols to transmit the information they need for self-organization. Once a femtocell is connected and authenticated by the core network, it downloads the ID numbers of the neighbor femtocells; each ID corresponds to a unique identification sequence used by a femtocell. In the proposed approach the neighbor femtocells self-organize such that they do not occupy the same RBs in the same downlink and uplink subframes. Each femtocell has to signal to other femtocells, directly over the air, the RBs that it occupies, and the other femtocells then avoid using these occupied RBs, preventing co-channel interference. The proposed method by which a femtocell signals its occupied RBs to other femtocells is as follows: the femtocell sends its unique identification sequence in a certain frequency combination (certain RBs) in the control OFDM symbol. Each combination is mapped to a unique RB allocation to be used by this femtocell. We assume that the RB allocation is semi-persistent, i.e., it does not change for several subframes. As shown in Fig. 2, for example, femtocell 1 sends its identification sequence (seq1) in the control OFDM symbol in RBs 0, 4, 8 and 12. This is mapped to allocating RBs 0, 1, 2 and 3 in the downlink subframe. Femtocell 1 chooses this frequency combination in the control symbol because it indicates its occupied RBs in the next downlink subframe. The surrounding femtocells avoid using these RBs in the next subframe, since they are detected as occupied by the neighboring femtocell 1. This paper is only interested in the detection of the selected frequency combination on which a femtocell transmits its identification sequence; from the frequency combination of the identification sequence, the neighbors deduce the frequency bands used by the femtocell of the detected sequence.


Figure 2: The identification sequence combination mapping

Note that each RB assignment for a femtocell has a specific identification-sequence combination in the frequency domain, which identifies this assignment. The resource blocks of the identification sequence are uniformly distributed in the frequency domain in a non-adjacent manner; this distributed combination is used to achieve some frequency diversity in detecting the sequence. The sequence combinations are assumed to be non-overlapping.
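To make the combination-to-allocation mapping concrete, here is a minimal sketch of one possible lookup table in the spirit of the Fig. 2 example; only the first entry comes from the paper, while the remaining table rows and the function names are illustrative assumptions.

```python
# Hypothetical, non-overlapping frequency combinations used to carry the
# identification sequence in the control symbol, and the data RBs they announce.
COMBINATION_TABLE = {
    (0, 4, 8, 12): (0, 1, 2, 3),      # example from Fig. 2
    (1, 5, 9, 13): (4, 5, 6, 7),      # assumed
    (2, 6, 10, 14): (8, 9, 10, 11),   # assumed
    (3, 7, 11, 15): (12, 13, 14, 15), # assumed
}

def occupied_rbs(detected_combination):
    """RBs a neighbor will occupy in the next subframe, given the frequency
    combination on which its identification sequence was detected."""
    return COMBINATION_TABLE[tuple(detected_combination)]

def free_rbs(total_rbs, detected_combinations):
    """RBs a femtocell may still use after listening to its neighbors."""
    busy = {rb for c in detected_combinations for rb in occupied_rbs(c)}
    return [rb for rb in range(total_rbs) if rb not in busy]

# A femtocell that detected seq1 on RBs (0, 4, 8, 12) avoids data RBs 0-3.
print(free_rbs(16, [(0, 4, 8, 12)]))   # -> [4, 5, ..., 15]
```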


In the next subsection we present the system model as well as the femtocell identification sequence design approach. Later, we consider the detection of the identification sequence.

A. System Model
Orthogonal Frequency Division Multiplexing (OFDM) has been adopted in cellular mobile systems due to its low receiver complexity and its robustness to the ISI caused by frequency-selective fading channels. Orthogonal Frequency Division Multiple Access (OFDMA) is similar to OFDM; however, in OFDMA, instead of one user occupying all subcarriers at any time, the subcarriers are distributed to different users at the same time, so that multiple users can receive data simultaneously. The LTE standard adopts OFDMA as its radio access technique. The discrete-time, normalized signal of the control OFDM symbol in which the identification sequence is transmitted from femtocell m can be written as

x_m(n) = \frac{1}{\sqrt{N}} \sum_{k \in S_m} X_m(k)\, e^{j 2\pi k n / N}, \qquad -N_{cp} \le n \le N,   (1)

where X_m is the identification sequence and S_m is the set of subcarrier indices that the identification sequence of the m-th femtocell occupies. N_{cp} denotes the length of the cyclic prefix (CP); to avoid the Inter-Symbol Interference (ISI) caused by multipath propagation, N_{cp} must be longer than the channel impulse response. The time-domain aggregate received signal is the superposition of the signals from all femtocells that transmit simultaneously, each of which propagates through an independent multipath channel. The aggregate discrete-time received signal can be expressed as

y(n) = \sum_{m=1}^{N_u} y_m(n) + w(n),   (2)

where w(n) denotes the Additive White Gaussian Noise (AWGN), N_u is the number of femtocells that can transmit simultaneously, and

y_m(n) = \sum_{l=1}^{L} \alpha_{l,m}\, x_m(n - \tau_{l,m})   (3)

is the received signal from the m-th femtocell. This signal passes through a multipath fading channel with L paths; \alpha_{l,m} denotes the complex amplitude of the l-th path for femtocell m and is modeled as a zero-mean complex Gaussian random variable with variance E[|\alpha_{l,m}|^2] = \sigma_{l,m}^2. The channel amplitudes are normalized such that \sum_{l=1}^{L} \sigma_{l,m}^2 = 1, and \tau_{l,m} is the discrete delay of the l-th path for femtocell m.

At the receiver, after removing the CP and applying the FFT, the frequency-domain signal at the k-th subcarrier of femtocell m is given by

Y_m(k) = X_m(k) H_m(k) + W(k),   (4)

where H_m(k) is the complex Gaussian fading channel gain at the k-th subcarrier of the m-th femtocell's signal, and W(k) is the AWGN at subcarrier k with variance N_0/2 per dimension.

B. Designing the Femtocell Identification Sequence
For the femtocell identification, a [+1, -1] sequence is generated using the Secondary Synchronization Signal (SSS) of the LTE system [3]. The SSS sequences are based on maximal-length sequences, known as M-sequences; they can be created by cycling through every possible state of a shift register of length n, which results in a sequence of length 2^n - 1. Each SSS sequence is constructed by interleaving two BPSK-modulated sequences in the frequency domain. The advantage of the SSS sequences is that the sequences used for the femtocell identifications can be selected to have zero or negative cross-correlation after the differential correlation detection used for detecting the distinct sequences. The m-th femtocell identification sequence X_m(k) is a unique [+1, -1] sequence generated for the m-th femtocell. This sequence is transmitted, in the control symbol, over the subcarriers of a certain frequency combination; the frequency combination points to the RBs that will be occupied by the m-th femtocell in the next subframe. In the frequency domain, the received signal can be expressed as

Y(k) = \sum_{m=1}^{N_u} H_m(k)\, X_m(k)\, I_i(k) + W(k).   (5)

The indicator I_i(k) is one if k ∈ S_i and zero otherwise, where S_i is the set of subcarriers in the i-th frequency combination being used by the m-th femtocell to transmit its identification sequence, and W(k) is the AWGN in the frequency domain. The identification sequence and its frequency combination are detected in the frequency domain using differential correlation with the stored identification sequences of all neighbors. We use differential correlation at the receiving femtocell to avoid the need for channel estimation (since no pilots are transmitted by the femtocells in the control region); differential correlation eliminates the need for channel estimation and hence simplifies the identification of the active femtocells. The differential correlation expressed in (6) can be used because we have stored information about the received signal, namely the neighbors' identification sequences. The differential correlation is applied as follows:

F_j = \sum_{k \in \tilde{S}_{m,j}} Y(k)\, Y^*(k+1)\, X^*_{m,j}(k)\, X_{m,j}(k+1).   (6)


Equation (6) represents the differential correlation of the received signal Y(k) with one of the stored sequences (the m-th femtocell sequence) allocated to one of the available frequency combinations (combination j). X_{m,j}(k) is the m-th femtocell identification sequence mapped onto the j-th frequency combination, k is the subcarrier index, (k+1) is the index of the next subcarrier, and Y(k) is as defined in (5). In a femtocell system the channel delay spread is small; as a result, the H(k) values of the same femtocell (same m) are highly correlated at adjacent subcarriers (k and k+1). The differential correlation detector exploits this to eliminate the effect of the channel fading coefficients by multiplying Y(k) by Y*(k+1). F_j tends to have its maximum value when the m-th femtocell's stored identification sequence matches the received identification sequence in the intended frequency combination j. \tilde{S}_{m,j} is the subset of S_{m,j} that excludes the right-most index. The need for channel estimation is avoided because we rely on the assumption that the channel attenuations on adjacent subcarriers are the same; differential correlation thus eliminates the need for channel estimation.
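A minimal numerical sketch of this differential correlation detection is given below, under simplifying assumptions (random ±1 identification sequences instead of LTE SSS sequences, a smooth random channel, and a single transmitting neighbor); the helper names, subcarrier counts and combination layout are illustrative, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

RB = 12                                  # subcarriers per resource block
N_RB = 16                                # RBs in the control symbol (assumed)
N_SC = RB * N_RB
# Combination j = four non-adjacent RBs {j, j+4, j+8, j+12}, in the spirit of Fig. 2.
COMBINATIONS = [np.concatenate([np.arange(r * RB, (r + 1) * RB)
                                for r in (j, j + 4, j + 8, j + 12)])
                for j in range(4)]
N_FEMTO = 8
SEQS = rng.choice([-1.0, 1.0], size=(N_FEMTO, N_SC))   # stand-in for SSS-based sequences

def transmit(femto, comb_idx, snr_db=0.0):
    """Frequency-domain control symbol of one femtocell on its chosen combination."""
    X = np.zeros(N_SC)
    idx = COMBINATIONS[comb_idx]
    X[idx] = SEQS[femto, idx]
    # Smooth random channel: adjacent subcarriers stay highly correlated (small delay spread).
    h = np.convolve(rng.normal(size=N_SC) + 1j * rng.normal(size=N_SC),
                    np.ones(16) / 4, mode="same")
    sigma = 10 ** (-snr_db / 20)
    noise = sigma * (rng.normal(size=N_SC) + 1j * rng.normal(size=N_SC)) / np.sqrt(2)
    return h * X + noise

def detect(Y, femto):
    """Differential correlation (6) for every combination, then argmax of the real part (7)."""
    scores = []
    for idx in COMBINATIONS:
        k = idx[np.isin(idx + 1, idx)]   # keep indices whose right neighbor is also in the set
        F = np.sum(Y[k] * np.conj(Y[k + 1]) * SEQS[femto, k] * SEQS[femto, k + 1])
        scores.append(F.real)
    return int(np.argmax(scores))

Y = transmit(femto=3, comb_idx=2)
print("detected combination:", detect(Y, femto=3))     # expected: 2
```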


TABLE 1: SIMULATION PARAMETERS

Parameters                                                   Values
System bandwidth (MHz)                                       20, 10, 3
Total number of resource blocks (according to BW)            100, 50, 15
Number of subcarriers per resource block                     12
Subcarrier spacing (kHz)                                      15
FFT size (for all BWs)                                        2048
CP size                                                       144
Sampling frequency                                            30.72 MHz
Femtocell identification sequence assigned resource blocks    4
Number of receiving antennas (diversity gain)                 1, 2, 4

The femtocell identification (FID) sequence and its frequency combination i can be detected as follows:

\hat{FID}_i = \arg\max_j \; \mathrm{real}(F_j),   (7)

where \hat{FID}_i is the estimated femtocell identification sequence together with its i-th frequency combination. In the case of N_r receiving antennas, equations (5) and (6) are repeated for every receiving antenna. The receiving antennas should be separated in space, so the number of independent paths between the transmitter and the receiver is N_r; as a result, F_j in (7) is replaced by (F_{1,j} + F_{2,j} + ... + F_{N_r,j}).

To estimate the transmitting femtocell identification sequence and its frequency combination using the differential detection in (6), the receiving femtocell has to correlate the received sequence with all of the stored identification sequences over all the possible frequency combinations. Hence, as the number of neighbor femtocells increases, the complexity of the system increases.

IV. SIMULATION RESULTS

In this section the performance of the proposed self-organized femtocells is evaluated in fading channels through simulation. We consider spatial diversity with different numbers of receiving antennas, and study the effects of different system bandwidths and various channel models. Table 1 summarizes the simulation parameters. The simulation results show the error probability in detecting the transmitted neighbor Femtocell Identification (FID) sequence. Here an error occurs if any sequence is detected in the wrong RB combination; this causes the femtocell to make an error in deciding the neighbor femtocells' occupied RBs in the uplink/downlink subframes.

Fig. 3 shows the performance of the proposed technique for the LTE residential fading channel [5] at very low signal-to-noise ratio Es/No, where Es/No is the signal-to-noise ratio between the transmitting femtocell and the detecting femtocell. The figure shows the effect of the number of receiving antennas with 1 and 3 simultaneously transmitting femtocells. To enhance the probability of detecting the identification sequence, space diversity is applied at the detecting femtocell by employing more than one receiving antenna. As shown in the figure, the use of space diversity enhances the performance significantly; the results show that using one antenna in an LTE femtocell is not enough, but LTE conformance requires 2 or more receiving antennas anyway. As the number of receiving antennas increases, the probability of detection increases: a high diversity gain leads to higher immunity to the fading channel and increases the detection ability. On the other hand, the detection probability is slightly worse when the number of simultaneously transmitting femtocells increases, as in the case of 3 simultaneously transmitting neighbors.

In Fig. 4 the effect of changing the available system bandwidth (20, 10 and 3 MHz) is illustrated for 2 simultaneously transmitting neighbors and 4 receiving antennas. As the available bandwidth decreases, the RBs available to the femtocells are reduced; as a result, the detection error probability increases, because the frequency diversity needed to guarantee variability of the channel conditions over the RBs of the identification sequence decreases.

In Fig. 5 the performance of the proposed approach is evaluated in different indoor environments: the LTE residential channel model [5], the WINNER II indoor NLOS channel model [7], the IEEE 802.11n indoor WLAN channel model [8] and the IEEE 802.16m indoor hotspot channel model [9]. The number of simultaneously transmitting femtocells is 3 and 2 receiving antennas are used; the system BW is 20 MHz. As can be seen, the proposed technique performs well in all of the environments; however, the channel model characterized by a high r.m.s. delay spread gives the best performance, since it provides high channel selectivity.


Figure 3: the performance of the proposed approach in fading channel using space diversity (FID detection error probability versus Es/No for 1, 2 and 4 receiving antennas, with 1 and 3 transmitting femtocells)


V. CONCLUSION

Femtocells are an attractive solution to the problems of badly needed wireless capacity and poor indoor coverage, but interference between femtocells can prevent the practical implementation of this solution in the real world. In the proposed approach the neighboring femtocells cooperate by declaring to each other, directly over the air, the frequency bands they occupy, so that nearby femtocells do not use their occupied RBs. They signal these messages by means of their unique identification sequences and the RB combinations on which these sequences are sent in the frequency domain. The performance of this approach is studied in different indoor environments and for different system bandwidths, and the effect of applying spatial diversity with different numbers of receiving antennas is illustrated.


Figure 4: the performance of the proposed approach in different BW

Figure 5: the performance of the proposed approach in different channel models (FID detection error probability versus Es/No for the LTE residential, WINNER indoor, WLAN indoor and IEEE 802.16m indoor hotspot channel models)

REFERENCES
[1] "The Best That LTE Can Be", Femto Forum white paper, May 2010, available from www.femtoforum.org.
[2] S. R. Saunders, S. Carlaw, A. Giustina, R. R. Bhat, V. S. Rao and R. Sieberg, "Femtocells: Opportunities and Challenges for Business and Technology", John Wiley and Sons Ltd, ISBN: 978-0-470-74816-9, 2009.
[3] S. Sesia, I. Toufik and M. Baker, "LTE - The UMTS Long Term Evolution: From Theory to Practice", John Wiley & Sons, Ltd, ISBN: 978-0-470-69716-0, 2009.
[4] 3GPP, "Home Node B Study Item Technical Report", 3rd Generation Partnership Project - Technical Specification Group Radio Access Networks, Valbonne, France, Technical Report 8.2.0, Sep. 2008.
[5] 3GPP TS 36.104 v8.1.0, "User Equipment (UE) radio transmission and reception", 2008-03.
[6] M. Z. Chowdhury, Y. M. Jang and Z. J. Haas, "Interference Mitigation Using Dynamic Frequency Re-use for Dense Femtocell Network Architectures", Second International Conference on Ubiquitous and Future Networks (ICUFN), 2010.
[7] IST-WINNER II Deliverable D1.1.1 v1.0, "WINNER II Interim Channel Models", December 2006.
[8] IEEE P802.11 Wireless LANs, "TGn Channel Models", IEEE 802.11-03/940r4, 2004-05-10.
[9] IEEE 802.16m Evaluation Methodology Document (EMD), IEEE 802.16m-08/004r.


DUAL-BAND ANTENNA NOTCHED CHARACTERISTIC With CO-PLANAR WAVEGUIDE FED Rastanto Hadinegoro, Yuli Kurnia Ningsih and Henry Chandra

Department of Electrical Engineering, Faculty of Industrial Technology, Universitas Trisakti
Jl. Kyai Tapa No.1 Grogol, Jakarta 11440, Indonesia
E-mail: [email protected], [email protected] and [email protected]

Abstract - In this paper, a dual-band microstrip antenna is proposed. The antenna is printed on an FR4 substrate and can operate in the GSM and 4G-LTE bands at the 1.8 GHz and 2.1 GHz frequencies. To achieve dual-band operation, the proposed antenna has a notch structure with a coplanar waveguide (CPW) feed. From the simulation results, the impedance bandwidths are 159 MHz and 164 MHz for VSWR < 2.

Keywords : dual-band, CPW, notch structure

I. INTRODUCTION
In recent years, 3G technology has been widely applied and has reached many significant achievements; on the other hand, 4G mobile communication is being standardized and has received much attention in ICT. Therefore, designing new compact antennas with good characteristics that are easy to manufacture and low cost for 3G and 4G mobile terminals is quite necessary. With many advantages, such as light weight, low fabrication cost, planar configuration, and the capability to integrate with microwave integrated circuits, the microstrip antenna (MSA) is one of the most commonly used antenna types in 3G and 4G applications. Nowadays one antenna can work in multiple frequency bands; a multi-band antenna is more efficient because it can decrease dropped calls and provide a larger communication coverage area [1-5]. In recent years, multi-frequency operation using microstrip antennas has been reported. Multilayered aperture-coupled microstrip antennas have been proposed in [6], [7]. A dual-feed triple-band microstrip antenna for GSM and GPS operation is reported in [8]. A triple-layer patch antenna capable of triple-frequency operation is proposed in [9]. However, the disadvantages of these designs are their large dimensions and bulk. Multi-frequency MSA operation has been achieved by several techniques: orthogonal modes, multi-patch antennas and reactive loading. One of the most widely known multi-band techniques for MSAs is reactive loading; indeed, it seems to be the most attractive option due to its design simplicity. In this paper, a dual-band antenna reactively loaded by the notch technique is proposed. The notch technique consists of cutting a small portion of the patch to achieve the expected frequencies. Moreover, it is a simpler design using a double-layer


substrate. This leads to cost reduction and design simplification. The optimum dual-band antenna characteristics are determined by combining the notch structure and the Co-planar Waveguide (CPW).

II. ANTENNA DESIGN
The proposed antenna is designed for Mobile Communication (GSM), Enhanced Data rates for GSM Evolution (EDGE), Wideband Code-Division Multiple Access (W-CDMA), and High Speed Packet Access (HSPA). The resonant frequencies of this antenna are 1.8 GHz and 2.1 GHz. The antenna uses CPW technology. A CPW-fed antenna not only performs better with respect to bandwidth and radiation pattern, but is also easily manufactured, which has increased its importance [10][11]. CPW feeds also allow easy mounting and integration with other microwave integrated circuits and RF devices. The geometry and parameters of the notch antenna with CPW are shown in Fig. 1. The antenna is etched on an FR4 substrate with relative permittivity \varepsilon_r = 4.3, thickness h = 1.6 mm and loss tangent \tan\delta = 0.0265. The geometry and configuration of the antenna for dual-band operation can be expressed following [1]. The triangular microstrip patch is designed using equations (1), (2), (3) and (4):

f_r = \frac{c\,k_{mn}}{2\pi\sqrt{\varepsilon_r}} = \frac{2c}{3a\sqrt{\varepsilon_r}}\sqrt{m^2 + mn + n^2}   (1)

For the TM10 mode, the resonant frequency is given by

f_{10} = \frac{2c}{3a_e\sqrt{\varepsilon_r}}   (2)

where a_e is

a_e = a\left[1 + 2.199\frac{h}{a} - 12.853\frac{h}{a\sqrt{\varepsilon_r}} + 16.436\frac{h}{a\,\varepsilon_r} + 6.182\left(\frac{h}{a}\right)^2 - 9.802\frac{1}{\sqrt{\varepsilon_r}}\left(\frac{h}{a}\right)^2\right]   (3)

and a is obtained by substituting

a = \frac{2c}{3 f_r \sqrt{\varepsilon_r}}   (4)
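As a quick numerical check of the design equations (1)-(4) as reconstructed above, the following sketch computes the nominal side length a and the effective side length a_e for the 1.8 GHz design on the stated FR4 substrate; the function names are only illustrative, and the a_e formula follows the reconstruction given in (3).

```python
from math import sqrt

C = 3e8           # speed of light (m/s)
EPS_R = 4.3       # relative permittivity of the FR4 substrate
H = 1.6e-3        # substrate thickness (m)

def side_length(fr):
    """Nominal side length a of the equilateral triangular patch, eq. (4)."""
    return 2 * C / (3 * fr * sqrt(EPS_R))

def effective_side_length(a, h=H, eps_r=EPS_R):
    """Effective side length a_e, eq. (3) as reconstructed above."""
    return a * (1 + 2.199 * h / a
                  - 12.853 * h / (a * sqrt(eps_r))
                  + 16.436 * h / (a * eps_r)
                  + 6.182 * (h / a) ** 2
                  - 9.802 * (h / a) ** 2 / sqrt(eps_r))

def f10(a_e, eps_r=EPS_R):
    """TM10 resonant frequency for a given effective side length, eq. (2)."""
    return 2 * C / (3 * a_e * sqrt(eps_r))

a = side_length(1.8e9)
a_e = effective_side_length(a)
print(f"a   = {a * 1e3:.1f} mm")
print(f"a_e = {a_e * 1e3:.1f} mm")
print(f"f10 with a_e = {f10(a_e) / 1e9:.2f} GHz")
```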


The size of the antenna is W x L = 60 mm x 60 mm. The antenna is excited by a 50 Ω microstrip line. The width of the 50 Ω line is 6.7 mm, the gap of the CPW line is g = 0.5 mm, and the ground-plane dimension is 6.5 mm.


Figure 1. Geometry and dimensions of the proposed antenna

III. SIMULATION RESULTS
The design started by determining a single-frequency antenna at 1.8 GHz. Simulation tools based on the Method of Moments were used to calculate the S-parameters. After the final optimization, the reflection factor versus frequency is depicted in Fig. 2; the reflection coefficient reaches below -26.91 dB at 1.8 GHz.

Figure 2. Iteration results for the single-frequency antenna (return loss versus frequency)

After a single frequency with the best return loss and VSWR has been obtained, the next simulation step is the dual-band design, obtained by assigning a notch pair to the triangular patch antenna. Tables 1 and 2 show the notch dimensions, and Table 3 summarizes the iteration results. Fig. 3 shows the simulated return loss when the port is fed: it is -20.17 dB at 1.8 GHz and -21.71 dB at 2.1 GHz.

Table 1. Iteration of width x

Iteration   x1 (mm)   x2 (mm)   x3 (mm)   x4 (mm)   x5 (mm)
1st         21        1         14        3         20
2nd         20        2         13        2         18
3rd         18        4         11        2         17
4th         16        6         9         1         16

Table 2. Iteration of width y

Iteration   y1 (mm)   y2 (mm)   y3 (mm)   y4 (mm)
1st         5         5         5         10
2nd         5         5         5         8
3rd         10        10        10        6
4th         15        15        15        3

Table 3. Iteration results of the dual-band antenna

Iteration   Return loss at 1.8 GHz (dB)   Return loss at 2.1 GHz (dB)
1st         -11.16                        -8.41
2nd         -16                           -12.46
3rd         -19.07                        -19.49
4th         -20.17                        -21.71

Figure 3. Return loss of the dual-band antenna


Fig. 4 shows the simulated frequency response of the novel dual-band antenna. For a return loss better than 10 dB, it has a bandwidth of about 159 MHz (1.726-1.885 GHz) at 1.8 GHz and about 164 MHz (2.019-2.183 GHz) at 2.1 GHz.
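The quoted bandwidths follow directly from the -10 dB crossings of the simulated return-loss curve. Below is a small sketch of that extraction on synthetic data; the two-notch curve used here is only an illustrative stand-in for the simulated S11 of the paper.

```python
import numpy as np

# Synthetic return-loss curve (dB) standing in for the simulated S11 of the antenna.
f = np.linspace(1.7e9, 2.7e9, 2001)
s11_db = -(20 / (1 + ((f - 1.8e9) / 80e6) ** 2) + 21 / (1 + ((f - 2.1e9) / 82e6) ** 2))

def impedance_bands(freq, rl_db, threshold=-10.0):
    """Contiguous frequency ranges where the return loss is below the threshold."""
    below = rl_db < threshold
    bands, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = freq[i]
        elif not b and start is not None:
            bands.append((start, freq[i - 1]))
            start = None
    if start is not None:
        bands.append((start, freq[-1]))
    return bands

for lo, hi in impedance_bands(f, s11_db):
    print(f"{lo/1e9:.3f}-{hi/1e9:.3f} GHz, bandwidth {(hi - lo)/1e6:.0f} MHz")
```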


IV. REALIZATION AND RADIATION PATTERN RESULTS
Figure 6 shows the fabricated dual-band antenna with notched characteristic and CPW feed. The antenna produces two radiation patterns, shown in Figure 7 and Figure 8. Figure 7 shows the radiation pattern at 1.8 GHz, with a Half-Power Beam Width (HPBW) of 100°; Figure 8 shows the radiation pattern at 2.1 GHz, also with an HPBW of 100°.


Figure 4. Bandwidth result

The effect of the notch pair on the patch can be examined through simulation of the current distribution, shown in Figure 5. Adding a pair of notches redirects the current on the patch, so that a lower resonant frequency fr1 is raised below the original resonant frequency fr2. The appearance of the lower resonant frequency fr1 is caused by the current distribution of the triangle becoming asymmetrical: the cut notches change the resonant current paths on the antenna. The area around the notch carries the maximum current distribution, which generates the lower resonant frequency fr1. Then, using the feed position obtained from the characterization tests, both resonant frequencies can be obtained with good impedance matching.

Figure 6 Realization of Dual Band Antenna

Figure 7 Radiation Pattern Frequency 1.8GHz

Figure 5 Current Distribution

Figure 8 Radiation Pattern Frequency 2.1GHz


V. CONCLUSION

A novel configuration of a dual-band notched patch microstrip antenna with CPW feed has been studied, simulated and fabricated. The simulation results show that at 1.8 GHz the return loss is -20.17 dB and the VSWR is 1.218, with an impedance bandwidth of about 159 MHz, while at 2.1 GHz the return loss is -21.71 dB and the VSWR is 1.179, with an impedance bandwidth of about 164 MHz. The measured HPBW is 100° at both of the design frequencies.
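The VSWR values quoted above follow from the simulated return loss through the standard relation between return loss, reflection-coefficient magnitude and VSWR; the short sketch below reproduces them (the conversion itself is standard, only the helper name is ours).

```python
def vswr_from_return_loss(rl_db):
    """VSWR from return loss in dB: |Gamma| = 10**(-RL/20), VSWR = (1+|Gamma|)/(1-|Gamma|)."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)

for f_ghz, rl in [(1.8, 20.17), (2.1, 21.71)]:
    print(f"{f_ghz} GHz: return loss {rl} dB -> VSWR {vswr_from_return_loss(rl):.3f}")
# -> ~1.217 and ~1.179, consistent with the VSWR values quoted above (up to rounding).
```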

REFERENCES
[1] Amit A. Deshmukh and K. P. Ray, "Resonant Length Formulations for Dual Band Slot Cut Equilateral Triangular Microstrip Antennas", Wireless Engineering and Technology, vol. 1, 2010, pp. 55-63.
[2] D. Zhou, S. Gao, F. Zhu, R. A. Abd-Alhameed, and J. D. Xu, "A Simple and Compact Planar Ultra Wide-Band Antenna with Single or Dual Band-Notched Characteristics", Progress In Electromagnetics Research, vol. 123, 2012, pp. 47-65.
[3] O. Tze-Meng, T. K. Geok and A. W. Reza, "A Dual-Band Omni-Directional Microstrip Antenna", Progress In Electromagnetics Research, PIER vol. 106, 2010, pp. 363-376.
[4] Y.-C. Lee and J.-S. Sun, "Compact Printed Slot Antennas for Wireless Dual- and Multi-Band Operations", Progress In Electromagnetics Research, PIER 88, 2008, pp. 289-305.
[5] A. Danideh and A. A. Lotfi Neyestanak, "CPW Fed Double T-Shaped Array Antenna with Suppressed Mutual Coupling", Int. J. Communications, Network and System Sciences, vol. 3, 2010, pp. 190-195.
[6] F. Croq and D. M. Pozar, "Multi-frequency Operation of Microstrip Antennas Using Aperture Coupled Parallel Resonators", IEEE Trans. Antennas and Propagation, vol. 40, no. 11, pp. 1367-1374, November 1992.
[7] X. H. Yang and L. Shafai, "Multifrequency Operation Technique for Aperture Coupled Microstrip Antennas", AP-S Symp., pp. 1198-1201, 1994.
[8] Min Sze Yap, Lenna Ng and Sheel Aditya, "A Triple Band Antenna for GSM and GPS Application", IEEE ICICS-PCM, vol. 2, pp. 1119-1123, December 2003.
[9] Sho Yuminaga and Yoshihide Yamada, "A Triple-layer Patch Antenna Capable of Triple-frequency Operation", IEEE Antennas and Propagation Society, AP-S International Symposium (Digest), vol. 4, pp. 138-141, June 2003.
[10] R. Garg, P. Bhartia, I. Bahl and A. Ittipiboon, "Microstrip Antenna Design Handbook", Artech House, Norwood, 2001.
[11] R. N. Simons, Coplanar Waveguide Circuits, Components, and Systems, John Wiley & Sons, New York, NY, USA, 2001.


Edge-Component Order Connectivity Issue in Designing MIMO Antennas

Antonius Suhartomo
Electrical Engineering Study Program, Engineering Faculty, President University
Jl. Ki Hajar Dewantoro, Kota Jababeka, Cikarang Baru, Bekasi 17550, Indonesia

[email protected]

Abstract—A vulnerability parameter measures the resistance of a network to disruption of operation after the failure of certain stations or communication links. In this paper we consider that the nodes are reliable and the links can fail. The traditional parameter used as a measure of vulnerability of a network modeled by a graph with perfect nodes and edges that may fail is the edge connectivity λ. For a network modeled by the complete bipartite graph K_{p,q}, λ(K_{p,q}) = p, where 1 ≤ p ≤ q. In this case, failure of the network simply means that the surviving subgraph becomes disconnected upon the failure of individual edges. If, instead, failure of the network is defined to mean that the surviving subgraph has no component of order greater than or equal to some preassigned number k, then the associated vulnerability parameter, the k-component order edge connectivity λc^(k), is the minimum number of edges required to fail so that the surviving subgraph is in a failure state. For measuring a network arising in the design of MIMO antennas, we determine the value of λc^(k)(K_{p,q}) for arbitrary 1 ≤ p ≤ q and 3 ≤ k ≤ p + q. As it happens, the situation is relatively simple when p ≤ ⌊(p+q)/(k−1)⌋ and more involved when p > ⌊(p+q)/(k−1)⌋, where p and q represent the numbers of MIMO antennas.

Keywords—complete bipartite graph, edge connectivity, k-component order edge-connectivity, k-component order edge-failure state, k-component order edge-failure set, multiple input and multiple output (MIMO).

I. THE MODEL

In radio, multiple-input and multiple-output (MIMO) technology has attracted attention in wireless communications because it offers significant increases in data throughput and link range without increasing transmit power. The connection between distant transmitter and receiver antennas can be represented by a graph G(V,E), and more specifically in MIMO antenna design by a complete bipartite graph K_{p,q}, in which the antennas are represented by the nodes of the graph and the links connecting transmitter and receiver antennas are represented by the edges [4]. One of the key issues in designing for performance is vulnerability. MIMO is the use of multiple antennas at both the transmitter and the receiver to improve communication performance [6]. The physical configuration is shown in Figure 1.1, and it can be represented by the complete bipartite graph shown in Figure 1.2.

Figure 1.1 MIMO channel model

Figure 1.2 Complete bipartite graph K_{p,q}

The MIMO links between transmitters and receivers at distant locations are thus represented by a complete bipartite graph. In the vulnerability measure it is assumed that nodes are perfectly reliable but edges may fail. As in the traditional vulnerability measure, when a set F of edges fails we refer to F as an edge-failure set and to the surviving subgraph G – F as an edge-failure state if G – F is disconnected.

Definition 1.1 [1,2,3]: The edge-connectivity of G, denoted by λ(G) or simply λ, is defined to be

λ(G) = min{ |F| : F ⊆ E, F is an edge-failure set }.

For example, consider the complete bipartite graph K_{p,q} with 1 ≤ p ≤ q. We will refer to the two maximal sets of independent nodes as the parts of the K_{p,q}. It is easily seen that λ(K_{p,q}) = p, i.e., the order of the smaller part. See Figure 1.3.
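This fact is easy to confirm numerically on small instances. The following sketch is our illustration only (it assumes the Python networkx package is available and is not part of the paper):

# Sketch: check that the edge connectivity of K_{p,q} equals the order of the smaller part.
# Assumes the networkx package; illustration only.
import networkx as nx

for p, q in [(1, 4), (2, 3), (3, 5), (4, 4)]:
    G = nx.complete_bipartite_graph(p, q)   # nodes 0..p-1 form one part, p..p+q-1 the other
    lam = nx.edge_connectivity(G)           # minimum number of edges whose removal disconnects G
    print(f"K_{{{p},{q}}}: edge connectivity = {lam}, smaller part = {min(p, q)}")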


Figure 1.3 A bipartite graph K_{p,q}

Then, in the new model, when a set F of edges fails we refer to F as a k-component order edge-failure set and to the surviving subgraph G – F as a k-component order edge-failure state if G – F contains no component of order at least k, where k is a predetermined threshold value.

Definition 1.2 [1,2,3]: Let 2 ≤ k ≤ n be a predetermined threshold value. The k-component order edge-connectivity of G, denoted by λc^(k)(G) or simply λc^(k), is defined by

λc^(k)(G) = min{ |F| : F ⊆ E, F is a k-component order edge-failure set }.

We refer to the set of edges F as a minimum k-component order edge-failure set and to the resulting graph G – F as a maximum k-component order edge-failure state. Note that, for any graph G, λc^(2)(G) = e(G), since every 2-component order edge-failure state must consist of isolated nodes and is therefore edgeless. Thus, for the minimum link such that transmitter and receiver antennas remain connected, we will assume that the threshold value k is at least 3 [5].

Formulas for λc^(k)(G) have been found for specific classes of graphs [1,2,5]. For example, λc^(k)(P_n) = ⌊(n−1)/(k−1)⌋, where P_n is the path of order n, and λc^(k)(K_{1,n−1}) = n − k + 1. An algorithm for finding λc^(k) of an arbitrary tree can be found in [5]. No formula or algorithm for the computation of λc^(k)(G) of an arbitrary graph G has yet been found. In this work we consider complete bipartite graphs K_{p,q}. Our modus operandi is to find a maximum k-component order edge-failure state G − F of K_{p,q}; subtracting the number of edges of such a graph from pq then yields λc^(k)(K_{p,q}) = pq − e(G − F).
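Since no general formula or algorithm is known, the definition itself can at least be checked by brute force on very small graphs. The sketch below is our illustration only (it assumes networkx and is not the authors' algorithm): it enumerates edge subsets of a small graph and returns the smallest failure set whose removal leaves no component of order at least k.

# Sketch: brute-force k-component order edge connectivity of a small graph.
# Feasible only for a handful of edges; illustration, not the authors' method.
from itertools import combinations
import networkx as nx

def k_comp_order_edge_connectivity(G, k):
    edges = list(G.edges())
    for r in range(len(edges) + 1):                       # try failure sets of increasing size
        for F in combinations(edges, r):
            H = G.copy()
            H.remove_edges_from(F)
            if all(len(c) < k for c in nx.connected_components(H)):
                return r                                  # smallest |F| giving a failure state
    return len(edges)

G = nx.complete_bipartite_graph(2, 4)                     # K_{2,4}, 8 edges
for k in (3, 4, 5):
    print(k, k_comp_order_edge_connectivity(G, k))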

II. PRELIMINARY RESULTS

We now consider the MIMO antennas as complete bipartite graphs K_{p,q} with 1 ≤ p ≤ q. It is clear that if F ⊆ E is a minimum k-component order edge-failure set for K_{p,q}, then all components of the maximum k-component order edge-failure state H = K_{p,q} − F are complete bipartite graphs or isolated nodes. If K_{a,b} is such a component, the nodes in the part of order a come from the part of order p of the K_{p,q} and the nodes in the part of order b come from the part of order q of the K_{p,q}. Also, it is possible that for the component K_{a,b}, a > b. Finally, we use the notation K_{a,0} or K_{0,b} to denote a or b isolated nodes from the appropriate part of the K_{p,q}. We first establish some lemmas, from which we find the possible forms of a maximum k-component order edge-failure state.

Lemma 2.1: Suppose 1 ≤ p ≤ q and 4 ≤ k ≤ p + q. There exists a maximum k-component order edge-failure state H of K_{p,q} such that there is at most one nontrivial component of H with fewer than k – 1 nodes.

Figure 1.4 A maximum 4-component order edge-failure state for K_{3,5}

Maximum k-component order edge-failure states need not be unique. In fact, there exists another, non-isomorphic maximum 4-component order edge-failure state for K_{3,5}.

Proof: Let H be a maximum k-component order edge-failure state and suppose that H has two nontrivial components C1 = K_{a,b} and C2 = K_{c,d} of order less than k – 1. Assume 2 ≤ a + b ≤ c + d ≤ k − 2. If a ≤ c, replace the components C1 and C2 with C1' = K_{a,b−1} and C2' = K_{c,d+1}, respectively, obtained by moving one node from the b part of C1 to the d part of C2; this can always be done since C1 is nontrivial and thus b ≥ 1. Note that n(K_{c,d+1}) ≤ k − 1. Let H' be the resulting k-component order edge-failure state. Then e(H') − e(H) = c − a ≥ 0. If a > c, then it follows that b < d. In this case replace the components C1 and C2 with C1' = K_{a−1,b} and C2' = K_{c+1,d}, respectively, and let H' be the resulting k-component order edge-failure state. Then e(H') − e(H) = d − b > 0. If the number of edges increases, this contradicts the assumption that H was maximum. If the number of edges remains the same, replace H with H' and repeat the process while H has two nontrivial components of order less than k – 1. ■

From Lemma 2.1 we may assume that a maximum k-component order edge-failure state H has at most one nontrivial component of order less than k – 1.

Lemma 2.2: If K_{p1, k−1−p1} and K_{p2, k−1−p2} are components in a maximum k-component order edge-failure state of K_{p,q}, then |p2 − p1| ≤ 1.

Proof: Suppose, without loss of generality, that p2 − p1 ≥ 2. Consider the k-component order edge-failure state obtained by replacing K_{p1, k−1−p1} and K_{p2, k−1−p2} with K_{p1+1, k−2−p1} and K_{p2−1, k−p2}, respectively. The difference in the edge count of the latter minus the former is 2(p2 − p1 − 1) ≥ 2, which contradicts the hypothesis that the original state was a maximum k-component order edge-failure state. Thus the result follows. ■

Lemma 2.2 implies that each component of order k – 1 of a maximum k-component order edge-failure state is of the form K_{β, k−1−β} or K_{β+1, k−2−β}.

The next result is easily established and is thus presented without proof.

Lemma 2.3: If a maximum k-component order edge-failure state contains an isolated node, then all nontrivial components have order k – 1. Moreover, all isolated nodes come from the same part of the K_{p,q}.

As a consequence of the previous lemmas, the components of a maximum k-component order edge-failure state must be of one of the following three types.
Type 1: All components are of order k – 1, each either of the form K_{β, k−1−β} or K_{β+1, k−2−β}.
Type 2: All components except one are of order k – 1, each either of the form K_{β, k−1−β} or K_{β+1, k−2−β}; the remaining component is complete bipartite of order at most k – 2.
Type 3: All nontrivial components are of order k – 1, each either of the form K_{β, k−1−β} or K_{β+1, k−2−β}, along with at least one isolated node.

III. SPECIAL CASE

We now consider two cases for finding a maximum k-component order edge-failure state of K_{p,q}. The first case, p ≤ ⌊(p+q)/(k−1)⌋, is simpler and is covered in the next subsection. The second case, p > ⌊(p+q)/(k−1)⌋, is more extensive and is covered in the subsection that follows it [7].

A. Case 1: p ≤ ⌊(p+q)/(k−1)⌋

Let p, q, and k satisfy 1 ≤ p ≤ q and 3 ≤ k ≤ p + q. Moreover, assume that p ≤ ⌊(p+q)/(k−1)⌋.

Theorem 3.1 [7]: Suppose p, q, and k satisfy 1 ≤ p ≤ q and 3 ≤ k ≤ p + q. Let H be a maximum k-component order edge-failure state of K_{p,q}, where p ≤ ⌊(p+q)/(k−1)⌋. Then H consists of p copies of K_{1,k−2} and q – p(k – 2) isolated nodes.

Proof: Since p ≤ ⌊(p+q)/(k−1)⌋ it follows that p(k−1) ≤ p + q, or p(k−2) ≤ q. Observe that if t is the number of complete bipartite components of H, then t ≤ p. If t = p, each complete bipartite component must be of the specified form and the result follows. We show that t ≤ p − 1 cannot occur. Suppose t ≤ p − 1; then q_l, the number of nodes in the q-part not in a complete bipartite component, i.e. isolated nodes, satisfies

q_l ≥ q − t(k−2) ≥ q − (p−1)(k−2) = q − p(k−2) + (k−2) ≥ k − 2.

Thus we obtain the contradictory result that there are p nontrivial components. ■

Thus we see that when p ≤ ⌊(p+q)/(k−1)⌋ the maximum k-component order edge-failure state has either all components of order k – 1, or all nontrivial components of order k – 1 together with at least one isolated node. The former occurs when q = p(k − 2).

We can now give the formula for λc^(k)(K_{p,q}) in the case that p ≤ ⌊(p+q)/(k−1)⌋.

Corollary 3.1 [7]: Suppose p, q, and k satisfy 1 ≤ p ≤ q and 3 ≤ k ≤ p + q. If p ≤ ⌊(p+q)/(k−1)⌋, then λc^(k)(K_{p,q}) = p(q − k + 2).
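A small worked instance of Corollary 3.1 (our own example, added for illustration): take p = 2, q = 6 and k = 4, so that Case 1 applies.

\[
\left\lfloor \frac{p+q}{k-1}\right\rfloor=\left\lfloor \frac{8}{3}\right\rfloor = 2 \ge p,\qquad
e(H) = p(k-2) = 4,\qquad
\lambda_c^{(4)}(K_{2,6}) = pq - e(H) = 12-4 = 8 = p(q-k+2).
\]

Here the maximum failure state H consists of 2 copies of K_{1,2} plus 6 − 2·2 = 2 isolated nodes on the q side, exactly as Theorem 3.1 prescribes.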

B. Case 2: p > ⌊(p+q)/(k−1)⌋

It now remains to consider the case p > ⌊(p+q)/(k−1)⌋. Let p, q, and k satisfy 1 ≤ p ≤ q and 3 ≤ k ≤ p + q, and let H be a maximum k-component order edge-failure state of K_{p,q}. If H is of Type 1, then all components are of order k – 1 and there are exactly ⌊(p+q)/(k−1)⌋ such components. If H is of Type 2, then there is exactly one component of order less than k – 1; thus there must be ⌊(p+q)/(k−1)⌋ components of order k – 1. Finally, if H is of Type 3, then all nontrivial components are of order k – 1; since there is at least 1 but at most k – 3 isolated nodes, once again there are ⌊(p+q)/(k−1)⌋ components of order k – 1. Furthermore, each component of order k – 1 is of the form K_{β, k−1−β} or K_{β+1, k−2−β}.

Let p, q, and k satisfy 1 ≤ p ≤ q and 3 ≤ k ≤ p + q. Set t = ⌊(p+q)/(k−1)⌋ and let s satisfy 1 ≤ s ≤ t. Then there exists a maximum k-component order edge-failure state of K_{p,q} having the following form:
• s copies of K_{β, k−1−β},
• t – s copies of K_{β+1, k−2−β},
• possibly one copy of K_{p_l, q_l}, where p_l + q_l ≤ k − 2. Note that if p_l = 0 (q_l = 0), then q_l ≤ k − 3 (p_l ≤ k − 3) and K_{p_l, q_l} consists of isolated nodes.

Definition 3.1 [7]: For given 1 ≤ p ≤ q and 3 ≤ k ≤ p + q, the 4-tuple (β, s, p_l, q_l) is realizable provided the following conditions hold:
i. 1 ≤ β ≤ k − 2;
ii. 1 ≤ s ≤ t = ⌊(p+q)/(k−1)⌋;
iii. p_l, q_l ≥ 0;
iv. p = sβ + (t − s)(β + 1) + p_l and q = s(k − 1 − β) + (t − s)(k − 2 − β) + q_l.

Conditions i through iv ensure that p_l + q_l ≤ k − 2. Hence the graph sK_{β, k−1−β} ∪ (t − s)K_{β+1, k−2−β} ∪ K_{p_l, q_l} is a k-component order edge-failure state of K_{p,q}. We will also use the 4-tuple notation to denote this associated failure state and e((β, s, p_l, q_l)) to denote its number of edges.

Definition 3.2 [7]: A realizable 4-tuple (β, s, p_l, q_l) is potentially optimal if it also satisfies:
v. if either p_l = 0 or q_l = 0, then p_l + q_l ≤ k − 3;
vi. p_l ≤ β; if s = t then q_l ≤ k − 1 − β, otherwise q_l ≤ k − 2 − β.

Definition 3.3 [7]: A potentially optimal 4-tuple (β, s, p_l, q_l) is optimal if the associated k-component order edge-failure state is a maximum k-component order edge-failure state of K_{p,q}.

Given 1 ≤ p ≤ q and 3 ≤ k ≤ p + q, we want to find an optimal 4-tuple (β, s, p_l, q_l). It is evident that, if (β, s, p_l, q_l) is optimal, then λc^(k)(K_{p,q}) = e(K_{p,q}) − e((β, s, p_l, q_l)). Thus we obtain the following result:

Theorem 3.2 [7]: Suppose p, q, and k satisfy 1 ≤ p ≤ q and 3 ≤ k ≤ p + q. Further suppose that p > ⌊(p+q)/(k−1)⌋ = t. If (β, s, p_l, q_l) is an optimal 4-tuple, then

λc^(k)(K_{p,q}) = pq − ( s·β·(k − 1 − β) + (t − s)(β + 1)(k − 2 − β) + p_l·q_l ).

Our next lemma concerns realizability. We introduce an ordering on the realizable 4-tuples to enable a concise statement of the lemma.

Definition 3.4 [7]: Suppose that (β, s, p_l, q_l) and (β′, s′, p_l′, q_l′) are two realizable 4-tuples for fixed p, q, and k, where 1 ≤ p ≤ q and 3 ≤ k ≤ p + q. We define (β, s, p_l, q_l) < (β′, s′, p_l′, q_l′) if and only if
i. β′ = β and s′ < s; or
ii. β < β′.
It is easy to show that < is a total ordering.

Remark: It should be noted that (β, s, p_l, q_l) < (β′, s′, p_l′, q_l′) does not imply that e((β, s, p_l, q_l)) < e((β′, s′, p_l′, q_l′)). Thus the optimality of a 4-tuple does not pertain to the ordering < but rather to the size of the associated failure state.

Lemma 3.1 [7]: Let p, q, and k satisfy 1 ≤ p ≤ q and 3 ≤ k ≤ p + q, and suppose (β, s, p_l, q_l) is a realizable 4-tuple.
a) If p_l = 0, then no larger realizable 4-tuple exists.
b) If q_l = 0, then no smaller realizable 4-tuple exists.

Proof: If p_l = 0, then sβ + (t − s)(β + 1) = p or, equivalently, tβ + t − s = p. Let (β′, s′, p_l′, q_l′) be realizable with (β, s, p_l, q_l) ≤ (β′, s′, p_l′, q_l′). If β′ = β, suppose s′ < s. Then tβ + t − s′ > tβ + t − s = p, hence p_l′ = p − (s′β + (t − s′)(β + 1)) = p − (tβ + t − s′) < 0, which contradicts the hypothesis that (β′, s′, p_l′, q_l′) is realizable. Thus s′ = s, so (β, s, p_l, q_l) and (β′, s′, p_l′, q_l′) are the same 4-tuple. If β < β′, then (tβ′ + t − s′) − (tβ + t − s) = t(β′ − β) + s − s′ ≥ t + s − s′ ≥ t + 1 − t > 0. Thus tβ′ + t − s′ > tβ + t − s = p, hence p_l′ = p − (tβ′ + t − s′) < 0, which is a contradiction. Therefore β < β′ cannot occur. The proof for q_l = 0 is similar. ■
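To make Theorem 3.2 concrete, the sketch below (our illustration, not the authors' implementation) enumerates 4-tuples satisfying conditions i-iv of Definition 3.1 for given p, q, k and keeps the one whose associated failure state has the most edges.

# Sketch: evaluate lambda_c^(k)(K_{p,q}) via Theorem 3.2 by enumerating realizable 4-tuples.
# Illustration only; conditions follow Definition 3.1.
def k_comp_order_edge_connectivity_bipartite(p, q, k):
    t = (p + q) // (k - 1)
    best_edges = -1
    for beta in range(1, k - 1):                 # i. 1 <= beta <= k-2
        for s in range(1, t + 1):                # ii. 1 <= s <= t
            p_l = p - (s * beta + (t - s) * (beta + 1))
            q_l = q - (s * (k - 1 - beta) + (t - s) * (k - 2 - beta))
            if p_l < 0 or q_l < 0:               # iii. (iv. holds by construction of p_l, q_l)
                continue
            edges = (s * beta * (k - 1 - beta)
                     + (t - s) * (beta + 1) * (k - 2 - beta)
                     + p_l * q_l)                # edges of the associated failure state
            best_edges = max(best_edges, edges)
    return None if best_edges < 0 else p * q - best_edges

print(k_comp_order_edge_connectivity_bipartite(3, 5, 4))   # K_{3,5}, k = 4: prints 10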

IV. CONCLUSION

The k-component order edge connectivity of a graph G is defined as the minimum cardinality of a set of edges F such that the subgraph G – F contains no component of order at least k. We refer to any subgraph of the form G – F containing no component of order at least k as a k-component order edge-failure state of G. In this work we studied this parameter for complete bipartite graphs K_{p,q} as a measure of vulnerability in designing MIMO antennas. Our method for computing the parameter is to find a k-component order edge-failure state of K_{p,q} of maximum size and then subtract its number of edges from pq, where p represents the Tx antennas and q the Rx antennas, or vice versa. Under the assumptions 1 ≤ p ≤ q and 3 ≤ k ≤ p + q, the theorems show that the MIMO antenna design fails when the connections between Tx and Rx reach a maximum component failure state of at most K_{β, k−1−β}.

For clarity of our discussion of the MIMO antenna application represented by a complete bipartite graph, we consider the 3-component edge case, which here means a one-to-one correspondence: each transmitter has only one connection to a receiver, and vice versa. We can be certain that if p ≤ ⌊(p+q)/(k−1)⌋, there will be q – p isolated antennas on the q side, and the number of edges in the failure set is as stated in Corollary 3.1, as seen in Table 4.1 below. If p > ⌊(p+q)/(k−1)⌋, there will be p – q isolated antennas on the p side. For the one-to-one connection, both cases simplify to:

p ≤ ⌊(p+q)/(k−1)⌋:  2p ≤ p + q → p ≤ q,
p > ⌊(p+q)/(k−1)⌋:  2p > p + q → p > q.

Table 4.1 One-to-one connection (k = 3) as the minimum requirement for the transmitter and receiver to be able to communicate.

k    n    p    q    λc^(k)(K_{p,q})
Case 1: p ≤ ⌊(p+q)/(k−1)⌋
3    12   6    6    30
3    12   5    7    30 (2 isolated in q)
3    12   4    8    28 (4 isolated in q)
3    12   3    9    24 (6 isolated in q)
Case 2: p > ⌊(p+q)/(k−1)⌋
3    12   7    5    30 (2 isolated in p)
3    12   8    4    28 (4 isolated in p)
3    12   9    3    24 (6 isolated in p)

REFERENCES
[1] F. Boesch, D. Gross, L. Kazmierczak, A. Suhartomo, and C. Suffel, "Component order edge connectivity: an introduction", Congressus Numerantium 178 (2006), 7-14.
[2] F. Boesch, D. Gross, L. Kazmierczak, C. Suffel, and A. Suhartomo, "Bounds for the component order edge connectivity", Congressus Numerantium 185 (2007), 159-171.
[3] F. Boesch, D. Gross, L. Kazmierczak, J. T. Saccoman, C. Suffel, and A. Suhartomo, "A generalization of an edge-connectivity theorem of Chartrand", Networks, 2009, DOI 10.1002/net.
[4] G. Chartrand and L. Lesniak, "Graphs & Digraphs", Chapman & Hall/CRC, Boca Raton, 2005.
[5] A. Suhartomo, "Component Order Edge Connectivity: A Vulnerability Parameter for Communication Networks", Doctoral Thesis, Stevens Institute of Technology, Hoboken, NJ, May 2007.
[6] Wikipedia, the free encyclopedia, "MIMO", retrieved April 6, 2012.
[7] D. Gross, J. T. Saccoman, L. W. Kazmierczak, C. Suffel, and A. Suhartomo, "On component order edge connectivity of a complete bipartite graph", accepted in 2008, to appear in Ars Combinatoria.


Robust Image Transmission Using Co-operative LDPC Decoding Over AWGN Channels

M. Agus Zainuddin
Study Program of Multimedia Broadcasting, Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Surabaya, East Java, Indonesia
[email protected]

Yoedy Moegiharto
Study Program of Telecommunication, Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Surabaya, East Java, Indonesia
[email protected]

Abstract—In this paper we propose a co-operative decoding algorithm to transmit compressed image data over AWGN (additive white Gaussian noise) channels. This method has good error-correction performance even when the channel condition is at Eb/N0 = 0 dB. With developments in multi-core processor architectures and multiple antennas for wireless communications, this technique is feasible for the next wireless generation. In the sum-product algorithm of an LDPC code, if a bit is known to be correct, increasing its soft value can help correct errors via the positive messages sent from this bit. By exploiting signals from multiple decoders we can improve the decoding process. This method not only improves BER performance, but also makes the decoding process faster.

Keywords—low density parity check, iterative decoding, cooperative LDPC decoding, robust image transmission.

I. INTRODUCTION

Over the last decades, there has been an increasing demand for high data rate communications. With the integration of wireless technologies and multimedia services, transmitting high-quality images and video has become one of the main objectives for next-generation (4G) mobile network systems. Due to the enormous size of raw multimedia signals, it is necessary to represent them in compressed form in order to use resources, i.e. storage memory and channel bandwidth, more efficiently. SPIHT and JPEG 2000 are two examples of image compression algorithms which can achieve high compression ratios. However, the compressed bit stream suffers from channel disturbance, even for relatively small errors. This dictates the use of a channel coding step to protect the transmitted information before sending multimedia signals through the channel. LDPC and Turbo codes have shown remarkable results, with performance approaching the limit established by Shannon. In this paper we propose a method to improve the performance of LDPC decoding over parallel noisy channels. Parallel channels are easily encountered in wireless channels employing multiple antennas or on the internet with multiple routing paths. In particular, we make use of channel diversity and a feedback stage for joint decoding to obtain better results compared to single LDPC decoding.


The organization of the paper is as follows: Section 2 briefly introduces the SPIHT algorithm and co-operative LDPC decoding, Section 3 discusses the simulation and performance evaluation, and Section 4 concludes.

II. IMAGE COMPRESSION AND CO-OPERATIVE LDPC DECODING

II.1. SPIHT Image Compression

A coding algorithm developed for discrete wavelet transform (DWT) images is the set partitioning in hierarchical trees (SPIHT) algorithm, developed by Said and Pearlman in 1996 [1]. The basic SPIHT algorithm, as presented by Said and Pearlman, makes intensive use of dynamic data structures to exploit self-similarities. The parent-child relations of the wavelet coefficients are shown in Figure 1. In order to exploit the self-similarities during the coding process, oriented trees of four offspring are taken for the representation of a wavelet-transformed image. Each node of a tree represents a coefficient of the transformed image; the levels of the trees consist of coefficients at the same scale, and the trees are rooted at the highest scale of the representation. The SPIHT algorithm assumes that each coefficient a_ij is a good predictor of the coefficients represented by the sub-tree rooted at a_ij. The overall procedure is controlled by an attribute which gives information on the significance of the coefficients. More formally, a coefficient is insignificant with respect to a threshold t if its magnitude is smaller than 2^t; otherwise it is called significant with respect to the threshold t. In the basic SPIHT algorithm, the coefficients of a wavelet-transformed image are classified into three sets:
- LIP (list of insignificant pixels), which contains the coordinates of those coefficients that are insignificant with respect to the current threshold t;


Figure 1: Illustration of the parent-child relation for the coefficients of SPIHT.

- LSP (list of significant pixels), which contains the coordinates of those coefficients which are significant with respect to t;
- LIS (list of insignificant sets), which contains the coordinates of the roots of insignificant sub-trees.

During the compression procedure, the sets of coefficients in LIS are refined and if coefficients become significant they are moved from LIP to LSP. The bit stream can thus be progressively organized.
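The sorting passes that drive these set operations are built on a simple significance test. The sketch below is our illustration only (NumPy assumed; the array and names are hypothetical, not the authors' code):

# Sketch: SPIHT-style significance test at threshold level t.
# A coefficient (or set of coefficients) is significant when its magnitude reaches 2**t.
import numpy as np

def is_significant(coeffs, t):
    """Return True if the maximum magnitude in 'coeffs' is at least 2**t."""
    return np.max(np.abs(coeffs)) >= 2 ** t

subtree = np.array([3.2, -7.9, 0.5, 12.4])   # hypothetical wavelet coefficients of one sub-tree
for t in range(4, -1, -1):                    # passes from coarse to fine threshold
    print(t, is_significant(subtree, t))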

The performance of the SPIHT algorithm compared against JPEG 8x8 and JPEG 16x16 is shown in Figure 2; SPIHT achieves a higher PSNR than both JPEG variants.

II.2. Cooperative LDPC Decoding

One decoding method for LDPC codes is the sum-product algorithm. In the sum-product algorithm, the information exchanged between the bit nodes and the check nodes in each iteration is in the form of log-likelihood ratios (LLRs). The aim of sum-product decoding is to compute the APP (a posteriori probability) for each codeword bit, which is the probability that the i-th codeword bit is a 1 conditional on the event S that all parity-check constraints are satisfied [2]. In the decoding process of an LDPC code, if a bit in the received codeword is known to be correct, increasing its soft value can help the decoding process remove errors [3]. In our method, we use two LDPC decoders that work together in each iteration. The system design resembles a SIMO (single-input multiple-output) system in wireless transmission. A bit is said to be correct if the hard decisions of the two decoders have the same value: since the transmitter sends the same data and the hard decisions at both decoders agree, the received bit is taken to be correct. By increasing its LLR, the error-correction process for the other bits connected to it in the Tanner graph is helped. The LLR improvement process is illustrated in Figure 2.

Figure 2. The LLR improvement process for the i-th codeword bit at decoder N. Here y_{i,N} denotes the LLR message from the channel for codeword bit i at decoder N; r_{k,i,N} denotes the LLR message from check node k to bit node i at decoder N; q_{i,j,N} denotes the LLR message from bit node i to check node j at decoder N; and "Iter x a_i" means that the LLR improvement is applied in each iteration until a certain condition is reached.

The LLR improvement follows the rule

a_i = +t if y_{i,1} ≥ 0 and y_{i,2} ≥ 0;  a_i = −t if y_{i,1} < 0 and y_{i,2} < 0;  a_i = 0 otherwise.    (1)

The APP LLR for X_i given y_i and the event S_i, taking a_i into account, can be expressed as

L(Q_{i,N}) = L(X_{i,N}) + a_i + Σ_{j∈A_i} L(r_{j,i,N})    (2)

where

L(Q_{i,N}) = log_e [ P(X_{i,N} = +1 | y_N, S_{i,N}) / P(X_{i,N} = −1 | y_N, S_{i,N}) ]    (3)

L(X_{i,N}) = log_e [ P(X_{i,N} = +1 | y_{i,N}) / P(X_{i,N} = −1 | y_{i,N}) ]    (4)

L(X_{i,N}) = 2 y_{i,N} / σ² = 4 y_{i,N} Eb/N0    (5)

L(r_{j,i,N}) = log_e [ (1 + Π_{i'∈B_j, i'≠i} tanh(L(q_{i',j,N})/2)) / (1 − Π_{i'∈B_j, i'≠i} tanh(L(q_{i',j,N})/2)) ]    (6)

or

L(r_{j,i,N}) = 2 tanh⁻¹ ( Π_{i'∈B_j, i'≠i} tanh(L(q_{i',j,N})/2) )    (7)

Here the notation B_j represents the set of column locations of the bits in the j-th parity-check equation, while A_i is the set of row locations of the parity-check equations which check on codeword bit i [4].

The algorithm of co-operative LDPC decoding:

1. Initialization. Each codeword bit i at each decoder N sends an LLR message to every check node connected to it. The LLR messages come from the channel and from the LLR improvement:

L(q_{i,j,N}) = L(X_{i,N}) + a_i = 4 y_{i,N} Eb/N0 + a_i    (8)

2. Check-to-bit. Each check node j sends an LLR message to every bit node i connected to it:

L(r_{j,i,N}) = log_e [ (1 + Π_{i'∈B_j, i'≠i} tanh(L(q_{i',j,N})/2)) / (1 − Π_{i'∈B_j, i'≠i} tanh(L(q_{i',j,N})/2)) ]    (9)

3. Codeword test. All incoming LLR messages at bit node i at decoder N are combined into a soft decision:

L(Q_{i,N}) = L(X_{i,N}) + a_i + Σ_{j∈A_i} L(r_{j,i,N})    (10)

The hard decision for each bit node i is

X̂_{i,N} = 1 if L(Q_{i,N}) ≥ 0, and X̂_{i,N} = 0 if L(Q_{i,N}) < 0.    (11)

If one of X̂_N = [X̂_{1,N}, ..., X̂_{n,N}] meets the constraint H·X̂_N^T = 0, then X̂ = X̂_N is a valid codeword and the algorithm terminates. If the maximum iteration number is exceeded while none of the X̂_N meets the condition H·X̂_N^T = 0, then X̂ = X̂_2. The estimated message m̂ can be extracted from X̂, since the codewords are systematic.

4. Bit-to-check. Bit node i sends an LLR message to check node j:

L(q_{i,j,N}) = Σ_{j'∈A_i, j'≠j} L(r_{j',i,N}) + L(X_{i,N}) + a_i    (12)

Back to step 2.
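A compact sketch of the cooperative step — the a_i boost of (1) plus the check-node update of (7) — written by us for illustration (NumPy assumed; array shapes and names are our own choices, not the authors' simulation code):

# Sketch: cooperative LLR improvement (eq. 1) and check-node update (eq. 7).
import numpy as np

def llr_improvement(y1, y2, t=0.1):
    """a_i = +t where both channel LLR inputs are non-negative, -t where both are negative, else 0."""
    a = np.zeros_like(y1)
    a[(y1 >= 0) & (y2 >= 0)] = t
    a[(y1 < 0) & (y2 < 0)] = -t
    return a

def check_node_update(q_llrs):
    """L(r_{j,i}) = 2 atanh( prod_{i' != i} tanh(L(q_{i',j})/2) ) for one check node (assumes no zero LLRs)."""
    tanh_half = np.tanh(q_llrs / 2.0)
    total = np.prod(tanh_half)
    leave_one_out = total / tanh_half            # product over all other connected bit nodes
    return 2.0 * np.arctanh(np.clip(leave_one_out, -0.999999, 0.999999))

y1 = np.array([0.8, -0.3, 1.2, -0.9])
y2 = np.array([0.5, -0.1, -0.2, -1.1])
print(llr_improvement(y1, y2))                   # [ 0.1 -0.1  0.  -0.1]
print(check_node_update(np.array([1.0, -0.5, 2.0])))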

III. SIMULATION AND PERFORMANCE EVALUATION

The simulations use the parity-check matrix for WiMAX with code rate ½, shown in Figure 3. The decoding performance of co-operative LDPC decoding is compared with the performance of each single decoder and with joint LDPC decoding (our previous method) [5]. In joint LDPC decoding we also use two LDPC decoders, with selection diversity: if H·X̂_1^T = 0 then the output of the decoder is m̂_1 (the message part of X̂_1); otherwise the output is m̂_2 (the message part of X̂_2).

Figure 3. Parity-check matrix for WiMAX with code rate ½.

To simulate the BER (bit error rate) performance, the simulation generates a bit stream of 5 million random bits. The bit stream is then encoded using the parity-check matrix of Figure 3. On the receiver side there are two single-decoder systems; the outputs of the two decoders are labelled IEEE 802.16e channel 1 and IEEE 802.16e channel 2. Using the co-operative LDPC decoding algorithm, both decoders cooperate in every iteration to eliminate the effects of noise. The maximum number of decoding iterations is 50. The BER performance comparison of co-operative decoding with the two single LDPC decoders and with selection diversity is shown in Figure 4.

Figure 4. BER performance comparison of co-operative decoding with the other schemes (bit error rate versus Eb/N0 from 0 to 1.8 dB; curves: IEEE 802.16e Channel 1, IEEE 802.16e Channel 2, Selection Diversity, Co-operative LDPC Decoding).

The design of the image transmission system is shown in Figure 5. The transmitted image is Lena, 256 x 256 pixels (p). This image is compressed with the SPIHT algorithm at 1 bit per pixel (bpp) into a bit stream (m), with a total of 65,511 transmitted bits. The bit stream is then grouped into blocks of 1,152 bits and encoded with the LDPC encoder (IEEE 802.16) into 2,304-bit codewords. Each codeword is then modulated with a BPSK modulator and transmitted through the AWGN channels to the receiver. At the receiver side, the BPSK demodulator estimates the CSI (channel state information) and sends it to the corresponding decoder. Each LDPC decoder can be viewed as a single LDPC decoder on channel 1 (y1) or channel 2 (y2). If we use a single decoder to transmit the image, we have image reconstruction p̂1 from LDPC decoder 1 and image reconstruction p̂2 from LDPC decoder 2.

The simulation results of the system are shown in Table 1, and the image reconstructions are shown in Figures 6 and 7.

Figure 5. Video broadcast transmission system design.

From Table 1 it can be seen that co-operative decoding yields fewer error bits at the receiver than the single LDPC decoders and selection diversity. The transmission system has no error bits for Eb/N0 above 1 dB with co-operative LDPC decoding.

Table 1. The number of received error bits.
Eb/N0 (dB)   Decoder 1   Decoder 2   Diversity   Cooperative
0            8,224       8,029       8,029       216
0.5          5,868       5,596       5,349       0
1            1,228       1,276       317         0
1.5          0           79          0           0

The visualization of the image transmission at Eb/N0 = 0 dB is shown in Figure 6. The image quality using co-operative LDPC decoding is much clearer than with a single LDPC decoder or selection diversity. From Figure 7, there is no visible difference between the reconstruction and the original image using co-operative LDPC decoding. In the simulation we use t = 0.1 in equation (1), based on simulations transmitting 115,200 bits with various t values, as shown in Table 2.

Table 2. The number of error bits for various t values.
t      Eb/N0 = 0 dB   0.2 dB   1 dB
0.05   4.23           0        0
0.1    0.01           0        0
0.5    13.82          2.11     0
1      107.25         76.81    5.55

By implementing this method we get not only better BER performance, but also a decoding process about two times faster than with a single decoder, as shown in Table 3.

Table 3. Required maximum iteration number.
Eb/N0 (dB)   Single Decoding   Co-operative Decoding
0            50                26.02
0.2          49.99             21.66
0.4          49.72             18.51
0.6          48.72             16.18
0.8          40.58             13.89
1            36.42             12.27

As shown in Table 3, co-operative decoding is able to decode the messages from the channels. The transmission errors that remain occur when a decoder produces a valid codeword other than the transmitted codeword, so the number of error bits is related to the minimum distance of the code. By increasing the codeword length, the minimum distance increases and the decoding errors are reduced.

IV. CONCLUSION

This paper has described a new method to improve the decoding performance of an LDPC decoder by employing channel diversity. The simulation results show that the proposed method has good BER performance and a fast decoding process, but it needs two LDPC decoders. The quality of the image reconstruction is much better than with a single LDPC decoder. For future work we will test the method in a wireless system using MIMO (multiple input multiple output) and OFDM (orthogonal frequency division multiplexing), and increase the code length.

REFERENCES
[1] A. Said and W. A. Pearlman, "A new fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. Circuits Syst. Video Tech., vol. 6, pp. 243-250, June 1996.
[2] D. J. Costello, Jr., "An Introduction to Low-Density Parity Check Codes", Department of Electrical Engineering, University of Notre Dame, August 10, 2009.
[3] L. Pu, Z. Wu, A. Bilgin, M. W. Marcellin, and B. Vasic, "Iterative Joint Source-Channel Decoding for JPEG2000," in Proceedings of the 2003 Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, California, Nov. 2003.
[4] S. J. Johnson and S. R. Weller, "Low-density parity-check codes: design and decoding", School of Electrical Engineering and Computer Science, University of Newcastle, Australia, June 20, 2003.
[5] A. Pratiarso and M. Agus Zainuddin, "Transmisi Citra Menggunakan Joint LDPC Decoding" (Image Transmission Using Joint LDPC Decoding), Seminar Nasional Aplikasi Teknologi Informasi (SNATI), Yogyakarta, Indonesia, 2009.


Figure 6. Reconstructed Lena images for the various decoding schemes at received Eb/N0 = 0 dB (IEEE 802.16 Channel 1: PSNR = 1.8787 dB; IEEE 802.16 Channel 2: PSNR = -1.9861 dB; Selection diversity: PSNR = -1.9861 dB; Co-operative decoding: PSNR = 29.0268 dB).

Figure 7. Reconstructed Lena images for the various decoding schemes at received Eb/N0 = 1 dB (IEEE 802.16 Channel 1: PSNR = 13.0571 dB; IEEE 802.16 Channel 2: PSNR = 19.7172 dB; Selection diversity: PSNR = 21.8368 dB; Co-operative decoding: PSNR = 34.968 dB).


Parameter Measurement of Acoustic Propagation in the Shallow Water Environment

Tri Budi Santoso 1), Endang Widjiati 2), Wirawan, Gamantyo Hendrantoro 3)
1) Politeknik Elektronika Negeri Surabaya, 2) Laboratorium Hidrodinamika Indonesia, 3) Jurusan Teknik Elektro ITS
[email protected]

Abstract—This paper presents a measurement report of underwater acoustic propagation parameters such as ambient noise, attenuation, time delay, multipath, and power delay profile. Measurements were carried out in a water tank with dimensions of 12 x 180 meters and 6 meters deep. Using the maximum likelihood estimation (MLE) technique, it was found that the channel has ambient noise with a Gaussian distribution, with spectral content dominant at low frequencies. The multipath channel is characterized by the power delay profile and fading. A power delay spread of 32 ms was observed when the distance between transmitter and receiver was 80 m, and 22 ms when the distance was 150 m. The fading phenomenon is characterized by fluctuation of the signal envelope attenuation from -40 dB to 10 dB.

Key words: acoustic propagation, multipath, power delay spread

I. INTRODUCTION

Indonesia is an archipelago comprising 17,508 islands, with a coastline approximately 81,000 km long, and 70% of its territory is ocean. Indonesia lies between the Pacific Ring of Fire and the Alpide belt. This gives the marine environment in Indonesia its own characteristics, with various wind speeds and directions, sea waves, and bathymetry. It is a challenge to conduct research and to develop underwater communication technology to support the development of underwater acoustic sensor networks. In implementation, an underwater acoustic communication system has to cope with very poor channel conditions, namely large delay spread, Doppler effects due to relative motion between transmitter and receiver, and limited bandwidth [1], [2]. Most research on underwater acoustic communication has been done through simulation. The cost of measurement testing is relatively high, so data processing is usually performed off-line using recordings from measurements. Some experimental measurements are performed with the addition of ambient acoustic noise, enabling 'replayed' experiments at various values of signal-to-noise ratio in the laboratory [3]. Characterization of underwater acoustic propagation with a statistical approach to the data has been presented in [4], [5], and [6]. Using the results of propagation parameter measurements made at high frequency, a statistical analysis can be carried out. Underwater acoustic channel characterization for an environment with high activity, using acoustic signals at frequencies above 20 kHz, has also been carried out [7]. Observations were made in very shallow water conditions, at a depth of 3 meters from the surface, with two different transmitter-receiver distances of 200 meters and 500 meters. Characteristics of the channel impulse response, scattering, and intensity profiles were presented for multiple operating frequencies from 20 kHz to 100 kHz.

This paper presents propagation parameter measurements of underwater acoustic signals based on measurement data obtained in a towing tank. The analysis method used is a combination of [4], [5], [6], and [7], with an approach based on the statistical properties of the measurement data. The paper is organized as follows. The concept of multipath channel propagation is outlined in Section 2, and the experimental setup is described in Section 3. Section 4 is devoted to statistical analysis of the measurement results. Section 5 summarizes the conclusions.

II. MULTIPATH CHANNEL

In general, the underwater multipath channel has the same basic characteristics as the radio channel, and it can be approached by adopting the multipath channel concept from radio. Signal propagation from transmitter to receiver can take a variety of paths, so the receiver obtains several copies of the signal with different delays and magnitudes. A signal path can be a line-of-sight (LOS) path, a reflection from the surface, or a reflection from the bottom, as shown in Figure 1.

Figure 1. Underwater multipath channel (signal source Tx, signal receiver Rx, surface and bottom reflections).

A multipath channel has a different attenuation factor and time delay for each path. This affects the amplitude and time of arrival of the signals at the receiver. If the transmitted signal is written in complex form as s(t) = Re[ s_l(t) e^{j2π f_c t} ], where s_l(t) is the low-pass equivalent signal, the bandpass signal at the receiver can be represented as

x(t) = Σ_n α_n(t) s(t − τ_n(t))    (1)

where α_n(t) and τ_n(t) are the attenuation factor and the time delay of the n-th path. The low-pass equivalent channel can be described as a time-varying channel impulse response:

c(τ; t) = Σ_n α_n(t) e^{−j2π f_c τ_n(t)} δ(τ − τ_n(t))    (2)

When c(τ; t) is modelled as a complex Gaussian process with zero mean, the envelope |c(τ; t)| at a time t has a Rayleigh distribution; this condition occurs in a propagation channel without a line-of-sight (LOS) path. When the signal propagation has a LOS path, c(τ; t) is modelled as a complex Gaussian process with nonzero mean, and the propagation channel is described by a Ricean distribution.

The multipath intensity profile, or power delay profile, gives the average power output as a function of delay:

P(τ) = (1/T) ∫_{t−T/2}^{t+T/2} |r(s, τ)|² ds    (3)

Parameters associated with the power delay profile are the mean excess delay, the RMS delay spread, and the excess delay spread. The mean excess delay is the first moment of the power delay profile:

τ̄ = Σ_k P(τ_k) τ_k / Σ_k P(τ_k)    (4)

The RMS delay spread is the square root of the second central moment of the delay profile and is defined as

σ_τ = sqrt( <τ²> − (τ̄)² ),  where  <τ²> = Σ_k P(τ_k) τ_k² / Σ_k P(τ_k)    (5)

The maximum excess delay (X dB) is the multipath delay time over which the received multipath energy remains within X dB of its maximum; in this work X = 20 dB is used.
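The three profile parameters of (3)-(5) reduce to weighted moments of the measured P(τ_k). The following sketch is ours, with a toy profile standing in for the measured data (NumPy assumed):

# Sketch: mean excess delay, RMS delay spread and 20 dB maximum excess delay
# from a discrete power delay profile P(tau_k). Toy numbers, not measured data.
import numpy as np

tau = np.array([0.0, 2.0, 5.0, 12.0, 30.0]) * 1e-3     # delays in seconds
P = np.array([1.0, 0.5, 0.2, 0.05, 0.01])               # relative (linear) power

mean_excess = np.sum(P * tau) / np.sum(P)                # eq. (4)
second_moment = np.sum(P * tau**2) / np.sum(P)
rms_spread = np.sqrt(second_moment - mean_excess**2)     # eq. (5)

threshold = P.max() * 10 ** (-20 / 10)                   # 20 dB below the peak
kept = tau[P >= threshold]
max_excess_20dB = kept.max() - kept.min()

print(mean_excess, rms_spread, max_excess_20dB)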

III. EXPERIMENTAL SETUP

The parameter measurements of underwater acoustic propagation in this study are based on the methods in papers [3], [4] and [5]. Initial experiments were performed in a laboratory-scale towing tank: a series of measurements was carried out in a concrete towing tank with dimensions of (3 x 12) m and a depth of 6 m. The second measurement run took place in the bigger towing tank, with dimensions of (12 x 200) m. These measurements were carried out with uniform medium conditions, no waves, and no sources of transient noise.

Figure 2. Measurement activity in Laboratorium Hidrodinamika Indonesia (LHI)

A. Ambient Noise Measurement
Ambient noise measurements were carried out by recording the noise coming from the environment, with minimum activity to avoid unwanted noise. Recording was performed for 30 seconds, and the data were obtained from three hydrophones.

B. Reference Signal Measurement
The reference signal measurement was carried out by placing the source (underwater speaker) and the receiver (hydrophone) within 1 m of each other. The generated sounds were: a pulse train with 1 second between pulses, a chirp signal with a frequency sweep from 100 Hz to 17,000 Hz of 17 s duration, and sinusoid signals with frequencies of (8000, 9000, 10000, ..., 17000) Hz of 5 s duration. The chirp signal used in the experiment is

x(t) = A cos(2π f(t) t + φ)    (6)

The spectrogram of the chirp signal is shown in Figure 3.

Figure 3. Spectrogram of the chirp signal

C. Measurement at 80 ~ 180 Meters
Measurements started at distances of 80 m, 100 m, 120 m, and 150 m, using the same signals as in the reference signal measurement. The measurement equipment setup is shown in Figure 4. The signals generated on PC-1 were transmitted through an underwater speaker at a point 3 m below the surface. The receiver was positioned within 80 ~ 150 m of the transmitter (underwater speaker), using a vertical array of three hydrophones with 50 cm spacing between hydrophones.

Figure 4. Experimental setup (signal generator, power amplifier and underwater speaker at 3 m depth; hydrophone array with 0.5 m spacing feeding a digital mixer and signal recorder, at a range of 100 m).
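The chirp of (6) is convenient for channel sounding because cross-correlating the received signal with the reference compresses the long sweep into a narrow peak for each propagation path, which is how the correlation results in Section IV are obtained. The sketch below is ours, using a simulated two-path channel in place of the tank recordings (SciPy/NumPy assumed; the 48 kHz sampling rate and 1 s sweep are our assumptions, not the measurement settings):

# Sketch: chirp reference and cross-correlation channel sounding (simulated 2-path channel).
import numpy as np
from scipy.signal import chirp, correlate

fs = 48000                                     # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)                  # 1 s sweep for the sketch (the experiment used 17 s)
ref = chirp(t, f0=100, f1=17000, t1=t[-1], method='linear')

delay = int(0.010 * fs)                        # simulated reflection delayed by 10 ms
rx = np.concatenate([ref, np.zeros(delay)]) + 0.4 * np.concatenate([np.zeros(delay), ref])
rx += 0.05 * np.random.randn(rx.size)          # ambient noise

xc = correlate(rx, ref, mode='valid')          # correlation peaks mark the path arrivals
lag_axis = np.arange(xc.size) / fs
main = np.argmax(np.abs(xc))
print("direct path at", lag_axis[main], "s")
mask = np.abs(lag_axis - lag_axis[main]) > 0.005
echo = np.argmax(np.abs(xc) * mask)            # strongest peak at least 5 ms away from the direct path
print("echo at", lag_axis[echo], "s")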

IV. STATISTICAL ANALYSIS

A. Ambient Noise Characteristics
The frequency characteristics of the ambient noise are not flat. The noise tends to be dominant in the low-frequency region, below 10 kHz, while frequencies above 10 kHz show a flat distribution, as in Figure 5.

Figure 5. Power spectral density (PSD) of the ambient noise (periodogram estimate, 0-20 kHz).

Analysis of the measurement data with maximum likelihood estimation (MLE) shows that the environmental noise signal is close to a Gaussian-distributed random signal with zero mean and variance 0.028. A comparison of the probability density function (pdf) of the measured data with the Gaussian pdf is shown in Figure 6.

Figure 6. Probability density function of the ambient noise (measured data and Gaussian fit).
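For a Gaussian model, the ML estimates are simply the sample mean and the sample variance, so a fit like the one in Figure 6 can be reproduced along the following lines (our sketch; synthetic samples with the reported variance stand in for the hydrophone recording, and the 48 kHz rate is an assumption):

# Sketch: ML fit of a Gaussian model to an ambient-noise recording.
import numpy as np

noise = np.random.normal(0.0, np.sqrt(0.028), size=30 * 48000)  # 30 s at an assumed 48 kHz rate

mu_hat = noise.mean()                    # ML estimate of the mean
var_hat = noise.var()                    # ML estimate of the variance (1/N normalisation)
print(f"mu = {mu_hat:.4f}, sigma^2 = {var_hat:.4f}")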

B. Channel Impulse Response
Using equation (3), the channel impulse response can be estimated by testing with a pulse signal of narrow duration. The testing process was conducted at various transmitter-receiver distances. The power of the signal at the receiver placed 80 m from the transmitter is shown in Figure 7.

Figure 7. Power delay profile at the receiver (relative power in dB versus time in ms).

Another way of probing the channel impulse response uses the chirp signal, which sweeps from 200 Hz to 17,000 Hz. Figure 8 shows the autocorrelation of the transmitted reference signal; the side lobes are contained within the span of -300 ms to +300 ms.

Figure 8. Autocorrelation of the chirp reference signal.

The testing process was continued by transmitting the chirp signal over transmitter-receiver distances of 80 m, 100 m, up to 150 m. The cross-correlation of the signal at the receiver located 150 m away with the reference signal is shown in Figure 9.

Figure 9. Cross-correlation of the chirp signal at the 150 m receiver.

Using a horizontal line at the magnitude value of -20 dB, a time spread from -100 to 100 ms is obtained. From this calculation, the maximum excess delay is indicated to be 10 ms. The measured power delay profile parameters for transmitter-receiver distances from 80 m to 150 m are given in Table 1.

TABLE 1. POWER DELAY PROFILE
Tx-Rx distance (m)   Mean Excess Delay   RMS Delay Spread   Excess Delay Spread
80                   0.9288              0.0072             0.0325
100                  0.9929              0.0066             0.0291
120                  0.9276              0.0052             0.0263
150                  0.2295              0.0066             0.0220

C. Fading Characteristics
The envelope of a 4000 Hz signal at a working distance of 120 m between transmitter and receiver is shown in Figure 10; the signal attenuation fluctuates from 10 dB down to -50 dB.

Figure 10. Envelope fading (attenuation in dB versus sample index).

V. CONCLUSION

This paper has presented an underwater acoustic channel characterization using measurement data from a towing tank measuring 12 x 200 x 6 meters under stationary conditions. The results of the statistical analysis using the maximum likelihood estimation technique suggest that the channel has ambient noise with a Gaussian distribution, that the attenuation due to multipath is close to a Ricean distribution, and that the maximum excess delay is about 20 milliseconds. A follow-up study will be carried out in the coastal environment of Surabaya to obtain more accurate results on the characteristics of the underwater acoustic channel in a tropical marine environment.

ACKNOWLEDGEMENT

This work was supported by a BPPS grant, fiscal year 2011. We thank Laboratorium Hidrodinamika Indonesia (LHI), which gave its support in carrying out the measurements so that the research could proceed smoothly.

REFERENCES
[1] M. Stojanovic, "Underwater Acoustic Communications: Design Considerations on the Physical Layer", Proc. of Wireless on Demand Network Systems and Services (WONS), 2008.
[2] M. Stojanovic and J. Preisig, "Underwater Acoustic Communication Channels: Propagation Models and Statistical Characterization", IEEE Communications Magazine, January 2009.
[3] A. C. Singer, J. K. Nelson, and S. S. Kozat, "Signal Processing for Underwater Acoustic Communications", IEEE Communications Magazine, January 2009.
[4] P. Qarabaqi and M. Stojanovic, "Statistical Modeling of a Shallow Water Acoustic Communication Channel", Proc. of the Underwater Acoustic Measurements Conference, Nafplion, Greece, June 2009.
[5] A. Radosevic, J. G. Proakis, and M. Stojanovic, "Statistical Characterization and Capacity of Shallow Water Acoustic Channels", Proc. of IEEE OCEANS '09, Bremen, Germany, May 2009.
[6] M. Chitre, J. Potter, and Ong Sm Hang, "Underwater Acoustic Channel Characterization for Medium-Range Shallow Water Communications", OCEANS '04.
[7] B. Borowski, "Characterization of a Very Shallow Water Acoustic Communication Channel", Proc. of OCEANS '09, MTS/IEEE Biloxi - Marine Technology for Our Future: Global and Local Challenges.
[8] J. G. Proakis, Digital Communications, 3rd Edition, Prentice Hall, Singapore, 1996.

DEEIT, UGM – IEEE Comp. Soc. Ind. Chapter 4 | 6/8/2012

CITEE 2012

Yogyakarta, 12 July 2012

ISSN: 2088-6578

Radio Network Planning for DVB-T Repeater System Integrated with Early Warning System

Herry Imanta Sitepu 1
Departemen Sistem Komputer, Institut Teknologi Harapan Bangsa
Jln. Dipati Ukur no. 80-84, Bandung 40132, West Java, Indonesia
[email protected]

Dina Angela 2, Tunggul Arief Nugroho 3, Sinung Suakanto 4
Departemen Teknik Elektro, Institut Teknologi Harapan Bangsa
Jln. Dipati Ukur no. 80-84, Bandung 40132, West Java, Indonesia
[email protected], [email protected],