
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 3, 2010

Performance Counter Monitoring Mechanism for Measuring QoS Attributes in SOA


Bahareh Sadat Arab
Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia
[email protected]

Abdul Azim Abd Ghani
Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia
[email protected]

Rodziah Binti Atan
Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia
[email protected]

Abstract—Nowadays, many similar web services with the same functionality have been developed around the world. When multiple web services provide the same functionality, Quality of Service (QoS) becomes an important issue. In this paper, monitoring is used to measure the QoS attributes of web services in a Service-Oriented Architecture (SOA). Windows Performance Counters (WPC) is a Windows component that supports application monitoring. The main focus of this paper is applying performance counters to monitor web services in order to measure their QoS attributes, such as response time and throughput, at the provider side. We introduce the CWPC monitoring approach in detail and describe how employing performance counters facilitates the QoS measurement process. Additionally, the results of monitoring via performance counters for measuring QoS attributes such as response time and throughput are presented; these results can inform management decisions such as adjusting a proper monitoring interval.

Keywords—monitoring, performance counter, web services, Quality of Service, SOA


I. INTRODUCTION

The most important and popular technology implementing the Service-Oriented Architecture (SOA) is web services. Web services are loosely coupled, reusable software components that can be discovered and invoked dynamically. Web services are defined using a set of standards such as SOAP (Simple Object Access Protocol) [11], UDDI (Universal Description, Discovery and Integration) [5], and WSDL (Web Service Description Language) [10]. The recent growth of interest in web services has led to the development of many similar services with the same functionality all over the world. As a result, the QoS of web services has become important in the web service selection and composition area. QoS attributes for web services may include cost, availability, reliability, response time, and throughput [7]. QoS attributes are classified into two groups: deterministic and nondeterministic [3]. The deterministic group includes quality attributes that are known and certain at service invocation time, such as cost, whereas the nondeterministic group contains quality attributes that are uncertain at web service invocation time, such as response time.

Besides, QoS values can be obtained from the service provider, from service consumers' feedback, or via monitoring. This study focuses on measuring nondeterministic QoS attributes such as response time and throughput by monitoring web services. Several studies discuss the need to monitor web services in order to measure their performance. Monitoring of web services can be deployed at the service consumer side (client side), the service provider side (server side), or a third party. Some monitoring approaches are designed for the consumer side, and the monitoring code runs at the consumer. The advantage of consumer-side monitoring is that it is independent of the web service implementation, but it requires the consumer's agreement to run the monitoring code, which has its own cost. Besides, each service consumer observes only a small fraction of the total service requests, so the measured data does not precisely represent the provided quality of the monitored service. Provider-side monitoring is a nearly exact approach for measuring the performance and quality of web services, but it requires access to the actual service implementation, which may not be possible. In some monitoring approaches, a trusted third party is responsible for monitoring the web services. The third party, as an independent monitoring entity, is located between the service consumers and the service provider and intercepts all service invocations. The advantage of third-party monitoring compared to consumer-side or provider-side monitoring is that the obtained results are more reliable, because the monitoring code is secured and cannot be deliberately modified by any of the service consumers or the service provider to alter the measured data or the monitoring result. On the other hand, the disadvantage of these approaches is the possibility of a performance bottleneck: since all communication between the service consumers and the service provider passes through the third party, a bottleneck can occur. In this research, provider-side monitoring is used to measure the nondeterministic QoS of web services; the monitoring code runs at the service provider and monitors all of its service requests.


Different monitoring mechanisms have been proposed by researchers. Recently, Michlmayr et al. [4] used Windows Performance Counters (WPC) to monitor web services and measure their QoS at the provider side. WPC is a Windows component that presents an object framework supporting application monitoring. However, using system counters has some limitations, and mapping the values of predefined counters to QoS values is not always appropriate. In our monitoring approach, Custom Windows Performance Counters (CWPC) are designed and created to monitor web services and overcome the current limitations of the WPC monitoring mechanism. The performance counter framework can be extended with customized counters, which remain in the service provider system like the predefined performance counters. The performance counters run on the service provider and monitor all of its service requests. However, the CWPC monitoring mechanism is limited by its implementation effort and its platform dependence. A complete description of the CWPC monitoring implementation is presented in this paper.

Monitoring approaches have some overhead, and performance counters consume machine resources while monitoring web services. Consequently, a provider-side monitoring approach can affect the performance of the service provider and may reduce the quality of the services it provides. However, there has been little discussion about the effect of monitoring on service providers, which is useful for system and management decisions such as choosing a suitable monitoring interval. In order to determine a suitable monitoring interval for the system, different criteria that can affect the monitoring of QoS are examined in this work.

The remainder of this paper is organized as follows. Section II reviews related work on the monitoring of web services. Section III describes the proposed monitoring mechanism and the steps for measuring the QoS of web services. Section IV explains the experimental results and discussion. Finally, Section V outlines the conclusion and Section VI presents future work.

II. RELATED WORK

In this section, a number of previous works on the monitoring of web services are reviewed. Different techniques are used to monitor web services in order to provide accurate and up-to-date QoS information.

Li et al. [2] design an Automatic Web Services Testing Tool (AWSTT) composed of a recorder, a script generator, a system configurator, a monitor, and a runtime engine for testing and monitoring web services. They describe testing methods from different aspects, including performance testing, load or stress testing, and authorization testing. Their automatic testing tool is based on tracking extended SOAP messages and analyzing the related log files. However, they give no details of the AWSTT implementation and provide no information about the SOAP extension or how the log files are analyzed.

A data mart approach is proposed for monitoring web services and evaluating QoS attributes [1]. In this approach, a data mart is generated from web services log data and can be used to evaluate service provider performance. The web services log is implemented on SOAP intermediaries to avoid changing the service code, although it needs an additional SOAP header for the extended information. However, in the proposed architecture, probing calls differ from real consumer calls: the probes in their Web Services Log Architecture are special web services that frequently call other services, acting as a special customer, in order to measure QoS attributes such as availability, reliability, and response time.

Raimondi et al. [6] present an SLA monitoring system and describe an automatic web service monitoring technique using handlers. They use monitoring to measure quality attributes of web services such as reliability, throughput, and latency. In their approach, handlers process and analyze the SOAP request and response messages of the monitored service.

Thio and Karunasekera [9] describe three approaches for measuring QoS attributes: low-level packet monitoring, a proxy, and SOAP engine library modification. Low-level packet monitoring is based on tracking SOAP packets; however, its implementation is hardware dependent. The second approach is a proxy that mediates communication between the service consumer and the service provider in order to measure performance attributes. The third approach modifies the SOAP engine library to log the information required for QoS measurement. A major drawback of the third approach is that the SOAP library modification must be distributed across different implementations and platforms.

The monitoring approach most closely related to ours is presented in [4]. The authors propose a framework that combines consumer-side QoS monitoring using the QUATSCH tool [8] with provider-side monitoring via Windows Performance Counters. The consumer-side monitoring is based on low-level TCP packet capturing and analysis of several invocations of the monitored service. Similar to our approach, a Windows component, Windows Performance Counters, is used for provider-side monitoring of nondeterministic QoS attributes. In their provider-side monitoring, system performance counters such as Call Duration, Calls Per Second, and Calls Failed Per Second are used to evaluate service quality attributes. Additionally, [4] state that Throughput and Calls Per Second seem to refer to the same QoS attribute, and they use the Calls Per Second system counter to measure throughput. However, using predefined system performance counters has limitations, and mapping system counter values to QoS values is not appropriate in all cases. For instance, Calls Per Second cannot exactly map to throughput, because the counter value reflects the number of calls made to the service per second, while throughput is the number of service requests that the service provider can serve in a second or a given unit of time. In contrast, in our proposed monitoring mechanism, CWPC are defined and used to measure the QoS of web services and to overcome this limitation.


III. CWPC MONITORING MECHANISM

WPC provides a set of counters that developers and administrators can use to track performance measurements. WPC can provide information on how well the operating system or an application, service, or driver is performing. Moreover, the performance counter framework can be extended with customized counters: a custom performance counter can be created once, and it remains in the local system like the system performance counters. In this study, custom Windows performance counters are defined in order to measure nondeterministic QoS attributes. In the CWPC monitoring approach, the first step is defining and creating the counters in Windows. In the next step, the counter values are initialized and set to change and increment during execution of the web service. After setting the counters, the counter values are scheduled to be logged during monitoring, which is necessary for further analysis of the measured data. Finally, the performance log containing the measured data is analyzed to calculate the QoS values of the web service.

A. Defining CWPC Counters
For our approach, CWPC are defined in Windows in order to measure the nondeterministic QoS attributes of a web service at runtime. Two nondeterministic QoS attributes are measured: response time and throughput.
Response Time: the time required to complete a service request; also referred to as the execution duration of the service.
Throughput: the number of service requests that the service provider can serve in a second.
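Expressed as formulas (notation ours, not from the paper): if N service requests complete during an observation window of length T seconds, and request i starts at t_i^start and ends at t_i^end, then

    \text{ResponseTime} = \frac{1}{N} \sum_{i=1}^{N} \left( t_i^{\text{end}} - t_i^{\text{start}} \right), \qquad \text{Throughput} = \frac{N}{T}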

CWPC counters are instantiated with the new counter and category definitions. To define CWPC, a category must be created first, and then one or more counters are specified for inclusion in that category. We defined a new WSQoS category containing a set of counters for monitoring web services. The ResponseTime counter was created with the type AverageTimer32, which measures the average time required to execute a web service; it is calculated as the ratio of the total elapsed time to the number of service requests completed during that time. An AverageTimer32 counter is accompanied by a base counter of type AverageBase, which counts the number of service requests completed during the elapsed time. Finally, the Throughput counter was defined with the type RateOfCountsPerSecond32, which tracks the number of service requests served by the service provider per second.
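A minimal C# sketch of this definition step, using the standard System.Diagnostics performance counter API (the category and counter names follow the paper; the wrapper class and help strings are illustrative):

    using System.Diagnostics;

    public static class WsQosCounters
    {
        public const string Category = "WSQoS";

        // Creates the WSQoS category with its counters once, if it does not exist yet.
        public static void EnsureCreated()
        {
            if (PerformanceCounterCategory.Exists(Category))
                return;

            var counters = new CounterCreationDataCollection
            {
                // Average execution time of a service request.
                new CounterCreationData("ResponseTime",
                    "Average time required to execute the web service",
                    PerformanceCounterType.AverageTimer32),
                // Base counter required by AverageTimer32; it must immediately
                // follow the AverageTimer32 counter in the collection.
                new CounterCreationData("ResponseTimeBase",
                    "Number of service requests completed during the elapsed time",
                    PerformanceCounterType.AverageBase),
                // Requests served per second.
                new CounterCreationData("Throughput",
                    "Number of service requests served per second",
                    PerformanceCounterType.RateOfCountsPerSecond32)
            };

            PerformanceCounterCategory.Create(Category,
                "Counters for monitoring QoS of web services",
                PerformanceCounterCategoryType.SingleInstance, counters);
        }
    }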

B. Adjusting Counters
The benefit of using system performance counters is that their values change automatically, whereas the values of CWPC must be set by an application. Accordingly, the WSQoS counter values are initialized to zero by setting their RawValue property when the counters are defined. In addition, the WSQoS counter values must be adjusted to increment as the monitored web service executes. The setting of the WSQoS counters can be applied in the web service implementation code: increment code can be embedded in the web service method to change and increment the values of the WSQoS counters dynamically. The following figure presents a sample web service method to which the increment code for the different WSQoS counters has been applied.

Figure 1. Pseudo-code of counters increment
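Only the caption of Figure 1 survived extraction. The following C# sketch reconstructs the figure from the description below, with the referenced line numbers marked in comments; the DoWork helper, the method body, and the counter field names are assumptions of ours:

    // Writable counter instances (readOnly = false), assumed created as in the previous sketch.
    static readonly PerformanceCounter responseTime =
        new PerformanceCounter("WSQoS", "ResponseTime", false);
    static readonly PerformanceCounter responseTimeBase =
        new PerformanceCounter("WSQoS", "ResponseTimeBase", false);
    static readonly PerformanceCounter throughput =
        new PerformanceCounter("WSQoS", "Throughput", false);

    [WebMethod]
    public string ServiceMethod()
    {
        try
        {
            long start = Stopwatch.GetTimestamp();   // line 3: start time (QueryPerformanceCounter)
            string result = DoWork();                // line 4: main service operation (hypothetical helper)
            long stop = Stopwatch.GetTimestamp();    // line 5: end time
            responseTime.IncrementBy(stop - start);  // line 6: add elapsed ticks to AverageTimer32
            responseTimeBase.Increment();            // line 7: one more completed request (AverageBase)
            return result;
        }
        finally
        {
            throughput.Increment();                  // line 9: counted whether the call succeeded or failed
        }
    }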

Figure 1 shows that the service method operation executes at line 4, and the time is measured before and after the execution by QueryPerformanceCounter at lines 3 and 5. The response time is the time required to complete a service request, also referred to as the execution duration of the service. Accordingly, the ResponseTime counter is incremented by the elapsed time between the start and the end of the method's main operation; this happens once the operation has completed, at lines 6 and 7. Throughput is the number of service requests the service provider can serve in a second. Therefore, Throughput is incremented at line 9 regardless of whether the method executed successfully or failed.

C. Scheduling WSQoS Logs
The WPC is used by the System Monitor and by Performance Logs and Alerts. WSQoS counters can be tracked at run time and their data displayed graphically by the System Monitor application. Additionally, Performance Logs and Alerts provides logging and alerting for counters. Performance logs record the data of specific counters; their benefit is that counter data can be captured for later analysis.


Counter log files can be built on a regular schedule, and the logging process can be automated; the monitoring of web services is performed according to the defined schedule. The System Monitor utility is the main tool for monitoring system performance. System Monitor can track different WSQoS counters in real time. The utility uses a graphical display to view current or logged data and provides several views, such as graph, report, and histogram styles. The chart views make it possible to graphically follow the monitored counters over short periods of time, for example every second.

Figure 2. Graph view of system monitor

Figure 2 demonstrates a sample graph view of System Monitor. The WSQoS counter values are displayed dynamically as data is collected, and the real-time results are presented as graphs. Tracking the measured counter data in real time is useful, but it is not a practical way to analyze the data. Logging the counter data is more effective, since it enables the data to be analyzed later. To achieve this, performance logs can be employed to record the measured counter data. Performance Logs and Alerts, together with its scheduling, which is an important factor in the monitoring process, is described next. Performance Logs and Alerts enables the WSQoS counter values to be recorded into a log file. The log file contains the measured WSQoS counter values, which are then used to calculate the QoS values of a web service.
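Scheduled logging of this kind can also be reproduced programmatically. A minimal C# sketch, assuming a tab-separated log file name and a 4-second monitoring interval of our choosing:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    class WsQosSampler
    {
        static void Main()
        {
            // Read-only instances of the custom counters defined earlier.
            PerformanceCounter responseTime = new PerformanceCounter("WSQoS", "ResponseTime", true);
            PerformanceCounter throughput = new PerformanceCounter("WSQoS", "Throughput", true);

            const int intervalMs = 4 * 1000;  // monitoring interval, e.g. 4 seconds

            using (StreamWriter log = File.AppendText("wsqos.log"))
            {
                while (true)
                {
                    Thread.Sleep(intervalMs);
                    // NextValue() returns the counter's calculated value for the elapsed interval.
                    float rt = responseTime.NextValue();   // average response time in seconds
                    float tps = throughput.NextValue();    // requests served per second
                    log.WriteLine("{0:u}\t{1:F3}\t{2:F2}", DateTime.Now, rt, tps);
                    log.Flush();
                }
            }
        }
    }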

Figure 3 depicts a sample WSQoS counter log as a text file containing the WSQoS counter values together with the capture date and time. A counter log is a definition of which performance counters to log, how often to log the counter values, where to write the counter log, and how long the counter log should run. Counter log data can be saved as files that are easily viewed in Excel, and it can also be exported to spreadsheets and databases for later analysis and reporting.

Another important aspect of counter logs is their scheduling. The system administrator of the service provider schedules the logging process. The schedule determines which counters should be logged, and it also sets the sampling or monitoring interval as well as the start and stop times of the logging process. Monitoring imposes overhead on the system because it consumes machine resources; since the proposed monitoring approach is provider-side, monitoring can affect the performance of the service provider. As a result, some factors should be considered in the schedule to reduce the impact of monitoring on the system. One factor is minimizing the number of counters being monitored. Another important factor is the monitoring interval: decreasing the monitoring interval for maximum data collection and more frequent sampling consumes more machine resources. Consequently, a short monitoring interval may decrease the service provider's performance, so the monitoring interval should be adjusted appropriately for the system. The effects of different monitoring intervals on the performance of the service provider and on the QoS of the provided web service are investigated in the experiments and results section; examining different monitoring intervals in relation to monitoring overhead helps to discover a suitable monitoring interval for the system.

D. Analyzing Performance Logs
The final step is to calculate the QoS of web services from the performance logs using analysis techniques. For our approach, simple analysis techniques are used: calculating the average response time and throughput. In this study, QoS attributes such as response time and throughput are considered; however, other QoS attributes such as availability can be calculated by further analysis of the counter log. The main goal of our monitoring approach is to measure the nondeterministic QoS attributes of web services, but further analytical processing can be applied to the performance logs, which contain counter data for different time intervals during the measurement period. For instance, the logs can be analyzed to find the periods of maximal service requests and invocations, when the service provider is used most heavily. This result can be applied in the web service selection process to avoid selecting the service provider during its peak time. A sketch of this analysis step follows the log sample below.

Figure 3. WSQoS log file
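A minimal C# analysis sketch, assuming the tab-separated layout written by the sampler above (timestamp, response time in seconds, throughput in transactions per second):

    using System;
    using System.Globalization;
    using System.IO;
    using System.Linq;

    class WsQosLogAnalyzer
    {
        static void Main()
        {
            // Each line: timestamp <TAB> response time (s) <TAB> throughput (tps).
            var samples = File.ReadAllLines("wsqos.log")
                .Select(line => line.Split('\t'))
                .Select(f => new
                {
                    ResponseTime = double.Parse(f[1], CultureInfo.InvariantCulture),
                    Throughput = double.Parse(f[2], CultureInfo.InvariantCulture)
                })
                .ToList();

            Console.WriteLine("Average response time: {0:F3} s", samples.Average(s => s.ResponseTime));
            Console.WriteLine("Average throughput: {0:F2} tps", samples.Average(s => s.Throughput));
        }
    }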


IV. RESULTS AND DISCUSSION

In order to evaluate and analyze the proposed CWPC monitoring mechanism, a model was implemented using ASP.NET 3.5, SOAP SDK 3.0, and UDDI SDK 2.0 beta. The model was hosted on MS Windows Server 2003 on a Pentium 4 at 2.41 GHz with 1 GB RAM. The WSQoS counters were defined as custom Windows performance counters and scheduled to log counter data, and a software wrapping technique was used to set the WSQoS counter values. A local web service with a configurable execution time was built. To simulate a typical real web service, the method execution duration was set to 1 second and the probability of failure of a web service execution to 10%; the Bernoulli distribution was adopted to generate service execution failures, simulating the success and failure of a service within a time interval. Moreover, a client program was run to simulate web service consumers; it uses multithreading to simulate different web service request rates. A sketch of this load-generating client is shown below.

The goal of this experiment is to investigate the effect of CWPC monitoring on nondeterministic QoS values and to discover an appropriate monitoring interval for the system. The monitoring interval should be chosen based on different criteria, which are discussed below. To achieve this goal, the nondeterministic QoS attributes response time and throughput were measured by monitoring the web service. The web service was monitored at the provider side for 2000 service requests. Three request rates, high, intermediate, and low, were considered in the experiment. The request rates are constant: high at about 20 req/sec, intermediate at 10 req/sec, and low at 5 req/sec. The WSQoS counter values were logged until all consumer requests had been processed by the service provider. The following two figures show the average QoS values for the three request rates (5, 10, and 20 req/sec) under different monitoring intervals (4, 8, 12, 16, and 20 seconds). The web service was monitored for each request rate until all 2000 service requests had been processed. Figure 4 presents the average response time in seconds and Figure 5 the average throughput in transactions per second.
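A minimal C# sketch of such a client, assuming a hypothetical ServiceProxy.Invoke() call generated for the monitored service:

    using System;
    using System.Threading;

    class LoadClient
    {
        const int TotalRequests = 2000;
        const int RequestsPerSecond = 10;   // 5, 10 or 20 in the experiment

        static void Main()
        {
            for (int i = 0; i < TotalRequests; i++)
            {
                // Each request gets its own thread so a slow response does not
                // lower the rate at which new requests are issued.
                new Thread(delegate()
                {
                    try { ServiceProxy.Invoke(); }          // hypothetical generated proxy call
                    catch (Exception) { /* the service fails ~10% of calls by design */ }
                }).Start();

                Thread.Sleep(1000 / RequestsPerSecond);     // pace requests to the target rate
            }
        }
    }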

The most important QoS parameter from the service consumer's perspective is response time. We observe from Figure 4 that the average response time for the 20 req/sec rate is significantly longer than for the other two groups (5 and 10 req/sec). As observed, the response time of the web service increases as the number of service requests grows; a high volume of service requests therefore affects the service provider's performance. This result indicates the importance of load balancing, which distributes the workload among equivalent services from different service providers. Besides, Figure 4 shows that the response time decreases for the 10 and 20 req/sec rates when the monitoring interval is extended. CWPC monitoring at the provider side consumes system resources, so a longer monitoring interval consumes fewer machine resources and imposes lower monitoring overhead on the system. As a result, long monitoring intervals affect response time less than short monitoring intervals do. However, the average response time for the low request rate (5 req/sec) does not change across the various monitoring intervals. Additionally, the average response time for the 10 req/sec rate differs little for monitoring intervals of 12 seconds and above and appears to become steady. Thus, monitoring does not impose significant overhead at low request rates or in the normal working state of the service provider.

Figure 4. Average response time (s) for request rates of 5, 10, and 20 req/sec across monitoring intervals of 4 to 20 s

Figure 5. Average throughput (tps) for request rates of 5, 10, and 20 req/sec across monitoring intervals of 4 to 20 s

Throughput is the number of completed requests over a time interval (per second). Figure 5 shows that the throughput values increase significantly for the 20 req/sec rate as the monitoring interval is extended. As mentioned before, monitoring degrades provider performance; for this reason, a longer monitoring interval imposes less overhead on the service provider, and more requests can be served per unit of time. The results indicate the importance of choosing a proper monitoring interval for the monitoring approach. Monitoring intervals should be selected based on parameters such as the service request rate and the average response time of the service. A service is considered to perform well when its throughput is high and its response time is fast. As monitoring imposes overhead on service providers, a longer monitoring interval is preferable for high consumer request rates and for the peak working times of service providers.


However, values measured with minimum monitoring intervals are more precise and accurate. Consequently, a compromise must be found between the performance of service providers and the freshness of the measured data. By way of illustration, at a low request rate such as 5 requests per second and the minimum monitoring interval of 4 seconds, monitored values are recorded every 20 invocations (5 req/sec x 4 s), whereas with a 20-second monitoring interval a value is recorded only after every 100 invocations; accordingly, the measured data is more accurate when the monitoring interval is reduced. At the low request rate (5 req/sec), monitoring has no notable overhead on the service provider, and the values of performance quality attributes such as response time and throughput do not change across the monitoring intervals; the monitoring interval can therefore be reduced to 4 seconds for low service request rates. For the 10 req/sec rate, the average QoS values appear to become steady beyond a 12-second monitoring interval; the monitoring process does not considerably affect the performance of the service provider at 12-second and longer intervals, so a monitoring interval of 12 seconds or more is preferable in this case. For the 20 req/sec rate, the average monitored values change continuously across the monitoring intervals; monitoring with a 20-second or shorter interval has a notable effect on the performance of the service provider, so an interval longer than 20 seconds is suggested in this case. Based on the experimental results, the minimum monitoring interval is preferable during the normal working time of the service provider, when data gathering can be performed frequently at short intervals, whereas a longer monitoring interval is recommended for high service request rates and the provider's peak working time, in order to reduce the overhead of the monitoring process on the system.

V. CONCLUSION

A CWPC monitoring mechanism for measuring the QoS attributes of web services was proposed. The steps of the CWPC monitoring approach are: defining counters to Windows, adjusting the counter values, scheduling performance logs, and finally analyzing the performance logs. Each step of the monitoring was described in detail, and we demonstrated that applying performance counters together with performance logs and alerts facilitates the measurement of QoS attributes. The results of monitoring via performance counters for measuring QoS attributes such as response time and throughput were presented. The findings indicate that setting a proper monitoring interval is an important issue and that criteria such as the service execution time and the average request rate of the service should be considered to reduce the overhead of the monitoring process on the provider system.

VI. FUTURE WORK

In this work, custom performance counters are used to measure QoS attributes such as response time and throughput. However, the counters can be extended to measure other nondeterministic QoS attributes such as reliability and availability, which is one of our future directions. In addition, various parameters can be considered for determining an acceptable monitoring interval, such as the average service request rate and the average response time of the service. Considerably more work will be needed to find the most significant parameters that should be taken into account when selecting an adequate monitoring interval. Moreover, a formula based on these parameters could be defined to suggest a monitoring interval for a system.

REFERENCES
[1] da Cruz, S. M. S., Campos, L. M., Campos, M. L. M., & Pires, P. F., A data mart approach for monitoring Web services usage and evaluating quality of services, Proceedings of the Twenty-Eighth Brazilian Symposium on Databases, 2003.
[2] Li, Y., Li, M., & Yu, J., Web Services Testing, the Methodology, and the Implementation of the Automation-Testing Tool, Lecture Notes in Computer Science, 2004, pp. 940-947.
[3] Liu, Y., Ngu, A. H., & Zeng, L. Z., QoS computation and policing in dynamic web service selection, Proceedings of the 13th International World Wide Web Conference, Alternate Track Papers & Posters, 2004.
[4] Michlmayr, A., Rosenberg, F., Leitner, P., & Dustdar, S., Comprehensive QoS Monitoring of Web Services and Event-Based SLA Violation Detection, International Middleware Conference, USA, 2009.
[5] UDDI.org, UDDI technical white paper, White Paper, 2000.
[6] Raimondi, F., & Emmerich, W., Efficient online monitoring of web-service SLAs, Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of Software Engineering, 2008, pp. 170-180.
[7] Ran, S., A model for web services discovery with QoS, ACM SIGecom Exchanges, vol. 4(1), 2003, pp. 1-10.
[8] Rosenberg, F., Platzer, C., & Dustdar, S., Bootstrapping performance and dependability attributes of web services, Proceedings of the IEEE International Conference on Web Services (ICWS'06), Chicago, USA, 2006.
[9] Thio, N., & Karunasekera, S., Automatic measurement of a QoS metric for Web service recommendation, Proceedings of the Australian Software Engineering Conference, 2005, pp. 202-211.
[10] W3C, Web Services Description Language (WSDL), Version 1.1, W3C Standard, 2001, http://www.w3.org/TR/wsdl
[11] W3C, SOAP Version 1.2 Part 0-2, W3C Working Draft, http://www.w3.org/TR/s


Bahareh Sadat Arab received her B.Sc. in software engineering in 2006 from Azad University, Iran. She is currently a master's student at Universiti Putra Malaysia. Her research interests include web services and QoS in SOA.


Abdul Azim Abd Ghani received his M.S. degree in Computer Science from the University of Miami, Florida, U.S.A. in 1984 and his Ph.D. in Computer Science from the University of Strathclyde, Scotland, U.K. in 1993. His research areas include software engineering, software metrics, and software quality. He is a full-time lecturer in the Department of Information Systems and Dean of the Faculty of Computer Science and Information Technology, Universiti Putra Malaysia. He has published a number of papers in software quality areas.

Rodziah Binti Atan received her M.S. degree in Computer Science in 2001 and her Ph.D. in Software Engineering in 2006 from Universiti Putra Malaysia. Her research areas include software process modeling and software measurement. She is a full-time lecturer in the Department of Information Systems.

