Thursday, October 31, 2019

Business Ethics Cases: Essay Example

Maitland begins by presenting the case made by critics against the corporations, and only then reveals his own convictions on the matter. As stated in the article, companies have been accused of pursuing cheap labor all over the world in order to get away with paying workers less than living wages, take advantage of child labor, sidestep human rights abuse issues, and indirectly support repressive regimes that deny workers the right to join unions and do not enforce minimum labor standards in the workplace. He describes how the campaign against international sweatshops was exposed on television, forcing the publicity-shy retail giants onto the defensive.1 For example, Maitland explains how Charles Kernaghan, who runs the National Labor Committee (NLC), brought attention to the fact that Kathie Lee Gifford's clothing line was being made by 13- and 14-year-olds working 20-hour days in factories in Honduras, and also arranged for teenage workers from Central American sweatshops to testify before Congressional committees about abusive labor practices. Kernaghan went on to deliver a masterstroke during one of these hearings, when one of the workers held up a Liz Claiborne cotton sweater identical to the ones she had sewn since she was a 13-year-old working 12-hour days.2 Maitland notes that this incident had an extremely damaging effect on the companies that held their public images to be sacred.
The media had a field day with the image of the young exploited girl displaying the Claiborne logo and making accusations of oppressive conditions at the factory.3 Consequently, the companies, for whom a sacrosanct public image meant everything, petitioned for peace to protect what they deemed their most valuable asset.4 The companies adopted codes of conduct on human and labor rights in their international operations to ensure appropriate levels of pay and safety standards in the sweatshops they operate.

Tuesday, October 29, 2019

A Research Proposal on the Role of the Chief Executive Officer: Essay Example

Questions

One of the major problems in business is the question of whether companies should be concerned with issues other than profitability. Adam Smith (1863) claimed that the overall good of society will inevitably be achieved because of what he called the invisible hand of the market. However, more contemporary thinkers assert otherwise, as they believe that a number of conditions hinder the invisible hand from working effectively (Mohr and Webb, 2002). The concept of Corporate Social Responsibility (CSR) is deeply rooted in the commitment of organizations to conduct their business in an ethical manner. It is in this respect that organizations are said to contribute to the overall economic development of their country while at the same time improving the quality of life not only of their employees and their families but also of the society to which they belong (Watts and Holme, 1999). One of the central concerns regarding the implementation of CSR by a particular organization is the impact of the company's decisions and actions on society, together with its responsibility for them. This means that when setting organizational goals or projects, organizations would do well to first evaluate their actions and make sure that they are in accordance with the welfare of the greater good (Parsons, 1954). As such, the impact of an organization's decisions on society is vital in CSR. It should be emphasized as well that an organization's duty should span more than the economic and legal aspects and also assume the good of the majority. Archie Carroll argued that an organization's social responsibility includes the interplay of four important factors.
These are economic performance, adherence to the law, ethical responsibility, and good corporate citizenship that improves society's quality of life (Carroll and Buchholtz, 2003). However, despite companies' claims of CSR implementation, a significant body of evidence shows that every year numerous companies are charged with violating environmental laws (Kassinis and Panayiotou, 2006, p. 68).

Problems

The success of a company's CSR and its effect on the company's image has been viewed by a number of studies as directly correlated with the role of the Chief Executive Officer (CEO) (PR News, 2007). Kassinis and Panayiotou (2006) state that the role of the CEO is vital, since CEOs are primarily responsible for the board's decision-management functions and even for the extent of corporate wrongdoing. The CEO's interpretation of the various environmental issues that could affect the firm, and his or her choice of environmental strategies, also have significant implications for the overall image and performance of a company. Studies such as the one conducted by PR News Wire in 2008 claim that to belong to Fortune's World's Most Admired Companies, the CEO's role, together with his or her capacity to create a strategy or to hire experts who can effectively handle CSR concerns, such as a competitive Chief Communications Officer (CCO), is vital. Companies that receive Fortune's annual awards are often evaluated on the basis of their reputation. According to PR News (2007), it is often the CEO who is held accountable for failing to protect the company image whenever a crisis arises. The PR News study revealed that, out of 950 global business executives in 11 countries, 68% attributed unethical behavior to the CEO, and 60% cited environmental violations and product recalls as the CEO's responsibility as well.
The perceptions of various stakeholders, regulators, communities and employees have been viewed by Kassinis and Panayiotou (2006) as critical to the welfare of the firm, as these groups are centrally involved in enforcing the laws and other policies that companies must adhere to.

Figure 1: Relationship Between CSR and Stakeholders (Source: Tokoro, 2007)

The figure above shows the direct relationship of stakeholders to CSR in terms of the restrictions they impose, the resource deals they pass and the overall value creation of the organization.

Gap in Research

Even given the claims about the role of CEOs in dealing with issues of CSR and company reputation, other studies suggest that CSR strategies and policies are instead delegated to the shareholders (Kassinis and Panayiotou, 2006, p. 67). The demands of the shareholders are oftentimes in conflict with the interests of customers, suppliers, governments, unions, competitors, local communities, and the general public (Sims, 2003, p. 40). The table below gives an overview of how each stakeholder group views corporate responsibility.

Table 1: Stakeholders' View of Corporate Responsibility (Source: Sims, 2003, pp. 40-41)

Shareholders: Participation in distribution of profits, additional stock offerings, and assets on liquidation; vote of stock; inspection of company books; transfer of stock; election of the board of directors; and such additional rights as have been established in the contract with the corporation.

Employees: Economic, social, and psychological satisfaction in the place of employment. Freedom from arbitrary and capricious behavior on the part of company officials. Share in fringe benefits, freedom to join a union and participate in collective bargaining, individual freedom in offering up their services through an employment contract. Adequate working conditions.

Customers: Service provided with the product; technical data to use the product; suitable warranties; spare parts to support the product during use; R&D leading to product improvement; facilitation of credit.

Creditors: Legal proportion of interest payments due and return of principal from the investment. Security of pledged assets; relative priority in event of liquidation. Management and owner prerogatives if certain conditions exist within the company (such as default on interest payments).

Suppliers: Continuing source of business; timely consummation of trade credit obligations; professional relationship in contracting for, purchasing, and receiving goods and services.

Unions: Recognition as the negotiating agent for employees. Opportunity to perpetuate the union as a participant in the business organization.

Competitors: Observation of the norms of competitive conduct established by society and the industry. Business statesmanship on the part of peers.

Governments: Taxes (income, property, and so on); adherence to the letter and intent of public policy dealing with the requirements of fair and free competition; discharge of the legal obligations of businesspeople (and business organizations); adherence to antitrust laws.

Local communities: Place of productive and healthful environment in the community. Participation of company officials in community affairs, provision of regular employment, fair play, a reasonable portion of purchases made in the local community, interest in and support of local government, support of cultural and charitable projects.

The general public: Participation in and contribution to society as a whole; creative communications between governmental and business units designed for reciprocal understanding; assumption of a fair proportion of the burden of government and society. Fair price for products and advancement of the state-of-the-art technology that the product line involves.

For instance, consumers expect the company to carry out its business in a responsible manner, while shareholders expect that their investments will be returned. In other instances, customers look forward to a return on what they paid for, while suppliers look for dependable buyers. The government wants companies to follow legislation, while unions seek benefits for their members. Competitors expect companies to do business in a fair manner, and local communities want them to be responsible citizens. Finally, the general public expects organizations to improve the overall quality of human life, while shareholders might view this proposition as utopian (Sims, 2003). The figure below shows the dynamics of stakeholder interactions.

Figure 2: Value Creation Through Dialogue with Stakeholders (Source: Tokoro, 2007)

As such, it could be said that the CEO alone cannot be the sole determining factor in a company's responsiveness to the demands of CSR and in its eventual creation of a strong image. Instead, the question of whether CEOs are only implementing the demands of the company's stakeholders, or merely attending to consumer, supplier, government, community and general public demands, should also be taken into close consideration.

Deficiency

As most research attributes the success or failure of a CSR strategy to the CEO, the roles and influence of other stakeholders in the organization are not often viewed as significant variables worthy of consideration. Only the most recent research significantly examines stakeholder roles in relation to CSR. Moreover, based on the researcher's survey of secondary data, there is hardly any robust literature on the influence of stakeholders on the CEO and, eventually, on the latter's decisions on how to implement the CSR program.
Purpose

The study is vital not only in order to contribute to the existing studies on the role of CEOs in a successful CSR program, but also to further strengthen the claim of a relationship between CSR and a favorable company image. More importantly, subtle factors that might influence CEO decisions, strategies and policies, such as those coming from company stakeholders, will be taken into close consideration and treated as important variables for the research. Although the direct relationship between company stakeholders and CSR has been presented by various researchers, the role of stakeholders in influencing the CEO's CSR decisions is seldom taken into consideration. It is in this respect that the research seeks to contribute significantly to the scholarly studies devoted to analyzing such dynamics.

Research Questions

Main Question: For the purpose of this research, the study seeks to answer: What is the role of the CEO in promoting the Corporate Social Responsibility (CSR) programs of the organization, and how does this relate to building a favorable image?

Subquestions: Specifically, the research seeks to answer:
1. What is the relationship between a successful CSR program and the role of the CEO?
2. What is the relationship between a successful CSR program and a favorable brand image?
3. What is the role of the following in influencing the CSR strategies of a particular organization:
a. Shareholders
b. Consumers
c. Suppliers
d. General Public
4. How do company shareholders, consumers, suppliers and the general public influence the CEO's strategy for implementing the CSR program?

Methodology

Research Tradition: For the purpose of this research, the study will employ both quantitative and qualitative research methods. It is often the case that quantitative research employs a method based on the testing of theories.
It uses the measurement of numbers and statistical analysis to perform its studies. The idea behind quantitative research is to ascertain that a generalized theory, or the prediction of a theory, will be confirmed by the use of numbers. It normally starts with a research question or a hypothesis, in addition to other theories that need to be tested. The quantitative approach includes the use of formal and generally recognized instruments (O'Brien, 1998). In addition, the quantitative tradition focuses on conducting experiments with the underlying expectation that a consensus will be arrived at. This method usually aims at predictable generalization and causal explanation. Quantitative research can create a controlled environment in order to support deductive analysis. The goal of this research tradition is to establish a consensus by reducing data to numerical indications, thereby finally identifying whether certain generalizations are valid or invalid (O'Brien, 1998). In this research method it is very relevant that the researcher maintain his or her independence from the research object; consequently, the research outcome is expected to be value-free (O'Brien, 1998). The quantitative methodology also tests cause and effect using deductive logic. When done correctly, quantitative research should be able to predict and explain the theory in question (O'Brien, 1998). Qualitative research, on the other hand, focuses primarily on words rather than numbers. The main research instrument in this tradition is the involvement of the researcher with the people whom he or she studies (Daymon and Holloway, 2002). In relation to this, the viewpoints of the participants are also taken very much into account.
The qualitative research tradition focuses on small-scale studies in which deep explorations are conducted in order to provide a detailed and holistic description and explanation of a specific subject matter. Rather than focusing on one or two isolated variables, it takes into account the interconnected activities, experiences, beliefs and values of people, hence adopting multiple dimensions for study. This tradition of research is also flexible, in the sense that certain factors can be explored without necessarily adhering to a strict method of data gathering. It also captures processes in which changes in sequences of events, behaviors and transformations among cultures are closely taken into consideration. More importantly, qualitative research is normally carried out in venues within the respondents' natural environment, such as schools, offices and homes. This allows participants to be more at ease and to express their ideas freely (Daymon and Holloway, 2002).

Data Gathering

The data gathering will consist of secondary and primary data collection. Ghauri, Gronhaug and Kristianslund (1995) emphasized the importance of secondary data collection, most especially through desk or library research. Secondary data collection normally includes data that were collected by another researcher or writer. It is often the case that they are lifted from recently published books, journals, magazines, newspapers and trusted websites such as those of private organizations, non-government organizations, government organizations and the like. The review of related literature will provide a scholarly perspective on the subject matter and at the same time make the researcher aware of both previous and contemporary research on the subject.
For the purpose of this research, the author will use scholarly journals and articles, books and magazines specifically focusing on the oil and gas industry and the freight industry in the Middle East, most specifically Turkey. The scholarly literature will be taken primarily from EBSCO Host, JSTOR and Questia Media America, an exclusive online library. For the primary quantitative data collection, the study will conduct surveys among consumers, suppliers and the general public, using questions of ordinal measurement on Likert scales, with General Electric as the focal company. Surveys use questionnaires with the aim of estimating the perceptions of the subjects of the study, and are considered advantageous because they can be used to study a large number of subjects (Ghauri, Gronhaug and Kristianslund, 1995). On the other hand, interviews will be conducted among selected GE shareholders regarding their perception of the role of the CEO and the implementation of the company's CSR.

Data Gathering Methods and their Justification

For the purpose of this research, the researcher will use self-administered questionnaires. Self-administered questionnaires oftentimes offer a higher response rate and are also relatively cost-effective (Ghauri, Gronhaug and Kristianslund, 1995). Foremost among their advantages is that the process of data gathering can be more personal, and the researcher is able to clarify notions that may be unclear in the survey form. However, one distinct disadvantage of this method is the difficulty of administering the survey to multiple respondents at the same time. In addition, self-administered data gathering can be very time-consuming. The research will also conduct interviews in order to collect the qualitative data necessary for the research.
Interviews are very relevant, most especially for obtaining rich data that surveys cannot provide (Ghauri, Gronhaug and Kristianslund, 1995). For the interviews, various stakeholders of the General Electric Corporation will be asked about their perceptions of how GE should employ its CSR, and about the role of the CEO in effectively implementing CSR and protecting the company's image.

Questionnaire Design

The questionnaire for the survey will be a detailed, precise and logical construction of close-ended questions. In addition, the questions will be made in accordance with the research question and the objectives of the research (Oppenheim, 1992). The questions will be formulated using an ordinal scale and will be close-ended in nature. This is relevant so that respondents only have to encircle or check the number corresponding to their responses (Oppenheim, 1992). In addition, close-ended questions are very easy to answer and enable the researcher to create a summated value that can be used for data analysis. The questions used in the interviews will be tailored to directly address concerns that are in accordance with the objectives of the study. The questions for the shareholders will be specifically created in a manner that allows an open flow of information and exchange of ideas. Details of how consumers, suppliers and the general public want the company to act, together with its policies and possible ethical practices, will be included in the survey. In this respect, the questions will be formulated in a closed-ended nature.

Sampling

For the purpose of this research, the researcher will conduct a survey based on simple random sampling (SRS), randomly choosing participants from among consumers, suppliers and the general public.
On the other hand, the research will employ purposive sampling in choosing the General Electric stakeholders who will participate in the study.

Target Population

According to Ghauri, Gronhaug and Kristianslund (1995), research should cater to a target population that has all the necessary information for the research, such as sampling elements, sampling units and area of coverage. For the purpose of this study, the author is trying to identify the roles of consumers, suppliers and the general public. As such, the study will ask 120 respondents to participate in the survey, drawn primarily from consumers and suppliers of General Electric as well as members of the general public concerned with General Electric and its operations.

Reliability and Validity

The study's reliability and validity go hand in hand, as patterns of measurement depend on both (Zikmund, 1994). Reliability primarily concerns the internal consistency and repeatability of the variables within the research, while validity centers on the correctness and appropriateness of the question that one intends to measure (Ghauri, Gronhaug and Kristianslund, 1995). According to Chisnall (1997), validity is generally established through the relationship of the instrument to the content, criterion or construct it attempts to measure. A lack of validity can lead to incorrect conclusions. To make sure that the instruments used are reliable and valid, the researcher will ensure that they are patterned on the objectives of the study, the secondary data, and the feedback from the pilot study that will be conducted.

Analysis of Data

Data gathered from the surveys and interviews, together with secondary data from other studies, will be used for the analysis that will answer the research question. Charts and comparisons of data will be used as analysis tools.
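The simple random sampling step for the 120 survey respondents can be sketched in a few lines. The sampling frame below is hypothetical, since the actual lists of GE consumers, suppliers and members of the general public are not part of this proposal.

```python
import random

# Hypothetical sampling frame; in the actual study this would be the
# combined list of GE consumers, suppliers and members of the general public.
frame = [f"respondent_{i}" for i in range(1, 1001)]

random.seed(42)  # fixed seed so the draw is reproducible for illustration
sample = random.sample(frame, k=120)  # SRS without replacement, n = 120

print(len(sample))       # 120
print(len(set(sample)))  # 120 -- no respondent selected twice
```

Because `random.sample` draws without replacement, every member of the frame has an equal chance of selection and no respondent can appear twice, which is exactly the property SRS requires.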
Statistics will be based on the survey results from the questionnaire made by the researcher. Statistical Product and Service Solutions (SPSS) will also be used to determine the stand of the respondents on each question formulated in the survey (Griego and Morgan, 2000, p. 2).

References

Carroll, A. and Buchholtz, A. K. (2003). Business and Society: Ethics and Stakeholder Management, 5th ed. Mason, OH: South-Western.
Chisnall, P. M. (1997). Marketing Research, 5th ed. Berkshire: McGraw-Hill.
Daymon, C. and Holloway, I. (2002). Qualitative Research Methods in Public Relations and Marketing Communications. London: Routledge.
Ghauri, P., Gronhaug, K. and Kristianslund, I. (1995). Research Methods in Business Studies: A Practical Guide. Great Britain: Prentice Hall.
Griego, O. and Morgan, G. (2000). SPSS for Windows: An Introduction to Use and Interpretation in Research. Mahwah, NJ: Lawrence Erlbaum Associates.
Kassinis, G. and Panayiotou, A. (2006). Perceptions Matter: CEO Perceptions and Firm Environmental Performance. The Journal of Corporate Citizenship, (23), p. 67.
Mohr, L. A. and Webb, D. J. (2001). Do Consumers Expect Companies to Be Socially Responsible? The Impact of Corporate Social Responsibility on Buying Behavior. Journal of Consumer Affairs, 35(1).
O'Brien, G. J. (1998). The Role of Implementation in Connectionist Explanation. Psychology, 9(6), p. 3.
Oppenheim, A. N. (1992). Questionnaire Design, Interviewing and Attitude Measurement. London: Pinter.
Parsons, T. (1954). Essays in Sociological Theory, rev. ed. New York: Free Press.
PR News (2006). Changing Face of CSR: New Trends Redefine Doing Well by Doing Good. PR News, Potomac, 62(42), p. 1.
PR News (2007). Quick Study: CEOs Bear Responsibility; Customer Relations Is Dysfunctional; Social Media Invades. PR News, Potomac, 63(9), p. 1.
PR News Wire (2008). Corporate Communications Officers in World's Most Admired Companies Have Longer Tenures, Fewer Rivals and Report to the CEO; New Study Underscores Critical and Evolving Role of the CCO; Forecasts CCOs Shifting Focus to Reputation, Social Responsibility and Social Media in 2008. Accessed in the PR News Wire database.
Sims, R. (2003). Ethics and Corporate Social Responsibility: Why Giants Fall. Westport, CT: Praeger.
Tokoro, N. (2007). Stakeholders and Corporate Social Responsibility (CSR): A New Perspective on the Structure of Relationships. Asian Business & Management, 6(2), pp. 143-162.
Watts, P. and Holme, R. (1999). Meeting Changing Expectations: Corporate Social Responsibility. Available: http://www.wbcsd.org/publications/csrpub.htm [accessed 5 June 2008].
Woodruff, H. (1995). Services Marketing. London: Pitman Publishing.
Zikmund, G. W. (1994). Exploring Marketing Research. Dryden.

Sunday, October 27, 2019

Metrics and Models in Software Testing

How do we measure the progress of testing? When do we release the software? Why do we devote more time and resources to testing a particular module? What is the reliability of the software at the time of release? Who is responsible for the selection of a poor test suite? How many faults do we expect during testing? How much time and how many resources are required to test a software product? How do we know the effectiveness of a test suite? We may keep on framing such questions without much effort. However, finding answers to such questions is not easy and may require a significant amount of effort. Software testing metrics may help us to measure and quantify many things, which in turn may provide answers to such important questions.

10.1 Software Metrics

"What cannot be measured, cannot be controlled" is a reality in this world. If we want to control something, we should first be able to measure it. Therefore, everything should be measurable. If a thing is not measurable, we should make an effort to make it measurable. The area of measurement is very important in every field, and we have mature and established metrics to quantify various things. However, in software engineering this "area of measurement" is still in its developing stage and may require significant effort to become mature, scientific and effective.

10.1.1 Measure, Measurement and Metrics

These terms are often used interchangeably. However, we should understand the differences among them. Pressman explained this clearly [PRES05]: "A measure provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process. Measurement is the act of determining a measure. The metric is a quantitative measure of the degree to which a product or process possesses a given attribute." For example, a measure is the number of failures experienced during testing. Measurement is the act of recording such failures.
A software metric may be the average number of failures experienced per hour during testing. Fenton [FENT04] defined measurement as: "the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules." The basic issue is that we want to measure every attribute of an entity, and we should have established metrics to do so. However, we are still in the process of developing metrics for many attributes of the various entities used in software engineering. Software metrics can be defined as [GOOD93]: "The continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products." Many things are covered in this definition. Software metrics are related to measures which, in turn, involve numbers for quantification; these numbers are used to produce a better product and improve its related process. We may like to measure quality attributes such as testability, complexity, reliability, maintainability, efficiency, portability, enhanceability and usability for a software product. We may also like to measure the size, effort, development time and resources of a software project.

10.1.2 Applications

Software metrics are applicable in all phases of the software development life cycle. In the software requirements and analysis phase, where the output is the SRS document, we may have to estimate the cost, manpower requirement and development time for the software. The customer may like to know the cost of the software and its development time before signing the contract. As we all know, the SRS document acts as a contract between customer and developer. The readability and effectiveness of the SRS document may help to increase the confidence level of the customer and may provide a better foundation for designing the product.
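The measure/measurement/metric distinction of Section 10.1.1 can be made concrete with a short sketch: the recorded failure counts are the measures, logging them per test session is the measurement, and failures per hour is the resulting metric. The session data below are invented purely for illustration.

```python
# Each tuple is one test session: (failures observed, session length in hours).
# These numbers are illustrative only, not data from any real project.
sessions = [(4, 2.0), (3, 1.5), (1, 2.5), (2, 2.0)]

total_failures = sum(f for f, _ in sessions)      # the measures, aggregated
total_hours = sum(h for _, h in sessions)         # total testing time

failures_per_hour = total_failures / total_hours  # the derived metric
print(failures_per_hour)  # 10 failures / 8.0 hours = 1.25
```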
Some metrics are available for cost and size estimation, such as COCOMO, the Putnam resource allocation model and the function point estimation model. Some metrics are also available for the SRS document, such as the number of mistakes found during verification, change request frequency and readability. In the design phase, we may like to measure the stability of a design, the coupling among modules and the cohesion of a module. We may also like to measure the amount of data input to the software, processed by the software and produced by the software. A count of the amount of data input to, processed in, and output from software is called a data structure metric. Many such metrics are available, such as number of variables, number of operators, number of operands, number of live variables, variable spans and module weakness. Some information flow metrics are also popular, such as FAN-IN and FAN-OUT. Use cases may also be used to design metrics, such as counting actors, counting use cases and counting the number of links. Some metrics may also be designed for websites, such as number of static web pages, number of dynamic web pages, number of internal page links, word count, number of static and dynamic content objects, time taken to search a web page and retrieve the desired information, and similarity of web pages. Software metrics have a number of applications during the implementation phase and after its completion. Halstead software size measures are applicable after coding, such as token count, program length, program volume, program level, difficulty, estimation of time and effort, and language level. Some complexity measures are also popular, such as cyclomatic complexity, knot count and feature count. Software metrics have found a good number of applications during testing. One area is reliability estimation, where popular models are Musa's basic execution time model and the logarithmic Poisson execution time model.
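Musa's basic execution time model reduces to two closed-form expressions, which the sketch below implements. The parameter values (lam0, the initial failure intensity in failures per hour, and nu0, the total number of failures expected over infinite testing) are invented here purely for illustration.

```python
import math

def musa_basic_mu(tau, lam0, nu0):
    """Expected cumulative failures after tau hours of execution
    under Musa's basic execution time model: mu = nu0*(1 - e^(-lam0*tau/nu0))."""
    return nu0 * (1.0 - math.exp(-lam0 * tau / nu0))

def musa_basic_lambda(tau, lam0, nu0):
    """Failure intensity (failures per hour) after tau hours:
    lambda = lam0 * e^(-lam0*tau/nu0)."""
    return lam0 * math.exp(-lam0 * tau / nu0)

# Illustrative parameters: 10 failures/hour initially, 100 failures expected in total.
lam0, nu0 = 10.0, 100.0
for tau in (0, 10, 50):
    print(tau, round(musa_basic_mu(tau, lam0, nu0), 2),
          round(musa_basic_lambda(tau, lam0, nu0), 2))
```

As execution time grows, the failure intensity decays exponentially toward zero while the cumulative failure count approaches nu0, which is the intuitive behavior of a program whose faults are being found and fixed.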
The Jelinski-Moranda model [JELI72] is also used for the calculation of reliability. Source code coverage metrics are available that calculate the percentage of source code covered during testing. Test suite effectiveness may also be measured. Number of failures experienced per unit of time, number of paths, number of independent paths, number of du-paths, percentage of statements covered and percentage of branch conditions covered are also useful software metrics. The maintenance phase may have many metrics, like number of faults reported per year, number of requests for changes per year, percentage of source code modified per year, percentage of obsolete source code per year etc. We may find a number of applications of software metrics in every phase of the software development life cycle. They provide meaningful and timely information which may help us to take corrective actions as and when required. Effective implementation of metrics may improve the quality of software and may help us to deliver the software in time and within budget.

10.2 Categories of Metrics

There are two broad categories of software metrics, namely product metrics and process metrics. Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, efficiency, reliability, portability etc. Process metrics describe the effectiveness and quality of the processes that produce the software product. Examples are the effort required in the process, time to produce the product, effectiveness of defect removal during development, number of defects found during testing and maturity of the process [AGGA08].

10.2.1 Product metrics for testing

These metrics provide information about the testing status of a software product. The data for such metrics are generated during testing and may help us to know the quality of the product.
Some of the basic metrics are given as:

(i) Number of failures experienced in a time interval
(ii) Time interval between failures
(iii) Cumulative failures experienced up to a specified time
(iv) Time of failure
(v) Estimated time for testing
(vi) Actual testing time

With these basic metrics, we may find some additional metrics, as given below:

(i) Average time interval between failures
(ii) Maximum and minimum failures experienced in any time interval
(iii) Average number of failures experienced in time intervals
(iv) Time remaining to complete the testing

We may design similar metrics to find indications about the quality of the product.

10.2.2 Process metrics for testing

These metrics are developed to monitor the progress of testing, the status of design and development of test cases, and the outcome of test cases after execution. Some of the basic process metrics are given below:

(i) Number of test cases designed
(ii) Number of test cases executed
(iii) Number of test cases passed
(iv) Number of test cases failed
(v) Test case execution time
(vi) Total execution time
(vii) Time spent for the development of a test case
(viii) Total time spent for the development of all test cases

On the basis of the above direct measures, we may design the following additional metrics, which convert the base metric data into more useful information:

(i) % of test cases executed
(ii) % of test cases passed
(iii) % of test cases failed
(iv) Total actual execution time / total estimated execution time
(v) Average execution time of a test case

These metrics, although simple, may help us to know the progress of testing and may provide meaningful information to the testers and the project manager. An effective test plan may force us to capture data and convert it into useful metrics for both process and product. This document also guides the organization in future projects and may suggest changes to the existing processes in order to produce a good quality, maintainable software product.
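Converting the base counts into the derived percentages above is straightforward; a minimal sketch (all counts hypothetical, chosen for illustration) might be:

```python
# Derived test process metrics from base counts (hypothetical numbers).
designed = 120                # test cases designed
executed = 100                # test cases executed
passed = 85                   # test cases passed
failed = 15                   # test cases failed
total_exec_minutes = 250.0    # total execution time

pct_executed = 100.0 * executed / designed   # % of test cases executed
pct_passed = 100.0 * passed / executed       # % of test cases passed
pct_failed = 100.0 * failed / executed       # % of test cases failed
avg_exec_time = total_exec_minutes / executed

print(round(pct_executed, 2), pct_passed, pct_failed, avg_exec_time)
# 83.33 85.0 15.0 2.5
```

The same counts, captured over time, become the historical data the chapter recommends keeping for future estimates.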
10.3 Object Oriented Metrics Used in Testing

Object oriented metrics capture many attributes of a software product, and some of them are relevant to testing. Measuring structural design attributes of a software system, such as coupling, cohesion or complexity, is a promising approach towards early quality assessment. Several metrics are available in the literature to capture the quality of design and source code.

10.3.1 Coupling Metrics

Coupling relations increase complexity, reduce encapsulation and potential reuse, and limit understandability and maintainability. The coupling metrics require information about attribute usage and method invocations of other classes. These metrics are given in table 10.1. Higher values of coupling metrics indicate that a class under test will require a larger number of stubs during testing. In addition, each interface will need to be tested thoroughly.

- Coupling Between Objects (CBO): CBO for a class is a count of the number of other classes to which it is coupled. [CHID94]
- Data Abstraction Coupling (DAC): Data abstraction is a technique of creating new data types suited for an application to be programmed. DAC = number of ADTs defined in a class. [LI93]
- Message Passing Coupling (MPC): Counts the number of send statements defined in a class.
- Response For a Class (RFC): The set of methods that can potentially be executed in response to a message received by an object of that class. RFC = |RS|, where RS, the response set of the class, consists of the class's own methods together with the methods they invoke. [CHID94]
- Information flow-based coupling (ICP): The number of methods invoked in a class, weighted by the number of parameters of the methods invoked. [LEE95]
- Information flow-based inheritance coupling (IHICP): Same as ICP, but only counts method invocations of ancestors of classes.
- Information flow-based non-inheritance coupling (NIHICP): Same as ICP, but only counts method invocations of classes not related through inheritance.
- Fan-in: Count of modules (classes) that call a given class, plus the number of global data elements. [BINK98]
- Fan-out: Count of modules (classes) called by a given module, plus the number of global data elements altered by the module (class). [BINK98]

Table 10.1: Coupling Metrics

10.3.3 Inheritance Metrics

Inheritance metrics require information about the ancestors and descendants of a class. They also collect information about methods overridden, inherited and added (i.e. neither inherited nor overridden). These metrics are summarized in table 10.3. If a class has a larger number of children (subclasses), more testing may be required for the methods of that class. The greater the depth of the inheritance tree, the more complex the design, as more methods and classes are involved. Thus, we may test all the inherited methods of a class, and the testing effort will increase accordingly.

- Number of Children (NOC): The number of immediate subclasses of a class in a hierarchy. [CHID94]
- Depth of Inheritance Tree (DIT): The depth of a class within the inheritance hierarchy is the maximum number of steps from the class node to the root of the tree, measured by the number of ancestor classes. [CHID94]
- Number of Parents (NOP): The number of classes that a class directly inherits from (i.e. multiple inheritance). [LORE94]
- Number of Descendants (NOD): The number of subclasses (both directly and indirectly inherited) of a class.
- Number of Ancestors (NOA): The number of superclasses (both directly and indirectly inherited) of a class. [TEGA92]
- Number of Methods Overridden (NMO): When a method in a subclass has the same name and type signature as in its superclass, the method in the superclass is said to be overridden by the method in the subclass. [LORE94]
- Number of Methods Inherited (NMI): The number of methods that a class inherits from its super (ancestor) classes.
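As a sketch, two of the inheritance metrics in table 10.3, DIT and NOC, can be computed for a Python class hierarchy via introspection (the account classes below are invented purely for illustration):

```python
# Hypothetical class hierarchy used only to demonstrate the metrics.
class Account: pass
class SavingsAccount(Account): pass
class CheckingAccount(Account): pass
class JuniorSavingsAccount(SavingsAccount): pass

def dit(cls):
    """Depth of Inheritance Tree: longest path from cls up to the root."""
    parents = [b for b in cls.__bases__ if b is not object]
    return 0 if not parents else 1 + max(dit(b) for b in parents)

def noc(cls):
    """Number of Children: count of immediate subclasses of cls."""
    return len(cls.__subclasses__())

print(dit(JuniorSavingsAccount))  # 2
print(noc(Account))               # 2
```

A static analysis tool would extract the same numbers from source code rather than live objects, but the definitions are identical.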
- Number of Methods Added (NMA): The number of new methods added in a class (neither inherited nor overriding).

Table 10.3: Inheritance Metrics

10.3.4 Size Metrics

Size metrics indicate the length of a class in terms of lines of source code and the methods used in the class. These metrics are given in table 10.4. If a class has a larger number of methods with greater complexity, then more test cases will be required to test that class. When a class with many complex methods is inherited, it will require more rigorous testing. Similarly, a class with many public methods will require thorough testing of those methods, as they may be used by other classes.

- Number of Attributes per Class (NA): Counts the total number of attributes defined in a class.
- Number of Methods per Class (NM): Counts the number of methods defined in a class.
- Weighted Methods per Class (WMC): The sum of the complexities of all methods in a class. Consider a class K1 with methods M1, ..., Mn defined in the class, and let C1, ..., Cn be the complexities of those methods; then WMC = C1 + C2 + ... + Cn. [CHID94]
- Number of Public Methods (PM): Counts the number of public methods defined in a class.
- Number of Non-Public Methods (NPM): Counts the number of private methods defined in a class.
- Lines Of Code (LOC): Counts the lines in the source code.

Table 10.4: Size Metrics

10.4 What Should We Measure During Testing?

We should measure everything (if possible) which we want to control and which may help us to find answers to the questions given in the beginning of this chapter. Test metrics may help us to measure the current performance of any project. The collected data may become historical data for future projects. This data is very important because, in the absence of historical data, all estimates are just guesses. Hence, it is essential to record the key information about current projects.
Test metrics may become an important indicator of the effectiveness and efficiency of a software testing process, and may also identify risky areas that need more testing.

10.4.1 Time

We may measure many things during testing with respect to time, and some of them are given as:

1) Time required to run a test case
2) Total time required to run a test suite
3) Time available for testing
4) Time interval between failures
5) Cumulative failures experienced up to a given time
6) Time of failure
7) Failures experienced in a time interval

A test case requires some time for its execution. A measurement of this time may help to estimate the total time required to execute a test suite. This is the simplest metric and may estimate the testing effort. We may calculate the time available for testing at any point during testing, if we know the total allotted time for testing. Generally, the unit of time is seconds, minutes or hours per test case; total testing time and the time needed to execute a planned test suite may be defined in terms of hours. When we test software, we experience failures. These failures may be recorded in different ways, like time of failure, time interval between failures, cumulative failures experienced up to a given time, and failures experienced in a time interval. Consider table 10.5 and table 10.6, where a time-based failure specification and a failure-based failure specification are given:

Sr. No. of failure occurrence | Failure time (minutes) | Failure interval (minutes)
 1 |  12 | 12
 2 |  26 | 14
 3 |  35 | 09
 4 |  38 | 03
 5 |  50 | 12
 6 |  70 | 20
 7 | 106 | 36
 8 | 125 | 19
 9 | 155 | 30
10 | 200 | 45

Table 10.5: Time based failure specification

Time (minutes) | Cumulative failures | Failures in interval of 20 minutes
 20 | 01 | 01
 40 | 04 | 03
 60 | 05 | 01
 80 | 06 | 01
100 | 06 | 00
120 | 07 | 01
140 | 08 | 01
160 | 09 | 01
180 | 09 | 00
200 | 10 | 01

Table 10.6: Failure based failure specification

These two tables give us an idea about the failure pattern and may help us to define the following:

1) Time taken to experience 'n' failures
2) Number of failures in a particular time interval
3) Total number of failures experienced after a specified time
4) Maximum / minimum number of failures experienced in any regular time interval

10.4.2 Quality of source code

We may assess the quality of the delivered source code after a reasonable time has passed since release, using the following formula:

Quality of source code = WDB / (WDB + WDA)

where
WDB: number of weighted defects found before release
WDA: number of weighted defects found after release

The weight for each defect is defined on the basis of defect severity and removal cost. A severity is assigned to each defect by testers based on how important or serious the defect is. A lower value of this metric indicates less error detection, or detection of less serious errors, before release. We may also calculate the number of defects per executed test case. This may also be used as an indicator of source code quality as the source code progresses through the series of test activities [STEP03].

10.4.3 Source Code Coverage

We may like to execute every statement of a program at least once before its release to the customer. Hence, the percentage of source code coverage may be calculated as:

% source code coverage = (number of statements executed during testing / total number of statements in the source code) × 100

A higher value of this metric gives confidence about the effectiveness of a test suite. We should write additional test cases to cover the uncovered portions of the source code.
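The two tables are simply different views of the same raw failure-time data; deriving both views from the recorded failure times can be sketched as:

```python
# Raw failure times in minutes, as recorded during testing (table 10.5).
failure_times = [12, 26, 35, 38, 50, 70, 106, 125, 155, 200]

# Failure intervals: time between successive failures (table 10.5, col 3).
intervals = [t - p for t, p in zip(failure_times, [0] + failure_times[:-1])]
print(intervals)  # [12, 14, 9, 3, 12, 20, 36, 19, 30, 45]

# Cumulative failures and failures per 20-minute window (table 10.6).
for end in range(20, 201, 20):
    cumulative = sum(1 for t in failure_times if t <= end)
    in_window = sum(1 for t in failure_times if end - 20 < t <= end)
    print(end, cumulative, in_window)
```

Running this reproduces both columns of table 10.6 from the single list of failure times.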
10.4.4 Test Case Defect Density

This metric may help us to know the efficiency and effectiveness of our test cases:

Test case defect density = (number of failed test cases / number of executed test cases) × 100

where
Failed test case: a test case that, when executed, produced an undesired output.
Passed test case: a test case that, when executed, produced a desired output.

A higher value of this metric indicates that the test cases are effective and efficient, because they are able to detect a larger number of defects.

10.4.5 Review Efficiency

Review efficiency is a metric that gives insight into the quality of the review process carried out during verification:

Review efficiency = (number of defects found during review / total number of defects found during review and testing) × 100

The higher the value of this metric, the better the review efficiency.

10.5 Software Quality Attributes Prediction Models

Software quality depends on many attributes, like reliability, maintainability, fault proneness, testability and complexity. A number of models are available for the prediction of one or more such quality attributes. These models are especially beneficial for large-scale systems, where testing experts need to focus their attention and resources on problem areas in the system under development.

10.5.1 Reliability Models

Many reliability models for software are available, where the emphasis is on failures rather than faults. We experience failures during the execution of any program. A fault in the program may lead to failure(s), depending upon the input(s) given to the program when executing it. Hence, the time of failure and the time between failures may help us to find the reliability of software. As we all know, software reliability is the probability of failure-free operation of software in a given time under specified conditions. Generally, we consider calendar time. We may like to know the probability that a given piece of software will not fail in one month's time or one week's time, and so on. However, most of the available models are based on execution time. The execution time is the time for which the computer actually executes the program.
Reliability models based on execution time normally give better results than those based on calendar time. In many cases, we have a mapping table that converts execution time to calendar time for the purpose of reliability studies. To differentiate the two timings, execution time is represented by τ and calendar time by t. Most reliability models are applicable at the system testing level. Whenever software fails, we note the time of failure and also try to locate and correct the fault that caused the failure. During system testing, software may not fail at regular intervals and may not follow a particular pattern. The variation in time between successive failures may be described in terms of the following functions:

μ(τ): average number of failures up to time τ
λ(τ): average number of failures per unit time at time τ, known as the failure intensity function

It is expected that the reliability of a program increases due to fault detection and correction over time, and hence the failure intensity decreases accordingly.

(i) Basic Execution Time Model

This is one of the popular models of software reliability assessment and was developed by J.D. Musa [MUSA79] in 1979. As the name indicates, it is based on execution time (τ). The basic assumption is that failures occur according to a non-homogeneous Poisson process (NHPP) during testing. Many examples may be given of real-world events where Poisson processes are used. A few examples are:

* The number of users using a website in a given period of time.
* The number of persons requesting railway tickets in a given period of time.
* The number of e-mails expected in a given period of time.

The failures during testing represent a non-homogeneous process, and failure intensity decreases as a function of time. J.D. Musa assumed that the decrease in failure intensity as a function of the number of failures observed is constant, which gives the relationship between failure intensity (λ) and the mean failures experienced (μ):

λ(μ) = λ0 (1 − μ / ν0)     (10.1)

where
λ0: initial failure intensity at the start of testing
ν0: total number of failures experienced up to infinite time
μ: number of failures experienced up to a given point in time

If we take the first derivative of equation 10.1, we get the slope of the failure intensity:

dλ/dμ = −λ0 / ν0

The negative sign shows a negative slope, indicating a decreasing trend in failure intensity. This model also assumes a uniform failure pattern, meaning an equal probability of failure due to the various faults. The relationship between execution time (τ) and mean failures experienced (μ) is:

μ(τ) = ν0 [1 − exp(−λ0 τ / ν0)]     (10.2)

The failure intensity as a function of time, useful for calculating the present failure intensity at any given value of execution time, is:

λ(τ) = λ0 exp(−λ0 τ / ν0)     (10.3)

Two additional equations give the additional failures that must be experienced to reach a failure intensity objective (λF), and the additional execution time required to reach that objective:

Δμ = (ν0 / λ0)(λP − λF)
Δτ = (ν0 / λ0) ln(λP / λF)

where
Δμ: expected number of additional failures to be experienced to reach the failure intensity objective
Δτ: additional execution time required to reach the failure intensity objective
λP: present failure intensity
λF: failure intensity objective

Δμ and Δτ are very useful metrics for knowing the additional time and additional failures required to achieve a failure intensity objective.

Example 10.1: A program will experience 100 failures in infinite time. It has now experienced 50 failures. The initial failure intensity is 10 failures/hour. Use the basic execution time model for the following:

(i) Find the present failure intensity.
(ii) Calculate the decrement of failure intensity per failure.
(iii) Determine the failures experienced and failure intensity after 10 and 50 hours of execution.
(iv) Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.

Solution:

(a) The present failure intensity follows from equation 10.1:

λ = λ0 (1 − μ / ν0) = 10 (1 − 50/100) = 5 failures/hour

(b) The decrement of failure intensity per failure is:

dλ/dμ = −λ0 / ν0 = −10/100 = −0.1 per failure

(c) Failures experienced and failure intensity after 10 and 50 hours of execution:

(i) After 10 hours of execution:
μ = 100 [1 − exp(−10 × 10 / 100)] ≈ 63.2 failures
λ = 10 exp(−10 × 10 / 100) ≈ 3.68 failures/hour

(ii) After 50 hours of execution:
μ = 100 [1 − exp(−10 × 50 / 100)] ≈ 99.3 failures
λ = 10 exp(−10 × 50 / 100) ≈ 0.07 failures/hour

(d) Δμ and Δτ with a failure intensity objective of 2 failures/hour:
Δμ = (100/10)(5 − 2) = 30 failures
Δτ = (100/10) ln(5/2) ≈ 9.16 hours

(ii) Logarithmic Poisson Execution Time Model

With a slight modification of the failure intensity function, Musa presented the logarithmic Poisson execution time model. The failure intensity function is given as:

λ(μ) = λ0 exp(−θμ)

where θ is the failure intensity decay parameter, which represents the relative change of failure intensity per failure experienced. The slope of the failure intensity is:

dλ/dμ = −θλ

The expected number of failures for this model is always infinite at infinite time. The relation for mean failures experienced is:

μ(τ) = (1/θ) ln(λ0 θ τ + 1)

The expression for failure intensity with respect to time is:

λ(τ) = λ0 / (λ0 θ τ + 1)

The relationships for the additional number of failures and additional execution time are:

Δμ = (1/θ) ln(λP / λF)
Δτ = (1/θ)(1/λF − 1/λP)

At larger execution times, the logarithmic Poisson model may give larger values of failure intensity than the basic model.

Example 10.2: The initial failure intensity of a program is 10 failures/hour. The program has experienced 50 failures. The failure intensity decay parameter is 0.01/failure. Use the logarithmic Poisson execution time model for the following:

(a) Find the present failure intensity.
(b) Calculate the decrement of failure intensity per failure.
(c) Determine the failures experienced and failure intensity after 10 and 50 hours of execution.
(d) Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.
Solution:

(a) The present failure intensity can be calculated as:

μ = 50 failures, λ0 = 10 failures/hour, θ = 0.01/failure
λ = λ0 exp(−θμ) = 10 exp(−0.01 × 50) ≈ 6.07 failures/hour

(b) The decrement of failure intensity per failure is:

dλ/dμ = −θλ = −0.01 × 6.07 ≈ −0.06 per failure

(c) Failures experienced and failure intensity after 10 and 50 hours of execution:

(i) After 10 hours of execution:
μ = (1/0.01) ln(10 × 0.01 × 10 + 1) = 100 ln 2 ≈ 69.3 failures
λ = 10 / (10 × 0.01 × 10 + 1) = 5 failures/hour

(ii) After 50 hours of execution:
μ = 100 ln(10 × 0.01 × 50 + 1) = 100 ln 6 ≈ 179.2 failures
λ = 10 / 6 ≈ 1.67 failures/hour

(d) Δμ and Δτ with a failure intensity objective of 2 failures/hour:
Δμ = 100 ln(6.07/2) ≈ 111 failures
Δτ = 100 (1/2 − 1/6.07) ≈ 33.5 hours

(iii) The Jelinski-Moranda Model

The Jelinski-Moranda model [JELI72] is the earliest and simplest software reliability model. It proposes a failure intensity function of the form:

λ(ti) = φ [N − (i − 1)]

where
φ: constant of proportionality
N: total number of errors present
i: number of errors found by time interval ti

This model assumes that all failures have the same failure rate. It means that the failure rate is a step function, and there will be an improvement in reliability after fixing an error; hence, every failure contributes equally to the overall reliability. Here, failure intensity is directly proportional to the number of errors remaining in the software. Once we know the value of the failure intensity function using any reliability model, we may calculate reliability using the equation:

R(t) = exp(−λt)

where λ is the failure intensity and t is the operating time. The lower the failure intensity, the higher the reliability, and vice versa.

Example 10.3: A program may experience 200 failures in infinite time of testing. It has experienced 100 failures. The constant of proportionality is 0.02. Use the Jelinski-Moranda model to calculate the failure intensity after the experience of 150 failures.

Solution:
Total expected number of failures: N = 200
Failures experienced: i = 100
Constant of proportionality: φ = 0.02

λ(ti) = φ [N − (i − 1)] = 0.02 (200 − 99) = 2.02 failures/hour

After 150 failures:
λ = 0.02 (200 − 149) = 1.02 failures/hour

Failure intensity decreases with every additional failure experienced.
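The three failure-intensity functions above take only a few lines to implement; the sketch below reproduces the key numbers of examples 10.1 through 10.3 (function names are our own, not from the source):

```python
import math

def basic_intensity(lam0, nu0, mu):
    """Basic execution time model: lambda(mu) = lam0 * (1 - mu/nu0)."""
    return lam0 * (1.0 - mu / nu0)

def basic_mu(lam0, nu0, tau):
    """Mean failures after execution time tau (basic model)."""
    return nu0 * (1.0 - math.exp(-lam0 * tau / nu0))

def log_poisson_intensity(lam0, theta, mu):
    """Logarithmic Poisson model: lambda(mu) = lam0 * exp(-theta*mu)."""
    return lam0 * math.exp(-theta * mu)

def jelinski_moranda(phi, n_total, i):
    """Jelinski-Moranda model: lambda = phi * (N - (i - 1))."""
    return phi * (n_total - (i - 1))

# Example 10.1: lam0=10, nu0=100, mu=50 -> 5 failures/hour
print(basic_intensity(10, 100, 50))  # 5.0
# Example 10.2: lam0=10, theta=0.01, mu=50 -> about 6.07 failures/hour
print(log_poisson_intensity(10, 0.01, 50))
# Example 10.3: phi=0.02, N=200 -> 2.02 at i=100, 1.02 at i=150
print(jelinski_moranda(0.02, 200, 100))
print(jelinski_moranda(0.02, 200, 150))
```

Having the models as functions makes it easy to tabulate μ and λ over a range of execution times when planning how much further testing is needed.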
10.5.2 An Example of a Fault Prediction Model in Practice

It is clear that software metrics can be used to capture the quality of object oriented design and code. These metrics provide ways to evaluate the quality of software, and their use in earlier phases of software development can help organizations assess a large software system quickly, at low cost. To help plan and execute testing by focusing resources on the fault-prone parts of the design and code, a model that predicts faulty classes should be used. The fault prediction model can also be used to identify classes that are prone to have severe faults. One can use such a model with respect to high-severity faults to focus testing on those parts of the system that are likely to cause serious failures. In this section, we describe models used to find the relationship between object oriented metrics and fault proneness, and how such models can be of great help in planning and executing testing activities [MALH09, SING10]. In order to perform the analysis, we used the public domain KC1 NASA data set [NASA04]. The data set is available at www.mdp.ivv.nasa.gov. The 145

Friday, October 25, 2019

The Political Position of the Parliamentarians Essay example

In order to ascertain the ways in which the New Model Army (army) influenced the political position of the parliamentarians, this assignment will provide a brief summary describing the establishment of the army. In addition, as the Wars of the Three Kingdoms concern events in England (with Wales), Scotland and Ireland, it is necessary to consider the army's significance in all three kingdoms. The events surrounding the immediate aftermath of the first civil war, the execution of Charles I, and the role of the Major-Generals will be explored, and primary sources will be used in order to support key points regarding the army's impact. Anne Laurence and Rachel C. Gibbons (2007) state that the army was created in 1645 and combined various existing units. The formation of the army was a direct consequence of the Self-Denying Ordinance. The function of the army was to provide Parliament with a more professional and effective force. With the exclusion of Oliver Cromwell (1599-1658), there were no peers or MPs within the army, which meant that promotion was achieved by those who deserved it. The national army did not have the same obligations that featured in the previous provincial forces, and had the first claim on funds from Parliament. The Solemn League and Covenant (1643), a settlement from the alliance between Parliament and Scotland, created friction amongst parliamentarians, and subsequently a division. The two factions that emerged were the Presbyterians and the Independents. Laurence and Gibbons suggest that the Presbyterians mainly approved of the alliance with Scotland, and the Independents opposed it. Unorthodox religious forms surfaced and were supported by the army. The lack of censorship, and the ... . However, there are limitations with it being a regional representation, and therefore it may not have been indicative on a national level.
In summary, it can be seen that the army directed the political position of parliamentarians in various ways, and on numerous occasions. They pushed forward strategies, imposed the will of authority, and were highly involved in the trial of the king. Charles I's execution meant that the kingdom became a Commonwealth. Cromwell's ventures against Spain demonstrate activity within foreign policy, which highlights that the impact of the army was not confined to the three kingdoms. It could be argued that Parliament could not control its own army. Before his death, Cromwell nominated his son Richard as successor, who was militarily inexperienced. The monarchy was restored in 1660, when Charles II accepted the Presbyterian settlement.

Thursday, October 24, 2019

Night by Elie Wiesel

To suffer, as defined in the dictionary, means to undergo or feel pain or great distress. Another way to say it is to sustain injury, disadvantage, or loss. And yet another way to define suffering is to say to endure or be afflicted with something, temporarily or chronically. If you were to ask Elie Wiesel what his definition of suffering was, he would have a lot to say, and what he told you would be more horrible than your wildest dreams. It is hard to relate to something of the magnitude of Elie's suffering without actually being there, but after reading his book I have a whole new understanding of and sympathy for the Holocaust victims. Elie's story took place while he was a very young boy, approximately 14. His friend Moshe, the town beggar, had been helping somewhat with his studies until all the foreigners were forced to leave the town. Sneaking back in several weeks later, Moshe told of the things he had witnessed: gruesome accounts of what the Nazis were doing to innocent children. His stories were paid little attention, but soon the townspeople were being forced to leave and migrate towards ghettos. From there it was just waiting until they were moved by train to the concentration camps. Once off the train, Elie and his father were separated from Elie's mother and sister; little did he know that he would never see them again. Through bribery and friendships along the way, he managed to stay close to his aging father. Little respect and even less food were given to the captives while they performed labor-intensive tasks in the quarries. During the day work was performed, and if anyone was caught doing anything illegal they were murdered in front of everybody, to set an example of what would happen if an escape was tried. Throughout Elie's horrific ordeal, he would always comment on the night. This was fitting, being the name of the book, but also because that is the time most of us do our reflection.
It is time spent alone, and it gives a chance to sort out your thoughts and be one with yourself. Nighttime was probably when reality set in. Elie would often compare himself and the other victims to the trials that Job went through. If you remember, the book of Job did not explain the mystery of suffering but explored the idea of faith in the midst of suffering. It started out as a discussion between Satan and God on the loyalty of his servants. Satan proposed that if he were to take away all of Job's values in life, Job would indeed curse the name of God. God agreed to let Job be tested, but his life could not be taken from him. So Satan did take away everything, including his family, his house, and all of his livestock. Then, to top it off, he afflicted Job with boils and sores all over his body. Job had no idea why all of this was being done to him, but his friends seemed to think that it was because he had done something wrong and God was punishing him for it. Elie felt the same way, but at the end of Job's story God restores Job's life to the state from which it came. Elie was not as lucky. Elie's health was deteriorating, but his old father felt it worse. They were both malnourished, but at Elie's young age he could hold out a little longer. On January 29, when Elie awoke, his father was gone. His father's lack of health and old age were his downfall. The death of his father made Elie a stronger person, with only his own well-being on his mind. He no longer had to worry whether his father was keeping up with the work or had enough food. He was living for himself. This newly focused energy is what kept Elie from dying himself. Not too long after his father's death, the Allies moved in, and Elie and the few remaining prisoners were liberated. This was a time of joy for some, but also a time of sadness in remembrance of all who had gone before them. Before reading this book I had a somewhat skeptical view of what exactly had taken place during the war.
While reading this book I believed this man's testimony 100%. It was beyond my comprehension how something like this could have, and did, take place. The only thought that I had at the completion of this book was: what about the other victims (non-Jews)? I guess because this was only one man's story, and Jews were the only people he saw, that is what he wrote about. This book really makes you think about all the freedoms that I (we) take for granted every day. I have learned to view the Holocaust in an all-new perspective.

Tuesday, October 22, 2019

Online Shopping

Computers have come a long way since the first one was invented in the early 1900's. We currently live in a society where people can do almost anything on the Internet. You can plan your day ahead, check your horoscope, and look up anything that you can think of. The oncoming trend on the Internet now is online shopping. With online shopping you can buy books, clothes, CDs, and even a car. There are many benefits to online shopping and many downfalls. There are many stores to shop online at. Among the many are Gap, Amazon Books, Music Boulevard, American Eagle Outfitters, and J.Crew. To shop online, all you have to do is follow the instructions that are given on the web page. It is very simple. Some benefits of online shopping are that you sometimes get items at a cheaper price, and you can browse everything a store has, not just what is at your local store; you can see back stock and everything. In addition, the most important benefit of all, in my opinion, is that you can shop without leaving your house. You can also shop anytime, day or night, so if you have a busy schedule you don't have to fit in time to go to the mall; you can shop on your own time. As good as this all sounds, there are a few downfalls. One of them is that you have to pay for shipping and handling, which can cost a pretty penny, especially when you're only buying a CD and you pay $3.00 or $4.00 for this service. Another downfall is that you have to use your credit card, which, I'm sorry to say, is not fully protected. A lot of companies say that their web sites are protected, but there are always...