SAP
Contents

Articles
Customer relationship management
Enterprise resource planning
Product lifecycle management
Supplier relationship management
Supply chain management
Manufacturing
List of ERP software packages
ABAP
SAP AG
List of SAP products
SAP Knowledge Warehouse
SAP HANA
SAP Business One
ERP system selection methodology
SAP ERP
SAP R/3
SAP for Retail
SAP IS-U
SAP Logon Ticket
SAP NetWeaver
SAP NetWeaver Application Server
SAP NetWeaver Business Intelligence
SAP NetWeaver Master Data Management
SAP NetWeaver Portal
SAP Business ByDesign
SAP Advanced Planner and Optimizer
Cloud computing
Software as a service
SOAP
Project management
Project planning
Aggregate project plan
Activity diagram
Critical path method
Program Evaluation and Review Technique
Beta distribution
Float (project management)
Project management software
Comparison of project-management software
Project accounting
Comparison of accounting software
Project manager
Project portfolio management
Event planning
Event scheduling
Brainstorming
Business intelligence
Mind map
List of Unified Modeling Language tools
Study software
List of concept- and mind-mapping software

References
Article Sources and Contributors
Image Sources, Licenses and Contributors
Article Licenses
License
Customer relationship management
Customer relationship management (CRM) is a model for managing a company's interactions with current and future customers. It involves using technology to organize, automate, and synchronize sales, marketing, customer service, and technical support.[1]
Types/variations

Sales force automation
Sales force automation (SFA) uses software to streamline the sales process. The core of SFA is a contact management system for tracking and recording every stage in the sales process for each prospective client, from initial contact to final disposition. Many SFA applications also include insights into opportunities, territories, sales forecasts and workflow automation.[citation needed]

Marketing
CRM systems for marketing track and measure campaigns over multiple channels, such as email, search, social media, telephone and direct mail. These systems track clicks, responses, leads and deals.

Customer service and support
CRMs can be used to create, assign and manage requests made by customers, such as call center software that helps direct customers to agents.[2] CRM software can also be used to identify and reward loyal customers over a period of time.

Appointments
Appointment CRMs automatically offer suitable appointment times to customers via e-mail or the web, which are then synchronized with the representative or agent's calendar.[citation needed]

Small business
For small businesses a CRM may simply consist of a contact manager system that integrates emails, documents, jobs, faxes, and scheduling for individual accounts.[citation needed] CRMs available for specific professional markets (legal, finance) are frequently touted for their event management and relationship tracking rather than their financial return on investment (ROI).

Social media
Social media has become a modern channel for building customer relationships. Some CRMs coordinate with social media sites like Twitter, LinkedIn, Facebook and Google Plus to track and communicate with customers who share opinions and experiences about a company, its products and its services.[3] Once trends have been identified through social media, a business can make more accurate decisions about which products to supply.

Non-profit and membership-based
Systems for non-profit and membership-based organizations help track constituents, fund-raising, demographics, membership levels, membership directories, volunteering and communications with individuals.[citation needed]
Adoption Issues

In 2003, a Gartner report estimated that more than $1 billion had been spent on software that was not being used. According to CSO Insights, less than 40 percent of 1,275 participating companies had end-user adoption rates above 90 percent.[4] Many corporations only use CRM systems on a partial or fragmented basis.[5][citation needed] In a 2007 survey from the UK, four-fifths of senior executives reported that their biggest challenge is getting their staff to use the systems they had installed, and 43 percent of respondents said they use less than half the functionality of their existing system.[6][citation needed]
Market Leaders

The CRM market grew by 12.5 percent in 2008, from revenue of $8.13 billion in 2007 to $9.15 billion in 2008.[7] The following table lists the top vendors in 2006–2008 (figures in millions of US dollars) published in Gartner studies.[8][9]

Vendor           2008 Revenue   2008 Share (%)   2007 Revenue   2007 Share (%)   2006 Revenue   2006 Share (%)
SAP AG           2,055          22.5 (−2.8)      2,050.8        25.3             1,681.7        25.6
Oracle           1,475          16.1             1,319.8        16.3             1,016.8        15.5
Salesforce.com   965            10.6             676.5          8.3              451.7          6.9
Microsoft CRM    581            6.4              332.1          4.1              176.1          2.7
Amdocs           451            4.9              421.0          5.2              365.9          5.6
Others           3,620          39.6             3,289.1        40.6             2,881.6        43.8
Total            9,147          100              8,089.3        100              6,573.8        100
Trends

Many CRM vendors offer subscription-based web tools (cloud computing) and software as a service (SaaS). Some CRM systems are equipped with mobile capabilities, making information accessible to remote sales staff.[citation needed] Salesforce.com was the first company to provide enterprise applications through a web browser, and has maintained its leadership position.[10][11] Traditional providers have recently moved into the cloud-based market via acquisitions of smaller providers: Oracle purchased RightNow in October 2011[12] and SAP acquired SuccessFactors in December 2011.[13]

The era of the "social customer"[14] refers to the use of social media (Twitter, Facebook, LinkedIn, Google Plus, Yelp, customer reviews on Amazon, etc.) by customers. CRM philosophy and strategy have shifted to encompass social networks and user communities.

Another related development is vendor relationship management (VRM), which provides tools and services for customers to independently manage their relationships with vendors. VRM development has grown out of efforts by ProjectVRM at Harvard's Berkman Center for Internet & Society and Identity Commons' Internet Identity Workshops, as well as by a growing number of startups and established companies. VRM was the subject of a cover story in the May 2010 issue of CRM Magazine.[15]

In 2001 Doug Laney developed the concept and coined the term 'extended relationship management' (XRM).[16] Laney defines XRM as extending CRM disciplines to secondary allies such as government, press, and industry consortia. CRM futurist Dennison DeGregor describes a shift from 'push CRM' toward a 'customer transparency' (CT) model, due to the increased proliferation of channels, devices, and social media.[17]
Notes

[1] Shaw, Robert, Computer Aided Marketing & Selling (1991). Butterworth Heinemann. ISBN 978-0-7506-1707-9
[2] SAP Insider (15 November 2007). "Still Struggling to Reduce Call Center Costs Without Losing Customers?" (http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e044e180-8375-2a10-a2b2-b5709ea68ccb)
[3] DestinationCRM.com (2009). "Who Owns the Social Customer?" (http://www.destinationcrm.com/Articles/Editorial/Magazine-Features/Who-Owns-the-Social-Customer-54028.aspx)
[4] Jim Dickie, CSO Insights (2006). "Demystifying CRM Adoption Rates" (http://www.destinationcrm.com/Articles/Columns-Departments/Reality-Check/Demystifying-CRM-Adoption-Rates-42496.aspx)
[5] Joachim, David. "CRM tools improve access, usability" (cover story). B to B 87, no. 3 (March 11, 2002): 1
[6] David Sims, TMC.net (2007). "CRM Adoption 'Biggest Problem' in 83 Percent of Cases" (http://blog.tmcnet.com/telecom-crm/2007/11/30/crm-adoption-biggest-problem-in-83-percent-of-cases-wigan-gets-crm-tre.asp)
[7] DestinationCRM.com (2009). "CRM Market Grows for Fifth Straight Year" (http://www.destinationcrm.com/Articles/CRM-News/Daily-News/CRM-Market-Grows-for-Fifth-Straight-Year-55275.aspx)
[10] "A history of cloud computing" (http://www.computerweekly.com/feature/A-history-of-cloud-computing). Computer Weekly: March 2009
[11] "Put Cloud CRM to Work" (http://www.pcworld.com/businesscenter/article/193463/put_cloud_crm_to_work.html). PC World: April 2010
[12] "Oracle Buys Cloud-based Customer Service Company RightNow For $1.5 Billion" (http://techcrunch.com/2011/10/24/oracle-buys-cloud-based-customer-service-company-rightnow-for-1-5-billion/). TechCrunch: October 24, 2011
[13] "SAP Challenges Oracle With $3.4 Billion SuccessFactors Purchase" (http://www.businessweek.com/news/2011-12-07/sap-challenges-oracle-with-3-4-billion-successfactors-purchase.html). Bloomberg Businessweek: December 7, 2011
[15] Destinationcrm.com (http://www.destinationcrm.com/Issue/1776-May-2010.htm). CRM Magazine: May 2010
[16] "The Great Enterprise Balancing Act: Extended Relationship Management (XRM)" (http://blogs.gartner.com/doug-laney/files/2012/02/ad1074-The-Great-Enterprise-Balancing-Act-Extended-Relationship-Management-XRM.pdf). Doug Laney, META Group publication, December 10, 2001
Enterprise resource planning

Enterprise resource planning (ERP) systems integrate internal and external management of information across an entire organization—embracing finance/accounting, manufacturing, sales and service, customer relationship management, and so on. ERP systems automate this activity with an integrated software application. ERP facilitates information flow between all business functions inside the organization, and manages connections to outside stakeholders.[1]

Enterprise system software is a multi-billion dollar industry that produces components supporting a variety of business functions. IT investments have become the largest category of capital expenditure in United States-based businesses over the past decade. Enterprise systems are complex software packages that offer the potential of integrating data and processes across functions in an enterprise; ERP systems are the main example. Organizations consider the ERP system their backbone and a vital organizational tool, because it integrates varied organizational systems and enables flawless transactions and production. However, an ERP system is radically different from traditional systems development.[2] ERP systems can run on a variety of computer hardware and network configurations, typically employing a database as a repository for information.[3]
History

Origin of "ERP"
In 1990 Gartner Group first employed the acronym ERP[4] as an extension of material requirements planning (MRP), later manufacturing resource planning,[5][6] and computer-integrated manufacturing. Without supplanting these terms, ERP came to represent a larger whole, reflecting the evolution of application integration beyond manufacturing.[7] Not all ERP packages were developed from a manufacturing core; vendors variously began with accounting, maintenance, and human resources. By the mid-1990s ERP systems addressed all core functions of an enterprise. Beyond corporations, governments and non-profit organizations also began to use ERP systems.[8]
Expansion

ERP systems experienced rapid growth in the 1990s because the year 2000 problem and the introduction of the euro disrupted legacy systems. Many companies took this opportunity to replace such systems with ERP.[9]

ERP systems initially focused on automating back office functions that did not directly affect customers and the general public. Front office functions such as customer relationship management (CRM), which deal directly with customers, e-business systems such as e-commerce, e-government, e-telecom, and e-finance, and supplier relationship management (SRM) became integrated later, when the Internet simplified communicating with external parties.[citation needed]

"ERP II" was coined in the early 2000s. It describes web-based software that provides employees and partners (such as suppliers and customers) with real-time access to ERP systems. The ERP II role expands traditional ERP's resource optimization and transaction processing: rather than just managing buying, selling, and so on, ERP II leverages information in the resources under its management to help the enterprise collaborate with other enterprises.[10] ERP II is more flexible than first-generation ERP. Rather than confining ERP system capabilities within the organization, it goes beyond the corporate walls to interact with other systems. Enterprise application suite is an alternate name for such systems.
Two-tier enterprise resource planning

Two-tier ERP software and hardware lets companies run the equivalent of two ERP systems at once: one at the corporate level and one at the division or subsidiary level. For example, a manufacturing company may use an ERP system to manage across the organization while using independent global or regional distribution, production, or sales centers and service providers to support the main company's customers. Each independent center or subsidiary may have its own business model, workflows, and business processes.

Given the realities of globalization, enterprises continuously evaluate how to optimize their regional, divisional, and product or manufacturing strategies to support strategic goals and reduce time-to-market while increasing profitability and delivering value.[11] With two-tier ERP, the regional distribution, production, or sales centers and service providers continue operating under their own business model, separate from the main company, using their own ERP systems. Since these smaller companies' processes and workflows are not tied to the main company's processes and workflows, they can respond to local business requirements in multiple locations.[12]

Factors that affect adoption of two-tier ERP systems include the globalization of manufacturing or the economics of sourcing in emerging economies, the potential for quicker and less costly ERP implementations at subsidiaries based on selecting a software product better suited to smaller companies, and any extra effort required where data must pass between the two ERP systems.[13] Two-tier ERP strategies give enterprises agility in responding to market demands and in aligning IT systems at a corporate level, while inevitably resulting in more systems compared to one ERP system used throughout the entire organization.[14]
Integration

Organizations perceive ERP as a vital tool for organizational competition, as it integrates dispersed organizational systems and enables flawless transactions and production. ERP vendors traditionally offered a single ERP system, and ERP systems suffered from limitations in coping with integration challenges and changing requirements. To achieve higher levels of integration and to improve customer relationships and the supply chain's overall efficiency, companies preferred to implement an ERP suite from one vendor that incorporated stand-alone point solutions (which once filled feature gaps in older ERP releases). Though most companies still follow this single-source approach, a significant number of firms employ a "best of breed" ERP strategy to strive for a competitive advantage.

ERP vendors began to acquire products, or develop new features comparable to or better than many of the top applications. This helped companies, via a single source, maintain or create a competitive advantage based on unique business processes, rather than adopt the same business processes as their competitors. In the following years, integration was a leading investment due to feature gaps and the need to extend and integrate the ERP system with other enterprises or "best of breed" applications. Integration was ranked as one of the leading investments for 2003: well over 80% of U.S. companies budgeted for some type of integration in 2002, and roughly one-third of U.S. companies defined application integration as one of their top three IT investments in 2003. ERP license revenue remained steady as companies continued their efforts to broadly deploy core applications, and then add complementary features in later phases.

Developers now take greater effort to integrate mobile devices with the ERP system, and ERP vendors are extending ERP to these devices along with other business applications. Technical stakes of modern ERP concern integration—hardware, applications, networking, supply chains. ERP now covers more functions and roles—including decision making, stakeholders' relationships, standardization, transparency, and globalization.[15]
Characteristics

ERP (enterprise resource planning) systems typically include the following characteristics:
• An integrated system that operates in real time (or next to real time), without relying on periodic updates[citation needed]
• A common database, which supports all applications
• A consistent look and feel throughout each module
• Installation of the system without elaborate application/data integration by the Information Technology (IT) department, provided the implementation is not done in small steps[16]
Functional areas

An ERP system covers the following common functional areas. In many ERP systems these are called and grouped together as ERP modules:

Financial accounting: general ledger, fixed assets, payables, receivables, cash management, financial consolidation
Management accounting: budgeting, costing, cost management, activity-based costing
Human resources: recruiting, training, payroll, benefits, 401K, diversity management, retirement, separation
Manufacturing: engineering, bill of materials, work orders, scheduling, capacity, workflow management, quality control, manufacturing process, manufacturing projects, manufacturing flow, product life cycle management
Supply chain management: supply chain planning, supplier scheduling, order to cash, purchasing, inventory, product configurator, claim processing
Project management: project planning, resource planning, project costing, work breakdown structure, billing, time and expense, performance units, activity management
Customer relationship management: sales and marketing, commissions, service, customer contact, call center support (CRM systems are not always considered part of ERP systems but rather business support systems (BSS), specifically in telecom scenarios)
Data services: various "self-service" interfaces for customers, suppliers and/or employees
Components

• Transactional database
• Management portal/dashboard
• Business intelligence system
• Customizable reporting
• Product analysis
• External access via technology such as web services
• Search
• Document management
• Messaging/chat/wiki
• Workflow management
Best practices

Most ERP systems incorporate best practices. This means the software reflects the vendor's interpretation of the most effective way to perform each business process. Systems vary in how conveniently the customer can modify these practices.[17] Companies that implemented industry best practices reduced time-consuming project tasks such as configuration, documentation, testing, and training. In addition, best practices reduced risk by 71% when compared to other software implementations.[18]

The use of best practices eases compliance with requirements such as IFRS, Sarbanes-Oxley, or Basel II. They can also help companies comply with de facto industry standards, such as electronic funds transfer. This is because the procedure can be readily codified within the ERP software, and replicated with confidence across multiple businesses that share that business requirement.[citation needed]
Modularity

Most systems are modular, to permit automating some functions but not others. Some common modules, such as finance and accounting, are adopted by nearly all users; others, such as human resource management, are not. For example, a service company probably has no need for a manufacturing module, and other companies may already have a system they believe is adequate. Generally speaking, the greater the number of modules selected, the greater the integration benefits, but also the greater the costs, risks, and changes involved.[citation needed]
Connectivity to plant floor information

ERP systems connect to real-time data and transaction data in a variety of ways. These systems are typically configured by systems integrators, who bring unique knowledge on process, equipment, and vendor solutions.

Direct integration—ERP systems have connectivity (communications to plant floor equipment) as part of their product offering. This requires the vendors to offer specific support for the plant floor equipment their customers operate. ERP vendors must be experts in their own products and in connectivity to other vendors' products, including those of competitors.

Database integration—ERP systems connect to plant floor data sources through staging tables in a database. Plant floor systems deposit the necessary information into the database, and the ERP system reads the information from the table. The benefit of staging is that ERP vendors do not need to master the complexities of equipment integration; connectivity becomes the responsibility of the systems integrator.

Enterprise appliance transaction modules (EATM)—These devices communicate directly with plant floor equipment and with the ERP system via methods supported by the ERP system. EATM can employ a staging table, web services, or system-specific program interfaces (APIs). The benefit of an EATM is that it offers an off-the-shelf solution.

Custom integration solutions—Many system integrators offer custom solutions. These systems tend to have the highest level of initial integration cost, and can have higher long-term maintenance and reliability costs. Long-term costs can be minimized through careful system testing and thorough documentation. Custom-integrated solutions typically run on workstation or server-class computers.
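As a rough illustration of the database-integration approach described above, the following sketch shows a staging table shared by a plant-floor system and an ERP-side import job. It is a minimal sketch in Python using SQLite; the table name, columns, and functions are invented for illustration and do not correspond to any particular ERP product.

    import sqlite3

    # Hypothetical staging table: the integration contract between the
    # plant-floor system (writer) and the ERP import job (reader).
    conn = sqlite3.connect("staging.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS production_staging (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,
            work_order  TEXT    NOT NULL,
            qty_good    INTEGER NOT NULL,
            qty_scrap   INTEGER NOT NULL,
            reported_at TEXT    NOT NULL,
            processed   INTEGER NOT NULL DEFAULT 0
        )
    """)

    def plant_floor_report(work_order, qty_good, qty_scrap, reported_at):
        """Plant-floor side: deposit a production confirmation row."""
        conn.execute(
            "INSERT INTO production_staging "
            "(work_order, qty_good, qty_scrap, reported_at) VALUES (?, ?, ?, ?)",
            (work_order, qty_good, qty_scrap, reported_at))
        conn.commit()

    def erp_import_job():
        """ERP side: periodically read unprocessed rows and mark them done."""
        rows = conn.execute(
            "SELECT id, work_order, qty_good, qty_scrap "
            "FROM production_staging WHERE processed = 0").fetchall()
        for row_id, work_order, qty_good, qty_scrap in rows:
            # A real job would post a production confirmation in the ERP here.
            print(f"confirm {work_order}: good={qty_good} scrap={qty_scrap}")
            conn.execute(
                "UPDATE production_staging SET processed = 1 WHERE id = ?",
                (row_id,))
        conn.commit()

    plant_floor_report("WO-1001", 480, 3, "2013-05-17T04:00:00")
    erp_import_job()

Neither side needs to know the other's internals: the plant-floor system only writes rows, and the staging table's schema is the contract maintained by the systems integrator.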
Implementation

ERP's scope usually implies significant changes to staff work processes and practices.[19] Generally, three types of services are available to help implement such changes: consulting, customization, and support.[19] Implementation time depends on business size, number of modules, customization, the scope of process changes, and the readiness of the customer to take ownership of the project. Modular ERP systems can be implemented in stages. The typical project for a large enterprise consumes about 14 months and requires around 150 consultants. Small projects can require months; multinational and other large implementations can take years.[citation needed] Customization can substantially increase implementation times.

Information processing also has influential effects on various business functions. Under severe competition, controlling logistics efficiently is significant for manufacturers; large corporations like Wal-Mart therefore use a just-in-time inventory system, which improves inventory storage and delivery efficiency by avoiding wasteful storage days and a lack of supply to satisfy customer demand. Moreover, many companies realize that increasing market share requires them to be sensitive to market changes and make appropriate adjustments. Many information processing applications can meet these requirements, and ERP covers almost every essential functional unit of a firm's operations—including accounting, finance, procurement, marketing, and sales. This information processing tool becomes the bridge that helps otherwise isolated functional units share and update their data immediately, so managers can continually revise strategies based on data from all departments. However, information tools like ERP are expensive, and not a practical option for many medium or small businesses. To address this issue, some software firms develop simpler, cheaper information processing tools specifically for smaller companies.
Process preparation

Implementing ERP typically requires changes in existing business processes.[20] Poor understanding of needed process changes prior to starting implementation is a main reason for project failure.[21] It is therefore crucial that organizations thoroughly analyze business processes before implementation. This analysis can identify opportunities for process modernization. It also enables an assessment of the alignment of current processes with those provided by the ERP system. Research indicates that the risk of business process mismatch is decreased by:
• Linking current processes to the organization's strategy
• Analyzing the effectiveness of each process
• Understanding existing automated solutions[22][23]

ERP implementation is considerably more difficult (and politically charged) in decentralized organizations, because they often have different processes, business rules, data semantics, authorization hierarchies, and decision centers.[24] This may require migrating some business units before others, delaying implementation to work through the necessary changes for each unit, possibly reducing integration (e.g., linking via master data management), or customizing the system to meet specific needs.[25]

A potential disadvantage is that adopting "standard" processes can lead to a loss of competitive advantage. While this has happened, losses in one area are often offset by gains in other areas, increasing overall competitive advantage.[26][27]
Configuration

Configuring an ERP system is largely a matter of balancing the way the organization wants the system to work with the way it was designed to work. ERP systems typically include many settings that modify system operation. For example, an organization can select the type of inventory accounting—FIFO or LIFO—to use; whether to recognize revenue by geographical unit, product line, or distribution channel; and whether to pay for shipping costs on customer returns.[25]
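To make the inventory-accounting example concrete, here is a minimal sketch (in Python, with invented figures) of how that single configuration setting changes reported costs. It illustrates the FIFO/LIFO principle only; it is not code from any ERP system.

    from collections import deque

    def cost_of_goods_sold(layers, qty, method="FIFO"):
        """Consume qty units from inventory layers [(units, unit_cost), ...].
        FIFO consumes the oldest layer first; LIFO the newest first."""
        pool = deque(layers)
        total = 0.0
        while qty > 0 and pool:
            units, cost = pool.popleft() if method == "FIFO" else pool.pop()
            take = min(units, qty)
            total += take * cost
            qty -= take
        return total

    # Two purchase layers: 100 units at $2.00, then 100 units at $2.50.
    layers = [(100, 2.00), (100, 2.50)]
    print(cost_of_goods_sold(layers, 150, "FIFO"))  # 100*2.00 + 50*2.50 = 325.0
    print(cost_of_goods_sold(layers, 150, "LIFO"))  # 100*2.50 + 50*2.00 = 350.0

The same transactions yield a different cost of goods sold depending on the configured method, which is why such settings are fixed during configuration rather than left to individual users.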
Customization

ERP systems are theoretically based on industry best practices, and their makers intend that organizations deploy them as is.[28][29] ERP vendors do offer customers configuration options that let organizations incorporate their own business rules, but often feature gaps remain even after configuration is complete.

ERP customers have several options to reconcile feature gaps, each with its own pros and cons. Technical solutions include rewriting part of the delivered software, writing a homegrown module to work within the ERP system, or interfacing to an external system. These three options constitute varying degrees of system customization, with the first being the most invasive and costly to maintain. Alternatively, there are non-technical options such as changing business practices or organizational policies to better match the delivered ERP feature set.

Key differences between customization and configuration include:
• Customization is always optional, whereas the software must always be configured before use (e.g., setting up cost/profit center structures, organizational trees, purchase approval rules, etc.).
• The software is designed to handle various configurations, and behaves predictably in any allowed configuration.
• The effect of configuration changes on system behavior and performance is predictable and is the responsibility of the ERP vendor. The effect of customization is less predictable: it is the customer's responsibility, and it increases testing activities.
• Configuration changes survive upgrades to new software versions. Some customizations (e.g., code that uses pre-defined "hooks" that are called before/after displaying data screens) survive upgrades, though they require retesting. Other customizations (e.g., those involving changes to fundamental data structures) are overwritten during upgrades and must be reimplemented.[30]

Customization advantages include that it:
• Improves user acceptance[31]
• Offers the potential to obtain competitive advantage vis-à-vis companies using only standard features

Customization disadvantages include that it:
• Increases the time and resources required to implement and maintain
• Inhibits seamless communication between suppliers and customers who use the same ERP system uncustomized[citation needed]
• Can create overreliance on customization, undermining the principles of ERP as a standardizing software platform
Extensions

ERP systems can be extended with third-party software. ERP vendors typically provide access to data and features through published interfaces. Extensions offer features such as:[citation needed]
• Archiving, reporting, and republishing
• Capturing transactional data, e.g., using scanners, tills, or RFID
• Access to specialized data and capabilities, such as syndicated marketing data and associated trend analytics
• Advanced planning and scheduling (APS)
• Managing resources, facilities, and transmission in real time
Data migration

Data migration is the process of moving, copying, and restructuring data from an existing system to the ERP system. Migration is critical to implementation success and requires significant planning. Unfortunately, since migration is one of the final activities before the production phase, it often receives insufficient attention. The following steps can structure migration planning:[32]
• Identify the data to migrate
• Determine the migration timing
• Generate data templates
• Freeze the toolset
• Decide on migration-related setups
• Define data archiving policies and procedures
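As a small illustration of the template-generation step in the list above, the sketch below maps legacy records onto a target data template and rejects incomplete rows. It is a hypothetical Python example: the field names and mapping are invented, and a real migration would use the ERP's own load programs and templates.

    import csv, io

    # Hypothetical target template for a customer master record.
    TEMPLATE_FIELDS = ["customer_id", "name", "country", "payment_terms"]

    # Field mapping agreed during migration planning (legacy -> template).
    LEGACY_TO_TEMPLATE = {
        "CUSTNO": "customer_id",
        "NAME1":  "name",
        "LAND":   "country",
        "TERMS":  "payment_terms",
    }

    def migrate(legacy_rows):
        """Map legacy rows onto the template; reject incomplete records."""
        migrated, rejected = [], []
        for row in legacy_rows:
            record = {LEGACY_TO_TEMPLATE[k]: v.strip()
                      for k, v in row.items() if k in LEGACY_TO_TEMPLATE}
            if all(record.get(f) for f in TEMPLATE_FIELDS):
                migrated.append(record)
            else:
                rejected.append(row)   # resolve manually before cutover
        return migrated, rejected

    legacy = io.StringIO(
        "CUSTNO,NAME1,LAND,TERMS\n"
        "10001,Acme Ltd,DE,NET30\n"
        "10002,,US,NET60\n")          # missing name: will be rejected
    migrated, rejected = migrate(csv.DictReader(legacy))
    print(len(migrated), "migrated;", len(rejected), "rejected")

Keeping rejected rows for manual resolution, rather than silently dropping them, is what allows migration timing and archiving policies to be decided with confidence.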
Comparison to special-purpose applications

Advantages
The fundamental advantage of ERP is that integrating myriad business processes saves time and expense. Management can make decisions faster and with fewer errors. Data becomes visible across the organization. Tasks that benefit from this integration include:[citation needed]
• Sales forecasting, which allows inventory optimization
• Chronological history of every transaction through relevant data compilation in every area of operation
• Order tracking, from acceptance through fulfillment
• Revenue tracking, from invoice through cash receipt
• Matching purchase orders (what was ordered), inventory receipts (what arrived), and costing (what the vendor invoiced)

ERP systems centralize business data, which:
• Eliminates the need to synchronize changes between multiple systems—consolidation of finance, marketing, sales, human resource, and manufacturing applications
• Brings legitimacy and transparency to each bit of statistical data
• Facilitates standard product naming/coding
• Provides a comprehensive enterprise view (no "islands of information"), making real-time information available to management anywhere, at any time, to support proper decisions
• Protects sensitive data by consolidating multiple security systems into a single structure[33]
Benefits

• ERP can greatly improve the quality and efficiency of a business. By keeping a company's internal business processes running smoothly, ERP can lead to better outputs in areas that benefit the company, such as customer service and manufacturing.
• ERP provides support to upper-level management by supplying critical decision-making information. This decision support allows upper-level management to make managerial choices that enhance the business down the road.
• ERP also creates a more agile company that adapts better to change. It makes the company more flexible and less rigidly structured, so organization components operate more cohesively, enhancing the business internally and externally.[34]
Disadvantages

• Customization is problematic.
• Re-engineering business processes to fit the ERP system may damage competitiveness or divert focus from other critical activities.
• ERP can cost more than less integrated or less comprehensive solutions.
• High ERP switching costs can increase the ERP vendor's negotiating power, which can increase support, maintenance, and upgrade expenses.
• Overcoming resistance to sharing sensitive information between departments can divert management attention.
• Integration of truly independent businesses can create unnecessary dependencies.
• Extensive training requirements take resources from daily operations.
• Due to ERP's architecture (online transaction processing, OLTP), ERP systems are not well suited for production planning and supply chain management (SCM).
• Harmonization of ERP systems can be a mammoth task (especially for big companies) and requires a lot of time, planning, and money.[35]

Recognized ERP limitations have sparked new trends in ERP application development, taking place in four significant areas: more flexible ERP, web-enabled ERP, inter-enterprise ERP, and e-business suites.
References

[1] Bidgoli, Hossein (2004). The Internet Encyclopedia, Volume 1. John Wiley & Sons, Inc. p. 707.
[2] Shaul, L. and Tauber, D. (2012). "CSFs along ERP life-cycle in SMEs: a field study". Industrial Management & Data Systems, 112(3), 360-384.
[3] Khosrow-Puor, Mehdi (2006). Emerging Trends and Challenges in Information Technology Management. Idea Group, Inc. p. 865.
[4] "A Vision of Next Generation MRP II", Scenario S-300-339, Gartner Group, April 12, 1990.
[7] Sheilds, Mureell G. (2005). E-Business and ERP. John Wiley and Sons, Inc. p. 9.
[11] Ferdows, K. (1997). "Making the most of foreign factories". Harvard Business Review, 75(2), 73-88.
[12] Gill, R. (2011). "The rise of two-tier ERP". Strategic Finance, 93(5), 35-40.
[13] Montgomery, Nigel (2010). "Two-Tier ERP Suite Strategy: Considering Your Options" (http://www.gartner.com/id=1412121). Gartner Group. July 28, 2010. Retrieved September 20, 2012.
[14] Kovacs, G. L., and Paganelli, P. (2003). "A planning and management infrastructure for large, complex, distributed projects - beyond ERP and SCM". Computers in Industry, 51(2), 165-165.
[15] Shaul, L. and Tauber, D. (2013). "Critical Success Factors in Enterprise Resource Planning Systems: Review of the Last Decade". ACM Computing Surveys, 45(4), 35 pages.
[16] Sheilds, Mureell G. (2001). E-Business and ERP: Rapid Implementation and Project Planning. John Wiley and Sons, Inc. pp. 9-10.
[17] Monk, Ellen and Wagner, Brett (2009). Concepts in Enterprise Resource Planning, 3rd ed. Course Technology Cengage Learning, Boston, Massachusetts.
[18] "Enhanced Project Success Through SAP Best Practices - International Benchmarking Study". ISBN 1-59229-031-0.
[19] "What is ERP?" (http://www.tech-faq.com/erp.shtml)
[20] Turban et al. (2008). Information Technology for Management: Transforming Organizations in the Digital Economy. Massachusetts: John Wiley & Sons, Inc., pp. 300-343. ISBN 978-0-471-78712-9.
[21] Brown, C., and Vessey, I. (2003). "Managing the Next Wave of Enterprise Systems: Leveraging Lessons from ERP". MIS Quarterly Executive, 2(1).
[22] King, W. (2005). "Ensuring ERP implementation success". Information Systems Management, Summer 2005.
[23] Yusuf, Y., Gunasekaran, A., and Abthorpe, M. (2004). "Enterprise Information Systems Project Implementation: A Case Study of ERP in Rolls-Royce". International Journal of Production Economics, 87(3), February 2004.
[25] Davenport, Thomas H. (1998). "Putting the Enterprise into the Enterprise System". Harvard Business Review, July-August 1998.
[26] Turban et al. (2008). Information Technology for Management: Transforming Organizations in the Digital Economy. Massachusetts: John Wiley & Sons, Inc., p. 320. ISBN 978-0-471-78712-9.
[27] Dehning, B., and Stratopoulos, T. (2003). "Determinants of a Sustainable Competitive Advantage Due to an IT-enabled Strategy". Journal of Strategic Information Systems, Vol. 12.
[35] "The Minefield of Harmonising ERP" (http://www.cfo-insight.com/reporting-forecasting/forecasting/the-minefield-of-harmonising-erp/). Retrieved August 17, 2012.
Further reading

• Grant, David; Hall, Richard; Wailes, Nick; Wright, Christopher (March 2006). "The false promise of technological determinism: the case of enterprise resource planning systems". New Technology, Work & Employment 21 (1): 2-15. doi:10.1111/j.1468-005X.2006.00159.x
• Loh, Tee Chiat; Koh, Lenny Siau Ching (September 2004). "Critical elements for a successful ERP implementation in SMEs". International Journal of Production Research 42 (17): 3433-3455. doi:10.1080/00207540410001671679
• Shaul, Levi; Tauber, Doron (September 2010). "Hierarchical examination of success factors across ERP life cycle" (http://aisel.aisnet.org/mcis2010/79/). MCIS 2010 Proceedings: 79.
• Head, Simon (2005). The New Ruthless Economy: Work and Power in the Digital Age. Oxford UP. ISBN 0-19-517983-8.
• Waldner, Jean-Baptiste (1992). Principles of Computer Integrated Manufacturing. Chichester: John Wiley & Sons Ltd. ISBN 0-471-93450-X.
• Waldner, Jean-Baptiste (1990). Les nouvelles perspectives de la production. Paris: DUNOD BORDAS. ISBN 978-2-04-019820-6.
• Lequeux, Jean-Louis (2008). Manager avec les ERP, Architecture Orientée Services (SOA). Paris: Editions d'Organisation. ISBN 978-2-212-54094-9.
• CIO Magazine's ABCs of ERP (http://www.cio.com/article/40323)
• History of ERP (http://opensourceerpguru.com/2009/02/25/erp-history/)
• Clemons, E.K.; Kimborough (1986). "IS for Sustainable Competitive Advantage". Information & Management 11 (3): 131-136. doi:10.1016/0378-7206(86)90010-8
• Henderson, Ian. ERP From the Frontline. MBE. ISBN 978-1-898822-05-9. Making ERP Work (http://www.mlg.uk.com/html/erpfrontline.htm)
• Software Advice's 4-part series on the history of enterprise software (http://blog.softwareadvice.com/articles/enterprise/software-history-pt1-1082411/)
• MRP versus ERP (http://sheetmetalworld.com/sheet-metal-news/17-it-for-manufacturing-management-and-production/11086-mrp-versus-erp)
• ERP Customization (http://www.erpfocus.com/erp-implementation-customizing-590.html)
• ERP in Russia (http://12news.ru/ERP.html)
• The History of Double Accounting: How Italian Merchants Led the Way to ERP (http://www.softwarethinktank.com/articles/how-a-bunch-of-italian-merchants-led-the-way-to-erp/)
Product lifecycle management

In industry, product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its conception, through design and manufacture, to service and disposal.[1] PLM integrates people, data, processes, and business systems and provides a product information backbone for companies and their extended enterprise.[2] PLM systems help organizations cope with the increasing complexity and engineering challenges of developing new products for globally competitive markets.[3]

[Image: A generic lifecycle of products]

Product lifecycle management (PLM) should be distinguished from product life cycle management (marketing) (PLCM). PLM describes the engineering aspect of a product, from managing descriptions and properties of a product through its development and useful life; PLCM, by contrast, refers to the commercial management of the life of a product in the business market with respect to costs and sales measures.

Product lifecycle management can be considered one of the four cornerstones of a manufacturing corporation's information technology structure.[4] All companies need to manage communications and information with their customers (CRM, customer relationship management), their suppliers and fulfillment (SCM, supply chain management), their resources within the enterprise (ERP, enterprise resource planning), and their product planning and development (PLM).

One form of PLM is called people-centric PLM. While traditional PLM tools have been deployed only on release or during the release phase, people-centric PLM targets the design phase.

As of 2009, ICT development (the EU-funded PROMISE project, 2004-2008) has allowed PLM to extend beyond traditional PLM and integrate sensor data and real-time "lifecycle event data" into PLM, as well as allowing this information to be made available to different players in the total lifecycle of an individual product (closing the information loop). This has resulted in the extension of PLM into closed-loop lifecycle management (CL2M).
Benefits

Documented benefits of product lifecycle management include:[5][6][7]
• Reduced time to market
• Increased full-price sales
• Improved product quality
• Reduced prototyping costs
• More accurate and timely request-for-quote generation
• Ability to quickly identify potential sales opportunities and revenue contributions
• Savings through the re-use of original data
• A framework for product optimization
• Reduced waste
• Savings through the complete integration of engineering workflows
• Documentation that can assist in proving compliance for RoHS or Title 21 CFR Part 11
• Ability to provide contract manufacturers with access to a centralized product record
• Seasonal fluctuation management
• Process stability
• Improved forecasting to reduce material costs
• Maximized supply chain collaboration
Areas of PLM

Within PLM there are five primary areas:
1. Systems engineering (SE)
2. Product and portfolio management (PPM)
3. Product design (CAx)
4. Manufacturing process management (MPM)
5. Product data management (PDM)

Note: While application software is not required for PLM processes, the business complexity and rate of change require that organizations execute as rapidly as possible.

Systems engineering is focused on meeting all requirements, primarily meeting customer needs, and coordinating the systems design process by involving all relevant disciplines. Product and portfolio management is focused on managing resource allocation and tracking progress against plan for new product development projects that are in process (or in a holding status). Portfolio management is a tool that assists management in tracking progress on new products and making trade-off decisions when allocating scarce resources. Product design is the process of creating a new product to be sold by a business to its customers. Manufacturing process management is a collection of technologies and methods used to define how products are to be manufactured. Product data management is focused on capturing and maintaining information on products and/or services through their development and useful life.
Introduction to development process

The core of PLM (product lifecycle management) is the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product's life.[8] It is not just about software technology but is also a business strategy.[9]

For simplicity the stages described are shown in a traditional sequential engineering workflow. The exact order of events and tasks will vary according to the product and industry in question, but the main processes are:[10]
• Conceive
  • Specification
  • Concept design
• Design
  • Detailed design
  • Validation and analysis (simulation)
  • Tool design
• Realize
  • Plan manufacturing
  • Manufacture
  • Build/Assemble
  • Test (quality check)
• Service
  • Sell and deliver
  • Use
  • Maintain and support
  • Dispose
The major key point events are:
• Order
• Idea
• Kickoff
• Design freeze
• Launch
The reality is, however, more complex. People and departments cannot perform their tasks in isolation, and one activity cannot simply finish before the next activity starts. Design is an iterative process: designs often need to be modified due to manufacturing constraints or conflicting requirements. Where a customer order fits into the timeline depends on the industry type and whether the products are, for example, built to order, engineered to order, or assembled to order.
History

The inspiration for the burgeoning business process now known as PLM came from American Motors Corporation (AMC).[11] In 1985 the automaker was looking for a way to speed up its product development process to compete better against its larger competitors, according to François Castaing, Vice President for Product Engineering and Development. After introducing its compact Jeep Cherokee (XJ), the vehicle that launched the modern sport utility vehicle (SUV) market, AMC began development of a new model that later came out as the Jeep Grand Cherokee. The first part of its quest for faster product development was a computer-aided design (CAD) software system that made engineers more productive. The second part was a new communication system that allowed conflicts to be resolved faster and reduced costly engineering changes, because all drawings and documents were in a central database. The product data management was so effective that after AMC was purchased by Chrysler, the system was expanded throughout the enterprise, connecting everyone involved in designing and building products. As an early adopter of PLM technology, Chrysler was able to become the auto industry's lowest-cost producer, recording development costs that were half of the industry average by the mid-1990s.
Phases of product lifecycle and corresponding technologies

Many software solutions have been developed to organize and integrate the different phases of a product's lifecycle. PLM should not be seen as a single software product but as a collection of software tools and working methods integrated together to address either single stages of the lifecycle, to connect different tasks, or to manage the whole process. Some software providers cover the whole PLM range while others cover a single niche application. Some applications can span many fields of PLM with different modules within the same data model. An overview of the fields within PLM is covered here. The simple classifications do not always fit exactly, however: many areas overlap, and many software products cover more than one area or do not fit easily into one category. It should also not be forgotten that one of the main goals of PLM is to collect knowledge that can be reused for other projects and to coordinate simultaneous concurrent development of many products. It is about business processes, people, and methods as much as software application solutions. Although PLM is mainly associated with engineering tasks, it also involves marketing activities such as product portfolio management (PPM), particularly with regard to new product development (NPD).

There are several life-cycle models in industry to consider, but most are rather similar. What follows below is one possible life-cycle model; while it emphasizes hardware-oriented products, similar phases would describe any form of product or service, including non-technical or software-based products:[12]
Phase 1: Conceive
Imagine, specify, plan, innovate

The first stage is the definition of the product's requirements based on customer, company, market, and regulatory bodies' viewpoints. From this specification, the product's major technical parameters can be defined. In parallel with the requirements specification, the initial concept design work is carried out, defining the aesthetics of the product together with its main functional aspects. Many different media are used for the industrial design and styling work, from pencil and paper and clay models to 3D CAID (computer-aided industrial design) software.

In some concepts, the investment of resources into research or analysis-of-options may be included in the conception phase, e.g. bringing the technology to a level of maturity sufficient to move to the next phase. However, life-cycle engineering is iterative. It is always possible that something doesn't work well enough in any phase to back up into a prior phase, perhaps all the way back to conception or research. There are many examples to draw from.
Phase 2: Design
Describe, define, develop, test, analyze and validate

This is where the detailed design and development of the product's form starts, progressing to prototype testing, through pilot release, to full product launch. It can also involve redesign and ramp-up for improvement to existing products as well as planned obsolescence. The main tool used for design and development is CAD. This can be simple 2D drawing/drafting or 3D parametric feature-based solid/surface modeling. Such software includes technology such as hybrid modeling, reverse engineering, KBE (knowledge-based engineering), NDT (nondestructive testing), and assembly construction.

This step covers many engineering disciplines, including mechanical, electrical, electronic, software (embedded), and domain-specific disciplines such as architectural, aerospace, and automotive. Along with the actual creation of geometry there is the analysis of the components and product assemblies. Simulation, validation, and optimization tasks are carried out using CAE (computer-aided engineering) software, either integrated in the CAD package or stand-alone. These are used to perform tasks such as stress analysis, FEA (finite element analysis), kinematics, computational fluid dynamics (CFD), and mechanical event simulation (MES). CAQ (computer-aided quality) is used for tasks such as dimensional tolerance (engineering) analysis. Another task performed at this stage is the sourcing of bought-out components, possibly with the aid of procurement systems.
Phase 3: Realize
Manufacture, make, build, procure, produce, sell and deliver

Once the design of the product's components is complete, the method of manufacturing is defined. This includes CAD tasks such as tool design and the creation of CNC machining instructions for the product's parts, as well as for the tools used to manufacture those parts, using integrated or separate CAM (computer-aided manufacturing) software. It also involves analysis tools for process simulation of operations such as casting, molding, and die press forming. Once the manufacturing method has been identified, CPM comes into play. This involves CAPE (computer-aided production engineering) or CAP/CAPP (computer-aided production planning) tools for carrying out factory, plant, and facility layout and production simulation—for example, press-line simulation, industrial ergonomics, and tool selection management. Once components are manufactured, their geometrical form and size can be checked against the original CAD data with the use of computer-aided inspection equipment and software. Parallel to the engineering tasks, sales product configuration and marketing documentation work take place. This could include transferring engineering data (geometry and part list data) to a web-based sales configurator and other desktop publishing systems.
Phase 4: Service
Use, operate, maintain, support, sustain, phase-out, retire, recycle and disposal

The final phase of the lifecycle involves managing in-service information: providing customers and service engineers with support information for repair and maintenance, as well as waste management/recycling information. This involves using tools such as maintenance, repair and operations management (MRO) software. There is an end-of-life for every product. Whether it be disposal or destruction of material objects or information, this needs to be considered, since it may not be free from ramifications.
All phases: product lifecycle
Communicate, manage and collaborate

None of the above phases can be seen in isolation. In reality a project does not run sequentially or in isolation from other product development projects. Information flows between different people and systems. A major part of PLM is the coordination and management of product definition data. This includes managing engineering changes and the release status of components; configuring product variations; document management; planning project resources and timescales; and risk assessment.

For these tasks, graphical, text, and metadata such as product bills of materials (BOMs) need to be managed. At the engineering department level this is the domain of PDM (product data management) software; at the corporate level, of EDM (enterprise data management) software. These two definitions tend to blur, and it is typical to see two or more data management systems within an organization. These systems are also linked to other corporate systems such as SCM, CRM, and ERP. Associated with these systems are project management systems for project/program planning.

This central role is covered by numerous collaborative product development tools which run throughout the whole lifecycle and across organizations. This requires many technology tools in the areas of conferencing, data sharing, and data translation. One such field is product visualization, which includes technologies such as DMU (digital mock-up), immersive virtual digital prototyping (virtual reality), and photo-realistic imaging.

User skills
The broad array of solutions that make up the tools used within a PLM solution-set (e.g., CAD, CAM, CAx) were initially used by dedicated practitioners who invested time and effort to gain the required skills. Designers and engineers worked wonders with CAD systems; manufacturing engineers became highly skilled CAM users; and analysts, administrators, and managers fully mastered their support technologies. However, achieving the full advantages of PLM requires the participation of many people of various skills from throughout an extended enterprise, each requiring the ability to access and operate on the inputs and outputs of other participants.

Despite the increased ease of use of PLM tools, cross-training all personnel on the entire PLM tool-set has not proven to be practical. Now, however, advances are being made to address ease of use for all participants within the PLM arena. One such advance is the availability of role-specific user interfaces. Through tailorable UIs, the commands that are presented to users are appropriate to their function and expertise. These techniques include:
• Concurrent engineering workflow
• Industrial design
• Bottom-up design
• Top-down design
• Front-loading design workflow
• Design in context
• Modular design
• NPD (new product development)
• DFSS (design for Six Sigma)
• DFMA (design for manufacture/assembly)
• Digital simulation engineering
• Requirement-driven design
• Specification-managed validation
• Configuration management
Concurrent engineering workflow

Concurrent engineering (British English: simultaneous engineering) is a workflow that, instead of working sequentially through stages, carries out a number of tasks in parallel. For example: starting tool design as soon as the detailed design has started, and before the detailed designs of the product are finished; or starting detail-design solid models before the concept-design surface models are complete. Although this does not necessarily reduce the amount of manpower required for a project, as more changes are required due to the incomplete and changing information, it does drastically reduce lead times and thus time to market.

Feature-based CAD systems have for many years allowed simultaneous work on the 3D solid model and the 2D drawing by means of two separate files, with the drawing looking at the data in the model; when the model changes, the drawing will associatively update. Some CAD packages also allow associative copying of geometry between files. This allows, for example, the copying of a part design into the files used by the tooling designer. The manufacturing engineer can then start work on tools before the final design freeze; when a design changes size or shape, the tool geometry will then update. Concurrent engineering also has the added benefit of providing better and more immediate communication between departments, reducing the chance of costly late design changes. It adopts a problem-prevention method, as compared to the problem-solving and redesigning method of traditional sequential engineering.
Bottom–up design
Bottom–up design (CAD-centric) occurs where the definition of 3D models of a product starts with the construction of individual components. These are then virtually brought together in sub-assemblies of more than one level until the full product is digitally defined. This is sometimes known as the review structure, showing what the product will look like. The BOM contains all of the physical (solid) components; it may (but need not) also contain other items required for the final product BOM, such as paint, glue, oil and other materials commonly described as 'bulk items'. Bulk items typically have mass and quantities but are not usually modelled with geometry. Bottom–up design tends to focus on the capabilities of available real-world physical technology, implementing those solutions to which this technology is most suited. When these bottom–up solutions have real-world value, bottom–up design can be much more efficient than top–down design. The risk of bottom–up design is that it very efficiently provides solutions to low-value problems. The focus of bottom–up design is "what can we most efficiently do with this technology?" rather than the focus of top–down, which is "what is the most valuable thing to do?"
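The distinction between modelled components and bulk items can be illustrated with a small BOM structure; the field names below are invented, not taken from any particular PDM system.

# Hypothetical sketch of a bottom-up BOM: solid components carry geometry,
# while bulk items (paint, glue, oil) carry only quantity and mass.

from dataclasses import dataclass, field

@dataclass
class BomItem:
    name: str
    quantity: float
    unit: str = "ea"
    mass_kg: float = 0.0          # mass per unit
    has_geometry: bool = True     # False for bulk items
    children: list = field(default_factory=list)

def total_mass(item):
    """Roll mass up through all sub-assembly levels."""
    per_unit = item.mass_kg + sum(total_mass(c) for c in item.children)
    return item.quantity * per_unit

gearbox = BomItem("gearbox", 1, children=[
    BomItem("housing", 1, mass_kg=2.5),
    BomItem("gear", 4, mass_kg=0.3),
    BomItem("oil", 0.5, unit="l", mass_kg=0.9, has_geometry=False),  # bulk item
])
print(total_mass(gearbox))        # 2.5 + 4*0.3 + 0.5*0.9 = 4.15 kg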
Top–down design
Top–down design is focused on high-level functional requirements, with relatively less focus on existing implementation technology. A top-level spec is decomposed into lower- and lower-level structures and specifications until the physical implementation layer is reached. The risk of top–down design is that it may not take advantage of the most efficient applications of current physical technology, especially with respect to hardware implementation. Top–down design sometimes results in excessive layers of lower-level abstraction and inefficient performance when the top–down model has followed an abstraction path that does not efficiently fit the available physical-level technology. The positive value of top–down design is that it preserves a focus on the optimum solution requirements. A part-centric top–down design may eliminate some of the risks of top–down design. This starts with a layout model, often a simple 2D sketch defining basic sizes and some major defining parameters, to which industrial design brings creative ideas. Geometry from this layout is associatively copied down to the next level, which represents different subsystems of the product. The geometry in the subsystems is then used to define more detail in the levels below. Depending on the complexity of the product, a number of levels of this assembly are created until the basic definition of components can be identified, such as position and principal dimensions. This information is then associatively copied to component files, in which the components are detailed; this is where the classic bottom–up assembly starts.
The top–down assembly is sometimes known as a control structure. If a single file is used to define the layout and parameters for the review structure, it is often known as a skeleton file. Defense engineering traditionally develops the product structure from the top down. The systems engineering process[13] prescribes a functional decomposition of requirements and then a physical allocation of product structure to the functions. This top–down approach would normally have the lower levels of the product structure developed from CAD data as a bottom–up structure or design.
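The skeleton idea can be sketched as a set of layout parameters from which subsystem geometry is derived, so that a change at the top re-derives every level below. The parameter names and formulas are invented for illustration.

# Hypothetical sketch of part-centric top-down design driven by a skeleton.

skeleton = {"wheelbase": 2700.0, "track_width": 1550.0, "ride_height": 140.0}

def derive_chassis(sk):
    # Subsystem dimensions are expressed in terms of skeleton parameters.
    return {
        "rail_length": sk["wheelbase"] + 600.0,
        "crossmember_span": sk["track_width"] - 120.0,
    }

def derive_suspension(chassis, sk):
    # The level below is derived from the subsystem above it.
    return {"arm_length": chassis["crossmember_span"] / 2 - sk["ride_height"]}

chassis = derive_chassis(skeleton)
print(derive_suspension(chassis, skeleton))

skeleton["wheelbase"] = 2850.0           # a layout change at the top...
print(derive_chassis(skeleton))          # ...propagates to the levels below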
Both-ends-against-the-middle design
Both-ends-against-the-middle (BEATM) design is a design process that endeavors to combine the best features of top–down design and bottom–up design into one process. A BEATM design process flow may begin with an emergent technology that suggests solutions which may have value, or it may begin with a top–down view of an important problem that needs a solution. In either case the key attribute of the BEATM design methodology is to immediately focus at both ends of the design process flow: a top–down view of the solution requirements, and a bottom–up view of the available technology which may offer the promise of an efficient solution. The BEATM design process proceeds from both ends in search of an optimum merging somewhere between the top–down requirements and the bottom–up efficient implementation. In this fashion, BEATM has been shown to genuinely offer the best of both methodologies. Indeed, some of the best success stories from either top–down or bottom–up have been successful because of an intuitive, yet unconscious, use of the BEATM methodology. When employed consciously, BEATM offers even more powerful advantages.
Front loading design and workflow
Front loading takes top–down design to the next stage. The complete control structure and review structure, as well as downstream data such as drawings, tooling development and CAM models, are constructed before the product has been defined or a project kick-off has been authorized. These assemblies of files constitute a template from which a family of products can be constructed. When the decision has been made to go ahead with a new product, the parameters of the product are entered into the template model and all the associated data is updated. Obviously, predefined associative models cannot predict all possibilities and will require additional work. The main principle is that a lot of the experimental/investigative work has already been completed, and a lot of knowledge is built into these templates to be reused on new products. This does require additional resources "up front" but can drastically reduce the time between project kick-off and launch. Such methods do, however, require organizational changes, as considerable engineering effort is moved into "offline" development departments. It can be seen as analogous to creating a concept car to test new technology for future products, but in this case the work is directly used for the next product generation.
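A front-loaded template can be pictured as a function that, given only the parameters of a new family member, re-derives the model, drawing, tooling and CAM data built into the template. The artifact names and allowances below are invented for illustration.

# Hypothetical sketch of instantiating a product-family template.

def instantiate_template(params):
    """Build the associated data set for one member of the product family."""
    model = {"length": params["length"], "diameter": params["diameter"]}
    drawing = {"sheet": "A3", "views": model}
    cam = {"stock_length": model["length"] + 10.0,      # machining allowance
           "stock_diameter": model["diameter"] + 4.0}
    tooling = {"fixture_span": model["length"] * 0.8}
    return {"model": model, "drawing": drawing, "cam": cam, "tooling": tooling}

# At kick-off, only the new variant's parameters are entered; all the
# associated data is derived from the pre-built template.
variant = instantiate_template({"length": 320.0, "diameter": 45.0})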
Design in context
Individual components cannot be constructed in isolation. CAD and CAID models of components are designed within the context of part or all of the product being developed. This is achieved using assembly modelling techniques. The geometry of other components can be seen and referenced within the CAD tool being used. The other referenced components within the sub-assembly may or may not have been constructed in the same system, their geometry being translated from other CPD formats. Some assembly checking, such as DMU, is also carried out using product visualization software.
Product and process lifecycle management (PPLM)
Product and process lifecycle management (PPLM) is an alternate genre of PLM in which the process by which the product is made is just as important as the product itself. This is typically the case in the life sciences and advanced specialty chemicals markets, where, for example, the process behind the manufacture of a given compound is a key element of the regulatory filing for a new drug application. As such, PPLM seeks to manage information around the development of the process in a similar fashion to how baseline PLM manages information around the development of the product. One variant of PPLM implementation is the Process Development Execution System (PDES). PDES typically implement the whole development cycle of high-tech manufacturing technology developments, from initial conception through development and into manufacture. They integrate people with different backgrounds, from potentially different legal entities, along with data, information and knowledge, and business processes.
Market size
Total spending on PLM software and services was estimated in 2006 to be above $15 billion a year.[14][15] Market growth estimates are in the area of 10%.
References
Further reading
• The Cost of PLM (http://plmtechnologyguide.com/site/?page_id=1184)
• Saaksvuori, Antti (2008). Product Lifecycle Management. Springer. ISBN 978-3-540-78173-8.
• Grieves, Michael (2005). Product Lifecycle Management: Driving the Next Generation of Lean Thinking. McGraw-Hill. ISBN 978-0-07-145230-4.
• Stark, John (2004). Product Lifecycle Management: 21st Century Paradigm for Product Realisation. Springer. ISBN 978-1-85233-810-7.
• Stark, John (2006). Global Product: Strategy, Product Lifecycle Management and the Billion Customer Question. Springer. ISBN 978-1-84628-915-6.
• Bergsjö, Dag (2009). Product Lifecycle Management – Architectural and Organisational Perspectives. Chalmers University of Technology. ISBN 978-91-7385-257-9.
Supplier relationship management
Supplier relationship management (SRM) is the discipline of strategically planning for, and managing, all interactions with third-party organizations that supply goods and/or services to an organization, in order to maximize the value of those interactions. In practice, SRM entails creating closer, more collaborative relationships with key suppliers in order to uncover and realize new value and reduce risk.
Overview
Supplier relationship management (SRM) is the systematic, enterprise-wide assessment of suppliers' assets and capabilities with respect to overall business strategy; the determination of what activities to engage in with different suppliers; and the planning and execution of all interactions with suppliers, in a coordinated fashion across the relationship life cycle, so as to maximize the value realized through those interactions.[1] The focus of SRM is to develop two-way, mutually beneficial relationships with strategic supply partners in order to deliver greater levels of innovation and competitive advantage than could be achieved by operating independently or through a traditional, transactional purchasing arrangement. In many fundamental ways, SRM is analogous to customer relationship management. Just as companies have multiple interactions over time with their customers, so too do they interact with suppliers: negotiating contracts, purchasing, managing logistics and delivery, collaborating on product design, and so on. The starting point for defining SRM is the recognition that these various interactions with suppliers are not discrete and independent; instead they are accurately and usefully thought of as comprising a relationship, one which can and should be managed in a coordinated fashion across functional and business-unit touch-points, and throughout the relationship lifecycle.
Components of SRM
SRM necessitates a consistency of approach and a defined set of behaviours that foster trust over time. Effective SRM requires not only institutionalizing new ways of collaborating with key suppliers, but also actively dismantling existing policies and practices that can impede collaboration and limit the potential value that can be derived from key supplier relationships.[2] At the same time, SRM should entail reciprocal changes in processes and policies at suppliers.
Organizational structure
While there is no one correct model for deploying SRM at an organizational level, there are sets of structural elements that are relevant in most contexts:
1. A formal SRM team or office at the corporate level. The purpose of such a group is to facilitate and coordinate SRM activities across functions and business units. SRM is inherently cross-functional, and requires a good combination of commercial, technical and interpersonal skills. These "softer" skills around communication, listening, influencing and managing change are critical to developing strong and trusting working relations.
2. A formal Relationship Manager or Supplier Account Manager role. Such individuals often sit within the business unit that interacts most frequently with that supplier, or the role may be filled by a category manager in the procurement function. This can be a full-time, dedicated position, although relationship management responsibilities may be part of broader roles depending on the complexity and importance of the supplier relationship (see Supplier Segmentation). SRM managers understand their suppliers' business and strategic goals, and are able to see issues from the supplier's point of view while balancing their own organization's requirements and priorities.
3. An executive sponsor and, for complex, strategic supplier relationships, a cross-functional steering committee. These individuals form a clear link between SRM strategies and overall business strategies, serve to determine the relative prioritization among a company's varying goals as they impact suppliers, and act as a dispute resolution body.
Governance
The SRM office and supply chain function are typically responsible for defining the SRM governance model, which includes having a clear and jointly agreed governance framework in place for top-tier strategic suppliers. Effective governance should comprise not only the designation of senior executive sponsors at both customer and supplier and dedicated relationship managers, but also a face-off model connecting personnel in engineering, procurement, operations, quality and logistics with their supplier counterparts; a regular cadence of operational and strategic planning and review meetings; and well-defined escalation procedures to ensure speedy resolution of problems or conflicts at the appropriate organizational level.[3]
Supplier engagement model
Effective supplier relationship management requires an enterprise-wide analysis of what activities to engage in with each supplier. The common practice of implementing a "one size fits all" approach to managing suppliers can stretch resources and limit the potential value that can be derived from strategic supplier relationships.[4] Supplier segmentation, in contrast, is about determining what kind of interactions to have with various suppliers, and how best to manage those interactions, not merely as a disconnected set of siloed transactions, but in a coordinated manner across the enterprise.[5] Suppliers can be segmented not just by spend, but by the total potential value (measured across multiple dimensions) that can be realized through interactions with them, and further by the degree of risk to which the realization of that value is subject.
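A minimal sketch of such a segmentation, assuming suppliers have been scored on value and risk dimensions normalized to a 0–1 scale (the thresholds and engagement labels are illustrative only):

# Hypothetical supplier segmentation by potential value and risk.

def segment(value_score, risk_score):
    high_value = value_score >= 0.6
    high_risk = risk_score >= 0.5
    if high_value and high_risk:
        return "strategic partner: executive sponsor, joint planning"
    if high_value:
        return "key supplier: dedicated relationship manager"
    if high_risk:
        return "monitor: risk mitigation, consider dual sourcing"
    return "transactional: standard procurement process"

suppliers = {"Acme Castings": (0.8, 0.7), "Bolt Bros": (0.2, 0.1)}
for name, (value, risk) in suppliers.items():
    print(name, "->", segment(value, risk))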
Joint activities
Joint activities with suppliers might include:
• Supplier summits, which bring all strategic suppliers together to share the company's strategy, provide feedback on its strategic supplier relationship management program, and solicit feedback and suggestions from key suppliers.
• Executive-to-executive meetings.
• Strategic business planning meetings, where relationship leaders and technical experts meet to discuss joint opportunities, potential roadblocks to collaboration, and the activities and resources required, and to share strategies and relevant market trends. Joint business planning meetings are often accompanied by a clear process to capture supplier ideas and innovations, direct them to relevant stakeholders, and ensure that they are evaluated for commercial suitability, and developed and implemented if they are deemed commercially viable.
• Operational business reviews, where individuals responsible for day-to-day management of the relationship review progress on joint initiatives, operational performance, and risks.
Value measurement
SRM delivers a competitive advantage by harnessing talent and ideas from key supply partners and translating them into product and service offerings for end customers. One tool for monitoring performance and identifying areas for improvement is the joint, two-way performance scorecard. A balanced scorecard includes a mixture of quantitative and qualitative measures, including how key participants perceive the quality of the relationship. These KPIs are shared between customer and supplier and reviewed jointly, reflecting the fact that the relationship is two-way and collaborative, and that strong performance on both sides is required for it to be successful. Advanced organizations conduct 360-degree scorecards, where strategic suppliers are also surveyed for feedback on their performance, the results of which are built into the scorecard. A practice of leading organizations is to track specific SRM savings generated at an individual supplier level, and also at an aggregated SRM program level, through existing procurement benefit measurement systems. Part of the challenge in measuring the financial impact of SRM is that there are many ways SRM can contribute to financial performance. These include cost savings (e.g., most-favoured-customer pricing, joint efforts to improve design, manufacturing, and service delivery for greater efficiency); incremental revenue opportunities (e.g., gaining early or exclusive access to innovative supplier technology; joint efforts to develop innovative products, features, packaging, etc.; avoiding stock-outs through joint demand forecasting); and improved management of risk. In a 2004 Vantage Partners study, respondents reported that on average they could add just over $43 million to their bottom line by implementing supplier relationship management best practices.[6]
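The two-way scorecard described above can be sketched as a weighted mix of quantitative KPIs and qualitative survey ratings, scored in both directions; the KPI names, weights and scores are invented for illustration.

# Hypothetical joint, two-way balanced scorecard.

def weighted_score(kpis):
    """kpis: list of (weight, score 0-100) pairs; weights sum to 1."""
    return sum(w * s for w, s in kpis)

supplier_side = [        # how the customer rates the supplier
    (0.4, 92),           # on-time delivery %
    (0.3, 88),           # quality acceptance %
    (0.3, 75),           # perceived collaboration (survey)
]
customer_side = [        # how the supplier rates the customer
    (0.5, 81),           # forecast accuracy shared with the supplier
    (0.5, 70),           # payment-terms adherence
]

print("supplier scorecard:", weighted_score(supplier_side))   # 85.7
print("customer scorecard:", weighted_score(customer_side))   # 75.5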
Systematic collaboration
In practice, SRM expands the scope of interaction with key suppliers beyond traditional buy-sell transactions to encompass other joint activities that are predicated on a shift in perspective and a change in how relationships are managed, which may or may not entail significant investment. Such activities include:
• Joint research and development
• More disciplined and systematic, and often expanded, information sharing
• Joint demand forecasting and process re-engineering (which has unlocked savings of 10–30 percent for leading organizations)
Technology and systems
There are myriad technology solutions that purport to enable SRM. These systems can be used to gather and track supplier performance data across sites, business units, and/or regions. The benefit is a more comprehensive and objective picture of supplier performance, which can be used to make better sourcing decisions, as well as to identify and address systemic supplier performance problems. It is important to note that SRM software, while valuable, cannot be implemented in the absence of the other business structure and process changes that are recommended as part of implementing SRM as a strategy.
Challenges
• Creating the business case
• Executive sponsorship
• Calculating ROI
• Developing an SRM sales pitch
SRM and supplier performance management
Some confusion may exist over the difference between supplier performance management (SPM) and SRM. SPM is a subset of SRM. A simple way of expressing the difference is that SPM is about ensuring the supplier delivers what has been promised in the contract, which suggests a narrow, one-way process. SRM, in contrast, is about collaboratively driving value for both parties, resulting in lower costs, reduced risk, greater efficiency, better quality, and access to innovation.[7] This requires a focus on both negotiating the contract and managing the resulting relationship throughout implementation, as well as systematic joint value-discovery efforts.[8]
References
Supply chain management
Supply chain management (SCM) is the management of an interconnected network of channel and node businesses involved in the provision of the product and service packages required by the end customers in a supply chain.[2] Supply chain management spans all movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption. Another definition is provided by the APICS Dictionary, which defines SCM as the "design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand and measuring performance globally."[1] SCM draws heavily from the areas of operations management, logistics, procurement and information technology, and strives for an integrated approach.
[Figure: Supply chain management – managing complex and dynamic supply and demand networks (cf. Wieland/Wallenburg, 2011)]
Origin of the term and definitions
The term "supply chain management" entered the public domain when Keith Oliver, a consultant at Booz Allen Hamilton, used it in an interview for the Financial Times in 1982. The term was slow to take hold, as the lexicon is slow to change. It gained currency in the mid-1990s, when a flurry of articles and books came out on the subject. In the late 1990s it rose to prominence as a management buzzword, and operations managers began to use it in their titles with increasing regularity.[3][4][5] Common and accepted definitions of supply chain management include:
• Managing the upstream and downstream value-added flows of materials, final goods and related information among suppliers, the company, resellers and final consumers.
• The systematic, strategic coordination of the traditional business functions and the tactics across these business functions within a particular company and across businesses within the supply chain, for the purposes of improving the long-term performance of the individual companies and the supply chain as a whole (Mentzer et al., 2001).[6]
• A customer-focused definition is given by Hines (2004:p76): "Supply chain strategies require a total systems view of the linkages in the chain that work together efficiently to create customer satisfaction at the end point of delivery to the consumer. As a consequence costs must be lowered throughout the chain by driving out unnecessary costs and focusing attention on adding value. Throughput efficiency must be increased, bottlenecks removed and performance measurement must focus on total systems efficiency and equitable reward distribution to those in the supply chain adding value. The supply chain system must be responsive to customer requirements."[7]
• The Global Supply Chain Forum: supply chain management is the integration of key business processes across the supply chain for the purpose of creating value for customers and stakeholders (Lambert, 2008).[8]
• According to the Council of Supply Chain Management Professionals (CSCMP), supply chain management encompasses the planning and management of all activities involved in sourcing, procurement, conversion, and logistics management. It also includes the crucial components of coordination and collaboration with channel partners, which can be suppliers, intermediaries, third-party service providers, and customers. In essence, supply chain management integrates supply and demand management within and across companies.
More recently, the loosely coupled, self-organizing network of businesses that cooperate to provide product and service offerings has been called the Extended Enterprise. A supply chain, as opposed to supply chain management, is a set of organizations directly linked by one or more of the upstream and downstream flows of products, services, finances, and information from a source to a customer; managing a supply chain is 'supply chain management' (Mentzer et al., 2001).[6] Supply chain management software includes tools or modules used to execute supply chain transactions, manage supplier relationships and control associated business processes. Supply chain event management (SCEM) is the consideration of all possible events and factors that can disrupt a supply chain; with SCEM, possible scenarios can be created and solutions devised. In many cases the supply chain includes the collection of goods after consumer use for recycling. Including 3PLs or other gathering agencies as part of the returns-management repatriation process is a way of illustrating the new end-game strategy.
Problems addressed
Supply chain management must address the following problems:
• Distribution network configuration: the number, location and network missions of suppliers, production facilities, distribution centers, warehouses, cross-docks and customers.
• Distribution strategy: questions of operating control (e.g. centralized, decentralized or shared); delivery scheme (e.g. direct shipment, pool point shipping, cross docking, direct store delivery (DSD), closed loop shipping); mode of transportation (e.g. motor carrier, including truckload, less than truckload (LTL) and parcel; railroad; intermodal transport, including trailer on flatcar (TOFC) and container on flatcar (COFC); ocean freight; airfreight); replenishment strategy (e.g. pull, push or hybrid); and transportation control (e.g. owner-operated, private carrier, common carrier, contract carrier, or third-party logistics (3PL)).
• Trade-offs in logistical activities: the above activities must be well coordinated in order to achieve the lowest total logistics cost. Trade-offs may increase the total cost if only one of the activities is optimized. For example, full truckload (FTL) rates are more economical on a cost-per-pallet basis than LTL shipments. If, however, a full truckload of a product is ordered to reduce transportation costs, there will be an increase in inventory holding costs, which may increase total logistics costs (see the worked example after this list). It is therefore imperative to take a systems approach when planning logistical activities. These trade-offs are key to developing the most efficient and effective logistics and SCM strategy.
• Information: integration of processes through the supply chain to share valuable information, including demand signals, forecasts, inventory, transportation, and potential collaboration.
• Inventory management: the quantity and location of inventory, including raw materials, work-in-process (WIP) and finished goods.
• Cash flow: arranging the payment terms and methodologies for exchanging funds across entities within the supply chain.
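The trade-off bullet above can be made concrete with a small calculation: a cheaper per-pallet transport rate can still raise total logistics cost once inventory holding is counted. All figures are illustrative.

# Worked example of the FTL-vs-LTL trade-off for a high-value product.

annual_demand_pallets = 520        # 10 pallets consumed per week
weekly_holding_cost = 25.0         # $ per pallet per week in inventory

def total_cost(shipment_size, rate_per_pallet):
    transport = annual_demand_pallets * rate_per_pallet
    holding = (shipment_size / 2) * weekly_holding_cost * 52   # avg. stock
    return transport + holding

print("FTL, 26 pallets @ $40:", total_cost(26, 40))   # 20800 + 16900 = 37700
print("LTL,  5 pallets @ $55:", total_cost(5, 55))    # 28600 +  3250 = 31850
# Despite the cheaper per-pallet rate, the full-truckload policy costs
# more in total here because of the extra inventory it carries.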
Supply chain execution means managing and coordinating the movement of materials, information and funds across the supply chain. The flow is bi-directional. SCM applications provide real-time analytical systems that manage the flow of product and information throughout the enterprise supply chain network.
Activities/functions
Supply chain management is a cross-functional approach that includes managing the movement of raw materials into an organization, certain aspects of the internal processing of materials into finished goods, and the movement of finished goods out of the organization and toward the end consumer. As organizations strive to focus on core competencies and become more flexible, they reduce their ownership of raw materials sources and distribution channels. These functions are increasingly being outsourced to other entities that can perform the activities better or more cost-effectively. The effect is to increase the number of organizations involved in satisfying customer demand, while reducing management control of daily logistics operations. Less control and more supply chain partners led to the creation of supply chain management concepts. The purpose of supply chain management is to improve trust and collaboration among supply chain partners, thus improving inventory visibility and the velocity of inventory movement. Several models have been proposed for understanding the activities required to manage material movements across organizational and functional boundaries. Supply Chain Operations Reference (SCOR) is a supply chain management model promoted by the Supply Chain Council. Another model is the SCM Model proposed by the Global Supply Chain Forum (GSCF). Supply chain activities can be grouped into strategic, tactical, and operational levels. The CSCMP has adopted the American Productivity & Quality Center (APQC) Process Classification Framework℠, a high-level, industry-neutral enterprise process model that allows organizations to see their business processes from a cross-industry viewpoint.[9]
Strategic
• Strategic network optimization, including the number, location, and size of warehousing, distribution centers, and facilities.
• Strategic logistics network optimization, including the use of cross-docks and transportation modes.
• Strategic partnerships with suppliers, distributors, and customers, creating communication channels for critical information and operational improvements such as cross docking, direct shipping, and third-party logistics.
• Product life cycle management, so that new and existing products can be optimally integrated into the supply chain and capacity management activities.
• Segmentation of products and customers to guide alignment of corporate objectives with manufacturing and distribution strategy.
• Information technology chain operations.
• Where-to-make and make-buy decisions.
• Aligning overall organizational strategy with supply strategy.
Strategic decisions are long term and require significant resource commitment.
Tactical level
• Sourcing contracts and other purchasing decisions.
• Production decisions, including contracting, scheduling, and planning process definition.
• Inventory decisions, including quantity, location, and quality of inventory.
• Transportation strategy, including frequency, routes, and contracting.
• Benchmarking of all operations against competitors and implementation of best practices throughout the enterprise.
• Milestone payments.
• Focus on customer demand and habits.
Operational level
• Daily production and distribution planning, including all nodes in the supply chain.
• Production scheduling for each manufacturing facility in the supply chain (minute by minute).
• Demand planning and forecasting, coordinating the demand forecast of all customers and sharing the forecast with all suppliers.
• Sourcing planning, including current inventory and forecast demand, in collaboration with all suppliers.
• Inbound operations, including transportation from suppliers and receiving inventory.
• Production operations, including the consumption of materials and flow of finished goods.
• Outbound operations, including all fulfillment activities, warehousing and transportation to customers.
• Order promising, accounting for all constraints in the supply chain, including all suppliers, manufacturing facilities, distribution centers, and other customers (see the sketch after this list).
• Accounting for transit damage cases from the production level to the supply level, and arranging settlement at the customer level, recovering company losses through the insurance company.
• Managing non-moving, short-dated inventory and preventing further products from becoming short-dated.
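Order promising can be illustrated with a simplified available-to-promise (ATP) projection: on-hand stock plus scheduled receipts minus committed orders, projected per period. This is a simplification of the discrete, per-bucket ATP logic found in planning textbooks, and the quantities are invented.

# Hypothetical sketch of a running available-to-promise projection.

def atp(on_hand, receipts, committed):
    """Per-period ATP; receipts and committed are equal-length lists."""
    available, projection = on_hand, []
    for r, c in zip(receipts, committed):
        available += r - c
        projection.append(available)
    return projection

# weeks:        1    2    3    4
receipts   = [  0,  50,   0,  50]
committed  = [ 30,  20,  25,  10]
print(atp(40, receipts, committed))    # [10, 40, 15, 55]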
Importance
Organizations increasingly find that they must rely on effective supply chains, or networks, to compete in the global market and networked economy.[10] In Peter Drucker's (1998) new management paradigms, this concept of business relationships extends beyond traditional enterprise boundaries and seeks to organize entire business processes throughout a value chain of multiple companies. During the past decades, globalization, outsourcing and information technology have enabled many organizations, such as Dell and Hewlett-Packard, to successfully operate solid collaborative supply networks in which each specialized business partner focuses on only a few key strategic activities (Scott, 1993). This inter-organizational supply network can be acknowledged as a new form of organization. However, with the complicated interactions among the players, the network structure fits neither "market" nor "hierarchy" categories (Powell, 1990). It is not clear what kind of performance impacts different supply network structures could have on firms, and little is known about the coordination conditions and trade-offs that may exist among the players. From a systems perspective, a complex network structure can be decomposed into individual component firms (Zhang and Dilts, 2004). Traditionally, companies in a supply network concentrate on the inputs and outputs of the processes, with little concern for the internal workings of other individual players. Therefore, the choice of an internal management control structure is known to impact local firm performance (Mintzberg, 1979). In the 21st century, changes in the business environment have contributed to the development of supply chain networks. First, as an outcome of globalization and the proliferation of multinational companies, joint ventures, strategic alliances and business partnerships, significant success factors were identified, complementing the earlier "just-in-time", lean manufacturing and agile manufacturing practices.[11] Second, technological changes, particularly the dramatic fall in information communication costs, which are a significant component of transaction
costs, have led to changes in coordination among the members of the supply chain network (Coase, 1998). Many researchers have recognized these kinds of supply network structures as a new organization form, using terms such as "Keiretsu", "Extended Enterprise", "Virtual Corporation", "Global Production Network", and "Next Generation Manufacturing System".[12] In general, such a structure can be defined as "a group of semi-independent organizations, each with their capabilities, which collaborate in ever-changing constellations to serve one or more markets in order to achieve some business goal specific to that collaboration" (Akkermans, 2001). The security management system for supply chains is described in ISO/IEC 28000 and ISO/IEC 28001 and related standards published jointly by ISO and IEC.
Historical developments
Six major movements can be observed in the evolution of supply chain management studies: creation, integration, and globalization (Movahedi et al., 2009), specialization phases one and two, and SCM 2.0.
Creation era
The term supply chain management was first coined by Keith Oliver in 1982. However, the concept of a supply chain in management was of great importance long before, in the early 20th century, especially with the creation of the assembly line. The characteristics of this era of supply chain management include the need for large-scale changes, re-engineering, downsizing driven by cost reduction programs, and widespread attention to Japanese management practices.
Integration era
This era of supply chain management studies was highlighted by the development of electronic data interchange (EDI) systems in the 1960s, and developed through the 1990s with the introduction of enterprise resource planning (ERP) systems. The era has continued into the 21st century with the expansion of Internet-based collaborative systems. This era of supply chain evolution is characterized by both increasing value-added and cost reduction through integration. A supply chain can be classified as a stage 1, 2 or 3 network. In a stage 1 supply chain, systems such as make, storage, distribution and material control are not linked and are independent of each other. In a stage 2 supply chain, these are integrated under one plan, enabled by ERP. A stage 3 supply chain is one in which vertical integration with upstream suppliers and downstream customers is achieved. An example of this kind of supply chain is Tesco.
Globalization era
The third movement of supply chain management development, the globalization era, can be characterized by the attention given to global systems of supplier relationships and the expansion of supply chains over national boundaries and into other continents. Although the use of global sources in organizations' supply chains can be traced back several decades (e.g., in the oil industry), it was not until the late 1980s that a considerable number of organizations started to integrate global sources into their core business. This era is characterized by the globalization of supply chain management in organizations with the goal of increasing their competitive advantage, adding value, and reducing costs through global sourcing.
Specialization era (phase I): outsourced manufacturing and distribution
In the 1990s, industries began to focus on "core competencies" and adopted a specialization model. Companies abandoned vertical integration, sold off non-core operations, and outsourced those functions to other companies. This changed management requirements by extending the supply chain well beyond company walls and distributing management across specialized supply chain partnerships. This transition also re-focused the fundamental perspectives of each respective organization. OEMs became brand owners that needed deep visibility into their supply base. They had to control the entire supply chain from above instead of from within. Contract manufacturers had to manage bills of material with different part-numbering schemes from multiple OEMs and support customer requests for work-in-process visibility and vendor-managed inventory (VMI). The specialization model creates manufacturing and distribution networks composed of multiple, individual supply chains specific to products, suppliers, and customers who work together to design, manufacture, distribute, market, sell, and service a product. The set of partners may change according to a given market, region, or channel, resulting in a proliferation of trading partner environments, each with its own unique characteristics and demands.
Specialization era (phase II): supply chain management as a service
Specialization within the supply chain began in the 1980s with the inception of transportation brokerages, warehouse management, and non-asset-based carriers, and has matured beyond transportation and logistics into aspects of supply planning, collaboration, execution and performance management. At any given moment, market forces could demand changes from suppliers, logistics providers, locations and customers, and from any number of these specialized participants as components of supply chain networks. This variability has significant effects on the supply chain infrastructure, from the foundation layers of establishing and managing electronic communication between trading partners, to more complex requirements such as the configuration of the processes and work flows that are essential to the management of the network itself. Supply chain specialization enables companies to improve their overall competencies in the same way that outsourced manufacturing and distribution has done; it allows them to focus on their core competencies and assemble networks of specific, best-in-class partners to contribute to the overall value chain itself, thereby increasing overall performance and efficiency. The ability to quickly obtain and deploy this domain-specific supply chain expertise without developing and maintaining an entirely unique and complex competency in-house is a leading reason why supply chain specialization is gaining popularity. Outsourced technology hosting for supply chain solutions debuted in the late 1990s and has taken root primarily in transportation and collaboration categories. It progressed from the application service provider (ASP) model of approximately 1998 through 2003, to the on-demand model of approximately 2003–2006, to the software as a service (SaaS) model currently in focus today.
Supply chain management 2.0 (SCM 2.0)
Building on globalization and specialization, the term SCM 2.0 has been coined to describe both the changes within the supply chain itself and the evolution of the processes, methods and tools that manage it in this new "era". The growing popularity of collaborative platforms is highlighted by the rise of TradeCard's supply chain collaboration platform, which connects multiple buyers and suppliers with financial institutions, enabling them to conduct automated supply chain finance transactions.[13] Web 2.0 is defined as a trend in the use of the World Wide Web that is meant to increase creativity, information sharing, and collaboration among users. At its core, the common attribute of Web 2.0 is to help navigate the vast amount of information available on the Web in order to find what is being sought: the notion of a usable pathway. SCM 2.0 follows this notion into supply chain operations. It is the pathway to SCM results, a combination
of the processes, methodologies, tools and delivery options that guide companies to their results quickly, as the complexity and speed of the supply chain increase due to the effects of global competition, rapid price fluctuations, surging oil prices, short product life cycles, expanded specialization, near-, far- and off-shoring, and talent scarcity. SCM 2.0 leverages proven solutions designed to rapidly deliver results with the agility to quickly manage future change for continuous flexibility, value and success. This is delivered through competency networks composed of best-of-breed supply chain domain expertise, to understand which elements, both operationally and organizationally, are the critical few that deliver the results, as well as through an intimate understanding of how to manage these elements to achieve the desired results. Finally, the solutions are delivered in a variety of options, such as no-touch via business process outsourcing, mid-touch via managed services and software as a service (SaaS), or high-touch in the traditional software deployment model.
Business process integration
Successful SCM requires a change from managing individual functions to integrating activities into key supply chain processes. In an example scenario, the purchasing department places orders as requirements become known, while the marketing department, responding to customer demand, communicates with several distributors and retailers as it attempts to determine ways to satisfy this demand. Information shared between supply chain partners can only be fully leveraged through process integration. Supply chain business process integration involves collaborative work between buyers and suppliers, joint product development, common systems and shared information. According to Lambert and Cooper (2000), operating an integrated supply chain requires a continuous information flow. However, in many companies, management has reached the conclusion that optimizing product flows cannot be accomplished without implementing a process approach to the business. The key supply chain processes stated by Lambert (2004)[14] are:
• Customer relationship management
• Customer service management
• Demand management
• Order fulfillment
• Manufacturing flow management
• Supplier relationship management
• Product development and commercialization
• Returns management
Much has been written about demand management. Best-in-class companies have similar characteristics, which include the following:
a) internal and external collaboration
b) lead time reduction initiatives
c) tighter feedback from customer and market demand
d) customer-level forecasting
One could suggest other critical supply chain business processes that combine the processes stated by Lambert, such as:
a. Customer service management
b. Procurement
c. Product development and commercialization
d. Manufacturing flow management/support
e. Physical distribution
f. Outsourcing/partnerships
g. Performance measurement
h. Warehousing management
a) Customer service management process
Customer relationship management concerns the relationship between the organization and its customers. Customer service is the source of customer information. It also provides the customer with real-time information on scheduling and product availability through interfaces with the company's production and distribution operations. Successful organizations use the following steps to build customer relationships:
• determine mutually satisfying goals for the organization and customers
• establish and maintain customer rapport
• produce positive feelings in the organization and the customers
b) Procurement process
Strategic plans are drawn up with suppliers to support the manufacturing flow management process and the development of new products. In firms whose operations extend globally, sourcing should be managed on a global basis. The desired outcome is a win-win relationship where both parties benefit, and a reduction in the time required for the design cycle and product development. The purchasing function also develops rapid communication systems, such as electronic data interchange (EDI) and Internet linkages, to convey possible requirements more rapidly. Activities related to obtaining products and materials from outside suppliers involve resource planning, supply sourcing, negotiation, order placement, inbound transportation, storage, handling and quality assurance, many of which include the responsibility to coordinate with suppliers on matters of scheduling, supply continuity, hedging, and research into new sources or programs.
c) Product development and commercialization
Here, customers and suppliers must be integrated into the product development process in order to reduce time to market. As product life cycles shorten, the appropriate products must be developed and successfully launched on ever-shorter schedules to remain competitive. According to Lambert and Cooper (2000), managers of the product development and commercialization process must:
1. coordinate with customer relationship management to identify customer-articulated needs;
2. select materials and suppliers in conjunction with procurement; and
3. develop production technology in manufacturing flow to manufacture and integrate into the best supply chain flow for the product/market combination.
d) Manufacturing flow management process
The manufacturing process produces and supplies products to the distribution channels based on past forecasts. Manufacturing processes must be flexible in order to respond to market changes and must accommodate mass customization. Orders are processed on a just-in-time (JIT) basis in minimum lot sizes. Changes in the manufacturing flow process also lead to shorter cycle times, meaning improved responsiveness and efficiency in meeting customer demand. Activities related to planning, scheduling and supporting manufacturing operations include work-in-process storage, handling and transportation; time-phasing of components; inventory at manufacturing sites; and maximum flexibility in the coordination of geographic and final-assembly postponement of physical distribution operations.
e) Physical distribution
This concerns the movement of a finished product or service to customers. In physical distribution, the customer is the final destination of a marketing channel, and the availability of the product or service is a vital part of each channel participant's marketing effort.
It is also through the physical distribution process that the time and space of customer service become an integral part of marketing; it thus links a marketing channel with its customers (e.g., links manufacturers, wholesalers and retailers).
f) Outsourcing/partnerships
This is not just the outsourcing of the procurement of materials and components, but also the outsourcing of services that traditionally have been provided in-house. The logic of this trend is that the company will increasingly focus on those activities in the value chain where it has a distinctive advantage, and outsource everything else. This movement has been particularly evident in logistics, where the provision of transport, warehousing and inventory control is increasingly subcontracted to specialists or logistics partners. Managing and controlling this network of partners and suppliers requires a blend of both central and local involvement: strategic decisions need to be taken centrally, while the monitoring and control of supplier performance and day-to-day liaison with logistics partners are best managed at a local level.
g) Performance measurement
Experts have found a strong relationship between the largest arcs of supplier and customer integration and market share and profitability. Taking advantage of supplier capabilities and emphasizing a long-term supply chain perspective in customer relationships can both be correlated with firm performance. As logistics competency becomes a more critical factor in creating and maintaining competitive advantage, logistics measurement becomes increasingly important, because the difference between profitable and unprofitable operations becomes narrower. A.T. Kearney Consultants (1985) noted that firms engaging in comprehensive performance measurement realized improvements in overall productivity. According to experts, internal measures are generally collected and analyzed by the firm, including cost, customer service, productivity measures, asset measurement, and quality. External performance is examined through customer perception measures and "best practice" benchmarking.
h) Warehousing management
Warehousing management plays a valuable role in reducing company costs and expenses. Effective warehousing at the company level involves proper storage and office facilities, reduced manpower costs, dispatching authority with on-time delivery, adequate areas and facilities for loading and unloading, an area for a service station, and a stock management system.
Components of supply chain management include: 1. standardization, 2. postponement, and 3. customization.
Theories
Currently there is a gap in the literature on supply chain management studies: there is no theoretical support for explaining the existence and the boundaries of supply chain management. A few authors, such as Halldorsson et al. (2003), Ketchen and Hult (2006) and Lavassani et al. (2009), have tried to provide theoretical foundations for different areas related to the supply chain by employing organizational theories. These theories include:
• Resource-based view (RBV)
• Transaction Cost Analysis (TCA)
• Knowledge-Based View (KBV)
• Strategic Choice Theory (SCT)
• Agency Theory (AT)
• Channel coordination
• Institutional theory (InT)
• Systems Theory (ST)
• Network Perspective (NP)
• Materials Logistics Management (MLM)
• Just-in-Time (JIT)
• Material Requirements Planning (MRP)
• Theory of Constraints (TOC)
• Performance Information Procurement Systems (PIPS)[15]
• Performance Information Risk Management System (PIRMS)[15]
• Total Quality Management (TQM)
• Agile Manufacturing
• Time Based Competition (TBC)
• Quick Response Manufacturing (QRM)
• Customer Relationship Management (CRM)
• Requirements Chain Management (RCM)
• Available-to-promise (ATP)
• and many more
However, the unit of analysis of most of these theories is not the system “supply chain”, but another system such as the “firm” or the “supplier/buyer relationship”. Among the few exceptions is the relational view, which outlines a theory for considering dyads and networks of firms as a key unit of analysis for explaining superior individual firm performance (Dyer and Singh, 1998).[16]
Supply chain centroids
In the study of supply chain management, the concept of centroids has become an important economic consideration. A centroid is a place that has a high proportion of a country's population and a high proportion of its manufacturing, generally within 500 mi (805 km). In the U.S., two major supply chain centroids have been defined, one near Dayton, Ohio, and a second near Riverside, California. The centroid near Dayton is particularly important because it is closest to the population center of the US and Canada. Dayton is within 500 miles of 60% of the population and manufacturing capacity of the U.S., as well as 60% of Canada's population.[17] The region includes the Interstate 70/75 interchange, one of the busiest in the nation, with 154,000 vehicles passing through per day, of which between 30 and 35 percent are trucks hauling goods. In addition, the I-75 corridor is home to the busiest north–south rail route east of the Mississippi.[17]
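The centroid idea corresponds to a weighted centre-of-gravity calculation over demand points. The sketch below uses invented coordinates and weights, and treats latitude/longitude as a flat plane, which is a common simplification for short continental distances.

# Hypothetical centre-of-gravity calculation for a supply chain centroid.

def center_of_gravity(points):
    """points: list of (lat, lon, weight); weight ~ population + output."""
    total = sum(w for _, _, w in points)
    lat = sum(la * w for la, _, w in points) / total
    lon = sum(lo * w for _, lo, w in points) / total
    return lat, lon

demand = [
    (41.9, -87.6, 9.5),   # Chicago area
    (40.4, -80.0, 4.3),   # Pittsburgh area
    (39.1, -84.5, 3.1),   # Cincinnati area
    (42.3, -83.0, 5.2),   # Detroit area
]
print(center_of_gravity(demand))   # about (41.3, -84.6), in western Ohio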
Tax-efficient supply chain management
Tax-efficient supply chain management is a business model that considers the effect of tax in the design and implementation of supply chain management. As a consequence of globalization, cross-national businesses pay different tax rates in different countries. Due to these differences, global players have the opportunity to calculate and optimize their supply chain based on tax efficiency,[18] legally. It is used as a method of gaining more profit for a company that owns a global supply chain.
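The mechanism can be shown with a deliberately simplified two-entity example: the same physical supply chain yields different after-tax profit depending on where the margin is earned. The rates and figures are invented, and real arrangements are constrained by transfer-pricing rules.

# Illustrative after-tax comparison of two supply chain structures.

def after_tax_profit(revenue, cost, tax_rate):
    return (revenue - cost) * (1 - tax_rate)

# Option A: manufacture and sell entirely from a 30%-tax country.
option_a = after_tax_profit(revenue=100.0, cost=60.0, tax_rate=0.30)

# Option B: a principal company in a 15%-tax country buys at 70 and sells
# at 100, leaving a smaller margin (taxed at 30%) with the plant.
option_b = (after_tax_profit(100.0, 70.0, 0.15)     # principal entity
            + after_tax_profit(70.0, 60.0, 0.30))   # manufacturing entity

print(option_a, option_b)    # 28.0 vs 32.5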
Supply chain sustainability
Supply chain sustainability is a business issue affecting an organization's supply chain or logistics network, and is frequently quantified by comparison with SECH ratings, which incorporate the three aspects of the triple bottom line: economic, social and environmental.[19] SECH ratings are defined as social, ethical, cultural and health footprints. Consumers have become more aware of the environmental impact of their purchases and of companies' SECH ratings and, along with non-governmental organizations (NGOs), are setting the agenda for transitions to organically grown foods, anti-sweatshop labor codes and locally produced goods that support independent and small businesses. Because supply chains frequently account for over 75% of a company's carbon footprint, many organizations are exploring how they can reduce this and thus improve their SECH rating.
For example, in July 2009 the U.S.-based Wal-Mart corporation announced its intention to create a global sustainability index that would rate products according to the environmental and social impact made while the products were manufactured and distributed. The sustainability rating index is intended to create environmental accountability in Wal-Mart's supply chain, and to provide the motivation and infrastructure for other retail companies to do the same.[20] More recently, the US Dodd-Frank Wall Street Reform and Consumer Protection Act, signed into law by President Obama in July 2010, contained a supply chain sustainability provision in the form of the Conflict Minerals law. This law requires SEC-regulated companies to conduct third-party audits of their supply chains, to determine whether any tin, tantalum, tungsten or gold (together referred to as conflict minerals) is made of ore mined or sourced from the Democratic Republic of the Congo (DRC), and to create a report (available to the general public and the SEC) detailing the supply chain due diligence efforts undertaken and the results of the audit.[21] The chain of suppliers and vendors to these reporting companies will, of course, be expected to provide appropriate supporting information.
Components
Management components
The SCM components are the third element of the four-square circulation framework. The level of integration and management of a business process link is a function of the number and level, ranging from low to high, of components added to the link (Ellram and Cooper, 1990; Houlihan, 1985). Consequently, adding more management components or increasing the level of each component can increase the level of integration of the business process link. The literature on business process re-engineering,[22] buyer-supplier relationships,[23] and SCM[24] suggests various possible components that must receive managerial attention when managing supply relationships. Lambert and Cooper (2000) identified the following components:
• Planning and control
• Work structure
• Organization structure
• Product flow facility structure
• Information flow facility structure
• Management methods
• Power and leadership structure
• Risk and reward structure
• Culture and attitude
However, a more careful examination of the existing literature[25] leads to a more comprehensive understanding of what the key critical supply chain components should be, the "branches" of the previously identified supply chain business processes; that is, what kind of relationships the components may have with suppliers and customers. Bowersox and Closs state that the emphasis on cooperation represents the synergism leading to the highest level of joint achievement (Bowersox and Closs, 1996). A primary-level channel participant is a business that is willing to participate in the inventory ownership responsibility or assume other aspects of financial risk, thus including primary-level components (Bowersox and Closs, 1996). A secondary-level participant (specialized) is a business that participates in channel relationships by performing essential services for primary participants, including secondary-level components that support primary participants. Third-level channel participants and components that support the primary-level channel participants, and are the fundamental branches of the secondary-level components, may also be included. Consequently, Lambert and Cooper's framework of supply chain components does not lead to any conclusion about what the primary or secondary (specialized) level supply chain components are (see Bowersox and Closs, 1996, p. 93); that is, which supply chain components should be viewed as primary or secondary, how these components should be structured in order to achieve a more comprehensive supply chain structure, and how the supply chain should be examined as an integrative one (see above sections 2.1 and 3.1).
Reverse supply chain
Reverse logistics is the process of managing the return of goods. Reverse logistics is also referred to as "Aftermarket Customer Services". In other words, any time money is taken from a company's warranty reserve or service logistics budget, one can speak of a reverse logistics operation.
Systems and value
Supply chain systems configure value for those that organize the networks. Value is the additional revenue over and above the costs of building the network. Co-creating value and sharing the benefits appropriately to encourage effective participation is a key challenge for any supply system. Tony Hines defines value as follows: "Ultimately it is the customer who pays the price for service delivered that confirms value and not the producer who simply adds cost until that point."[7]
Global applications
Global supply chains pose challenges regarding both quantity and value. Supply and value chain trends include:
• Globalization
• Increased cross-border sourcing
• Collaboration for parts of the value chain with low-cost providers
• Shared service centers for logistical and administrative functions
• Increasingly global operations, which require increasingly global coordination and planning to achieve global optimums
• Complex problems that increasingly involve midsized companies as well
These trends have many benefits for manufacturers because they make possible larger lot sizes, lower taxes, and better environments (culture, infrastructure, special tax zones, sophisticated OEM) for their products. Meanwhile, on top of the problems recognized in supply chain management, there are many more challenges when the scope of supply chains is global. This is because with a supply chain of a larger scope, the lead time is much longer. Furthermore, more issues are involved, such as multiple currencies and differing policies and laws. The consequent problems include: 1) different currencies and valuations in different countries; 2) different tax laws (Tax Efficient Supply Chain Management); 3) different trading protocols; 4) lack of transparency of cost and profit.
Certification
There are several certification programmes for supply chain management staff development, including APICS (the Association for Operations Management), ISCEA (the International Supply Chain Education Alliance) and IOSCM (the Institute of Supply Chain Management). APICS' certification is called the Certified Supply Chain Professional (CSCP), and ISCEA's certification is called the Certified Supply Chain Manager (CSCM). Another body, the Institute for Supply Management, is developing a certification called the Certified Professional in Supply Management (CPSM)[26] focused on the procurement and sourcing areas of supply chain management, also called supply management. The Purchasing Management Association of Canada is the main certifying body for Canada, with its designations having global reciprocity. The designation Supply Chain Management Professional (SCMP) is the main designation, with several others that progress toward the SCMP. Topics addressed by selected professional supply chain certification programmes are compared below:[26][27]
The awarding bodies and certifications compared are:
• ISM CPM – Institute for Supply Management: Certified Purchasing Manager
• ISM CPSM – Institute for Supply Management: Certified Professional in Supply Management
• APICS CPIM – The Association for Operations Management: Certified Production and Inventory Management
• APICS CSCP – The Association for Operations Management: Certified Supply Chain Professional
• AST&L CTL – American Society of Transportation and Logistics: Certification in Transportation and Logistics
• ISCEA CSCM – International Supply Chain Education Alliance: Certified Supply Chain Manager
• ISCEA CSCA – International Supply Chain Education Alliance: Certified Supply Chain Analyst
• IOSCM – Institute of Supply Chain Management

| Topic | ISM CPM | ISM CPSM | APICS CPIM | APICS CSCP | AST&L CTL | ISCEA CSCM | ISCEA CSCA | IOSCM |
|---|---|---|---|---|---|---|---|---|
| Procurement | High | High | Low | High | Low | High | High | High |
| Strategic Sourcing | Low | High | Low | Low | Low | High | Low | Low |
| New Product Development | Low | High | Low | High | Low | Low | Low | Low |
| Production, Lot Sizing | Low | Low | High | Low | High | Low | Low | High |
| Quality | High | High | High | High | Low | Low | Low | High |
| Lean Six Sigma | Low | Low | Low | Low | Low | High | High | Low |
| Inventory Management | High | High | High | High | High | High | High | High |
| Warehouse Management | Low | Low | Low | Low | High | Low | High | High |
| Network Design | Low | Low | High | Low | High | High | High | Low |
| Transportation | High | Low | High | Low | High | High | High | High |
| Demand Management, S&OP | Low | High | High | High | High | High | High | High |
| Integrated SCM | High | Low | Low | High | High | High | High | High |
| CRM, Customer Service | Low | Low | Low | High | Low | High | Low | High |
| Pricing | Low | Low | Low | Low | Low | Yes | Yes | Low |
| Risk Management | Low | High | High | Low | Low | Low | Low | High |
| Project Management | Low | High | High | Low | Low | Yes | Low | High |
| Leadership, People Management | High | High | High | Low | Low | High | Low | High |
| Technology | High | Low | Low | High | High | High | High | High |
| Theory of Constraints | Low | Low | Low | Low | Low | High | High | Low |
| Operational Accounting | High | High | Low | Low | Low | High | Low | Low |
References
[1] cf. Andreas Wieland, Carl Marcus Wallenburg (2011): Supply-Chain-Management in stürmischen Zeiten. Berlin.
[2] Harland, C.M. (1996) Supply Chain Management, Purchasing and Supply Management, Logistics, Vertical Integration, Materials Management and Supply Chain Dynamics. In: Slack, N (ed.) Blackwell Encyclopedic Dictionary of Operations Management. UK: Blackwell.
[3] David Jacoby (2009), Guide to Supply Chain Management: How Getting it Right Boosts Corporate Performance (The Economist Books), Bloomberg Press; 1st edition, ISBN 978-1576603451
[4] Andrew Feller, Dan Shunk, & Tom Callarman (2006). BPTrends, March 2006 - Value Chains Vs. Supply Chains
[5] David Blanchard (2010), Supply Chain Management Best Practices, 2nd Edition, John Wiley & Sons, ISBN 9780470531884
[6] Mentzer, J.T. et al. (2001): Defining Supply Chain Management, in: Journal of Business Logistics, Vol. 22, No. 2, 2001, pp. 1–25
[7] Hines, T. 2004. Supply chain strategies: Customer driven and customer focused. Oxford: Elsevier.
[8] Cooper et al., 1997
[9] CSCMP Supply Chain Management Process Standards
[10] Baziotopoulos, 2004
[11] MacDuffie and Helper, 1997; Monden, 1993; Womack and Jones, 1996; Gunasekaran, 1999
[12] Drucker, 1998; Tapscott, 1996; Dilts, 1999
[13] Trade Services and the Supply Chain (http://www.cgi.com/sites/cgi.com/files/GTR_AcceleratingSupplyChainFinance_Starace_Quote_e.pdf)
[14] Lambert, Douglas M. Supply Chain Management: Processes, Partnerships, Performance (http://www.scm-institute.org), 3rd edition, 2008.
[15] http://pbsrg.com/best-value-model/
[16] http://dx.doi.org/10.2307/259056
[17] Doug Page, "Dayton Region a Crucial Hub for Supply Chain Management" (http://www.daytondailynews.com/business/dayton-region-a-crucial-hub-for-supply-chain-managment-457836.html), Dayton Daily News, 2009-12-21.
[18] Investor Words definition of "tax efficient" (http://www.investorwords.com/4893/tax_efficient.html)
[19] Khairul Anuar Rusli, Azmawani Abd Rahman and Ho, J.A. Green Supply Chain Management in Developing Countries: A Study of Factors and Practices in Malaysia. Paper presented at the 11th International Annual Symposium on Sustainability Science and Management (UMTAS) 2012, Kuala Terengganu, 9–11 July 2012. See publication here (http://fullpaperumtas2012.umt.edu.my/files/2012/07/BE03-ORAL-PP-278-285.pdf)
[20] Wal-Mart's Sustainability Index and Supply Chain Green Standards (http://retailindustry.about.com/b/2009/07/20/u-s-green-retailing-update-will-wal-mart-profit-from-high-supply-chain-standards-while-its-own-environmental-standards-are-low.htm)
[21] http://en.wikipedia.org/wiki/Conflict_minerals
[22] Macneil, 1975; Williamson, 1974; Hewitt, 1994
[23] Stevens, 1989; Ellram and Cooper, 1993; Ellram and Cooper, 1990; Houlihan, 1985
[24] Cooper et al., 1997; Lambert et al., 1996; Turnbull, 1990
[25] Zhang and Dilts, 2004; Vickery et al., 2003; Hemila, 2002; Christopher, 1998; Joyce et al., 1997; Bowersox and Closs, 1996; Williamson, 1991; Courtright et al., 1989; Hofstede, 1978
[26] David Jacoby, 2009, Guide to Supply Chain Management: How Getting it Right Boosts Corporate Performance (The Economist Books), Bloomberg Press; 1st edition, ISBN 978-1576603451. Chapter 10, Organising, training and developing staff
[27] Boston Strategies International
Notes
• Cooper, M.C., Lambert, D.M., & Pagh, J. (1997) Supply Chain Management: More Than a New Name for Logistics. The International Journal of Logistics Management Vol 8, Iss 1, pp 1–14
• FAO, 2007, Agro-industrial supply chain management: Concepts and applications. AGSF Occasional Paper 17 Rome. (http://www.fao.org/ag/ags/publications/docs/AGSF_OccassionalPapers/agsfop17.pdf)
• Haag, S., Cummings, M., McCubbrey, D., Pinsonneault, A., & Donovan, R. (2006), Management Information Systems For the Information Age (3rd Canadian Ed.), Canada: McGraw Hill Ryerson ISBN 0-07-281947-2
• Halldorsson, Arni, Herbert Kotzab & Tage Skjott-Larsen (2003). Inter-organizational theories behind Supply Chain Management – discussion and applications, In Seuring, Stefan et al. (eds.), Strategy and Organization in Supply Chains, Physica Verlag.
• Halldorsson, A., Kotzab, H., Mikkola, J. H., Skjoett-Larsen, T. (2007). Complementary theories to supply chain management. Supply Chain Management: An International Journal, Volume 12 Issue 4, 284-296.
• Handfield and Bechtel, 2001; Prater et al., 2001; Kern and Willcocks, 2000; Bowersox and Closs, 1996; Christopher, 1992; Bowersox, 1989
• Hines, T. 2004. Supply chain strategies: Customer driven and customer focused. Oxford: Elsevier.
• Kallrath, J., Maindl, T.I. (2006): Real Optimization with SAP® APO. Springer ISBN 3-540-22561-7.
• Kaushik K.D., & Cooper, M. (2000). Industrial Marketing Management. Volume 29, Issue 1, January 2000, Pages 65–83
• Ketchen Jr., G., & Hult, T.M. (2006). Bridging organization theory and supply chain management: The case of best value supply chains. Journal of Operations Management, 25(2) 573-580.
• Kouvelis, P.; Chambers, C.; Wang, H. (2006): Supply Chain Management Research and Production and Operations Management: Review, Trends, and Opportunities. In: Production and Operations Management, Vol. 15, No. 3, pp. 449–469.
• Larson, P.D. and Halldorsson, A. (2004). Logistics versus supply chain management: an international survey. International Journal of Logistics: Research & Application, Vol. 7, Issue 1, 17-31.
• Movahedi B., Lavassani K., Kumar V. (2009) Transition to B2B e-Marketplace Enabled Supply Chain: Readiness Assessment and Success Factors, The International Journal of Technology, Knowledge and Society, Volume 5, Issue 3, pp. 75–88.
• Lavassani K., Movahedi B., Kumar V. (2009) Developments in Theories of Supply Chain Management: The Case of B2B Electronic Marketplace Adoption, The International Journal of Knowledge, Culture and Change Management, Volume 9, Issue 6, pp. 85–98.
• Mentzer, J.T. et al. (2001): Defining Supply Chain Management, in: Journal of Business Logistics, Vol. 22, No. 2, 2001, pp. 1–25
• Simchi-Levi D., Kaminsky P., Simchi-Levi E. (2007), Designing and Managing the Supply Chain, third edition, McGraw Hill
External links
• CIO Magazine's ABCs of supply chain management (http://www.cio.com/article/40940)
Manufacturing
Manufacturing is the production of goods for use or sale using labor and machines, tools, chemical and biological processing, or formulation. The term may refer to a range of human activity, from handicraft to high tech, but is most commonly applied to industrial production, in which raw materials are transformed into finished goods on a large scale. Such finished goods may be used for manufacturing other, more complex products, such as aircraft, household appliances or automobiles, or sold to wholesalers, who in turn sell them to retailers, who then sell them to end users – the "consumers". Manufacturing occurs under all types of economic systems. In a free market economy, manufacturing is usually directed toward the mass production of products for sale to consumers at a profit. In a collectivist economy, manufacturing is more frequently directed by the state to supply a centrally planned economy. In mixed market economies, manufacturing occurs under some degree of government regulation. Modern manufacturing includes all intermediate processes required for the production and integration of a product's components. Some industries, such as semiconductor and steel manufacturers, use the term fabrication instead. The manufacturing sector is closely connected with engineering and industrial design. Examples of major manufacturers in North America include General Motors Corporation, General Electric, and Pfizer. Examples in Europe include Volkswagen Group, Siemens, and Michelin. Examples in Asia include Toyota, Samsung, and Bridgestone.
History and development
• In its earliest form, manufacturing was usually carried out by a single skilled artisan with assistants. Training was by apprenticeship. In much of the pre-industrial world the guild system protected the privileges and trade secrets of urban artisans.
• Before the Industrial Revolution, most manufacturing occurred in rural areas, where household-based manufacturing served as a supplemental subsistence strategy to agriculture (and continues to do so in places). Entrepreneurs organized a number of manufacturing households into a single enterprise through the putting-out system.
• Toll manufacturing is an arrangement whereby a first firm with specialized equipment processes raw materials or semi-finished goods for a second firm.
Manufacturing systems: changes in methods of manufacturing
• Craft or Guild system
• Agile manufacturing
• American system of manufacturing
• English system of manufacturing
• Fabrication
• Flexible manufacturing
• Just In Time manufacturing
• Lean manufacturing
• Mass customization
• Mass production
• Ownership
• Packaging and labeling
• Prefabrication
• Putting-out system
• Rapid manufacturing
• Reconfigurable manufacturing system
• Soviet collectivism in manufacturing
[Image: Assembly of Section 41 of a Boeing 787 Dreamliner]
Industrial policy

Economics of manufacturing
According to some economists, manufacturing is a wealth-producing sector of an economy, whereas a service sector tends to be wealth-consuming.[1][2] Emerging technologies have provided some new growth in advanced manufacturing employment opportunities in the Manufacturing Belt in the United States. Manufacturing provides important material support for national infrastructure and for national defense. On the other hand, most manufacturing may involve significant social and environmental costs. The clean-up costs of hazardous waste, for example, may outweigh the benefits of a product that creates it. Hazardous materials may expose workers to health risks. Developed countries regulate manufacturing activity with labor laws and environmental laws. Across the globe, manufacturers can be subject to regulations and pollution taxes to offset the environmental costs of manufacturing activities. Labor unions and craft guilds have played a historic role in the negotiation of worker rights and wages. Environmental laws and labor protections that are available in developed nations may not be available in the third world. Tort law and product liability impose additional costs on manufacturing. These are significant dynamics in the ongoing process, occurring over the last few decades, of manufacturing-based industries relocating operations to "developing-world" economies, where the costs of production are significantly lower than in "developed-world" economies. Manufacturing may require huge amounts of fossil fuels. Automobile construction requires, on average, 20 barrels of oil.[3]
Manufacturing and investment
Surveys and analyses of trends and issues in manufacturing and investment around the world focus on such things as:
• the nature and sources of the considerable variations that occur cross-nationally in levels of manufacturing and wider industrial-economic growth;
• competitiveness; and
• attractiveness to foreign direct investors.
In addition to general overviews, researchers have examined the features and factors affecting particular key aspects of manufacturing development. They have compared production and investment in a range of Western and non-Western countries and presented case studies of growth and performance in important individual industries and market-economic sectors.[4][5] On June 26, 2009, Jeff Immelt, the CEO of General Electric, called for the United States to increase its manufacturing base employment to 20% of the workforce, commenting that the U.S. has outsourced too much in some areas and can no longer rely on the financial sector and consumer spending to drive demand.[6] Further, while U.S. manufacturing performs well compared to the rest of the U.S. economy, research shows that it performs poorly compared to manufacturing in other high-wage countries.[7] A total of 3.2 million – one in six – U.S. manufacturing jobs disappeared between 2000 and 2007.[8] In the UK, EEF, the manufacturers' organisation, has led calls for the UK economy to be rebalanced to rely less on financial services, and has actively promoted the manufacturing agenda.
Countries by manufacturing output using the most recent known data
Data is provided by the World Bank.[9][10] It shows the total value of manufacturing in US dollars for the noted year.

| Rank | Country/Region | Manufacturing output (millions of US$) | Year |
|---|---|---|---|
| – | World | 9,963,056 | 2010 |
| – | European Union | 2,257,019 | 2010 |
| 1 | United States | 1,771,400 | 2010 |
| 2 | China | 1,756,621 | 2010 |
| – | Eurozone | 1,744,073 | 2010 |
| 3 | Japan | 1,063,593 | 2010 |
| 4 | Germany | 610,184 | 2010 |
| 5 | South Korea | 313,429 | 2011 |
| 6 | Brazil | 308,125 | 2011 |
| 7 | Italy | 306,196 | 2010 |
| 8 | France | 253,608 | 2009 |
| 9 | Russia | 252,125 | 2011 |
| 10 | India | 238,621 | 2011 |
| 11 | United Kingdom | 229,615 | 2010 |
| 12 | Indonesia | 205,632 | 2011 |
| 13 | Mexico | 202,974 | 2011 |
| 14 | Spain | 172,433 | 2009 |
| 15 | Canada | 169,120 | 2008 |
| 16 | Turkey | 125,825 | 2011 |
| 17 | Thailand | 113,606 | 2010 |
| 18 | Australia | 98,344 | 2010 |
| 19 | Argentina | 84,100 | 2011 |
| 20 | Poland | 76,438 | 2010 |
Manufacturing processes
• List of manufacturing processes
• Manufacturing Process Management

Theories
• Taylorism/Scientific management
• Fordism

Control
• Management
• List of management topics
• Total Quality Management
• Quality control
• Six Sigma
References
[3] "World oil supplies are set to run out faster than expected, warn scientists" (http://www.independent.co.uk/news/science/world-oil-supplies-are-set-to-run-out-faster-than-expected-warn-scientists-453068.html). The Independent. June 14, 2007.
[4] Manufacturing & Investment Around The World: An International Survey Of Factors Affecting Growth & Performance, ISR Publications/Google Books, revised second edition, 2002. ISBN 978-0-906321-25-6.
[6] Bailey, David and Soyoung Kim (June 26, 2009). GE's Immelt says U.S. economy needs industrial renewal (http://www.guardian.co.uk/business/feedarticle/8578904). UK Guardian. Retrieved on June 28, 2009.
[7] Brookings Institution, Why Does Manufacturing Matter? Which Manufacturing Matters?, February 2012 (http://www.brookings.edu/~/media/research/files/papers/2012/2/22 manufacturing helper krueger wial/0222_manufacturing_helper_krueger_wial)
[8] "Factory jobs: 3 million lost since 2000" (http://www.usatoday.com/money/economy/2007-04-20-4155011268_x.htm). USATODAY.com. April 20, 2007.
[9] "Manufacturing, value added (current US$)" (http://data.worldbank.org/indicator/NV.IND.MANF.CD/countries/1W?order=wbapi_data_value_2010 wbapi_data_value&sort=desc&display=default). Accessed February 20, 2013.
[10] "Manufacturing, value added (current US$) for EU and Eurozone" (http://data.worldbank.org/indicator/NV.IND.MANF.CD/countries/1W-EU-XC?display=graph). Accessed February 20, 2013.
Sources
1. Kalpakjian, Serope; Steven Schmid (August 2005). Manufacturing, Engineering & Technology. Prentice Hall. pp. 22–36, 951–988. ISBN 0-13-148965-8.
External links
• Cato Institute article: Thriving in a Global Economy: The Truth about U.S. Manufacturing and Trade (http://www.freetrade.org/node/737)
• How Everyday Things Are Made (http://manufacturing.stanford.edu): video presentations
• TIME Magazine article on American manufacturing's global effectiveness (http://www.time.com/time/magazine/article/0,9171,1739309,00.html)
• Grant Thornton IBR 2008 Manufacturing industry focus (http://www.internationalbusinessreport.com/files/ibr2008_manufacturing_lo.pdf)
• MFGWatch - Quarterly Survey of North American Manufacturers (http://www.mfg.com/en/mfgwatch/)
• EEF, the manufacturers' organisation - industry group representing UK manufacturers (http://www.eef.org.uk/default.htm)
• Industry Today - Industrial and Manufacturing Methodologies (http://www.industrytoday.com)
List of ERP software packages

Free and Open Source ERP software

| ERP Package | Language Base | License | Other Info | Developer Country |
|---|---|---|---|---|
| A1.iO | Java | ATOL | ERP for Public Sector, Campus Management, Healthcare, Logistics; developed by A1.iO | Worldwide |
| Adaxa Suite | Java | GPL | Integrated ERP built on Adempiere | Australia/New Zealand |
| Adempiere | Java | GPL | Started as a fork of Compiere | Spain |
| Compiere | Java | GPL/Commercial | Acquired by Consona Corporation in June 2010 | US |
| Dolibarr | PHP, MySQL | GPL | | |
| EpesiBIM | PHP, MySQL | MIT license | Web based application | Poland, USA |
| ERP5 | Python, Zope, MySQL | GPL | Based on unified model | Brazil, France, Germany, Japan, Sénégal |
| ERPNEXT | Python, JavaScript, MySQL | GPL | ERP for small and medium businesses | India |
| Fedena | Ruby, MySQL | Apache License | ERP for Schools/Universities | India |
| FrontAccounting | PHP, MySQL | GPLv3 | Web-based system | |
| GNU Enterprise | Python | GPLv3 | | |
| HeliumV | Java | AGPL | | |
| JFire | Java | LGPL | ERP for small and medium businesses | Austria, Germany |
| Kuali Foundation | Java | ECL | For higher education, by higher education | USA |
| LedgerSMB | Perl, PostgreSQL | GPL | Started as a fork of SQL-Ledger in 2006 | |
| OFBiz | Apache, Java | Apache License 2.0 | ERP for small and medium businesses | |
| Openbravo | Java | Openbravo Public License (OBPL), a free software license based on the Mozilla Public License (MPL) | | Spain |
| OpenERP | Python, PostgreSQL | AGPLv3 | OpenERP version 7.0 was released on 12/21/12; OpenERP was formerly known as Tiny ERP | Belgium, India, USA |
| Phreedom | PHP, Javascript, MySQL | GPLv3 | Expanded from Phreebooks accounting engine | |
| Postbooks | C++, JavaScript, PostgreSQL | CPAL | Produced by XTuple, uses Qt framework | |
| SQL-Ledger | Perl, PostgreSQL | GPL | | |
| Tryton | Python | GPLv3 | Started as a fork of OpenERP | |
| WebERP | PHP, MySQL | GPLv2 | LAMP based system | Worldwide |
Proprietary ERP software
• 1C:Enterprise from 1C Company
• 24SevenOffice Start, Premium, Professional and Custom from 24SevenOffice
• A1.iO from Alliance Technologies
• A1 Academia from Alliance Technologies
• abas Business Software from ABAS Software AG
• Access SupplyChain from the Access Group
• Accpac from The Sage Group
• Activant acquired by Epicor
• Acumatica Cloud ERP from Acumatica
• AddonSoftware from BASIS International
• Agresso Business World from Unit4
• AIVA 9001 from AIVA SISTEMA
• AXIS ERP from Consona Corporation
• Baan ERP from Infor Global Solutions
• AMS Advantage from CGI Group (formerly American Management Systems)
• BatchMaster ERP from BatchMaster Software
• CGram Enterprise from CGram Software
• Cimnet Systems from Consona Corporation
• Ciright ERP from Ciright Systems
• Clear Enterprise from Clear Objective
• COA Solutions Ltd - Smart Business Suite
• Coda Financials from Unit4
• Comarch Altum from Comarch
• Comarch Semiramis from Comarch
• Compass ERP from Transtek
• Compiere professional edition from Consona Corporation
• DEACOM ERP from Deacom
• EFACS from Exel Computer Systems and RAD Software
• Encompix ERP from Consona Corporation
• ENFOS
• Epicor Enterprise from Epicor
• Exact MAX from Exact Software
• Exact Macola ES from Exact Software
• FinancialForce Accounting from FinancialForce.com
• FinancialForce Professional Services Automation (aka PSA) from FinancialForce.com
• Fishbowl Inventory from Fishbowl
• Greentree Business Software from Greentree International
• IFS Applications from Industrial and Financial Systems
• Ignition MES and OEE Module from Inductive Automation
• Infor10 Barcode from Infor Global Solutions
• Infor10 Discrete iEnterprise (XA) (aka MAPICS) from Infor Global Solutions
• Infor10 Distribution Business (aka SX.Enterprise) from Infor Global Solutions
• Infor10 Distribution Express (aka FACTS) from Infor Global Solutions
• Infor10 ERP Business (aka SyteLine) from Infor Global Solutions
• Infor10 ERP Visual (aka Visual Enterprise) from Infor Global Solutions
• Infor10 ERP Process Business (aka Adage) from Infor Global Solutions
• Infor ERP Blending (aka BLENDING) from Infor Global Solutions
• Intacct Intacct and Intacct Accountant Edition
• Intuitive ERP from Consona Corporation
• IRIS Exchequer from IRIS Software
• JD Edwards EnterpriseOne from Oracle
• JD Edwards World from Oracle
• Jeeves from Jeeves Information Systems AB
• JustFoodERP from IndustryBuilt Software Corp.
• kVASy4 from SIV.AG
• Log-net from LOG-NET, Inc.
• Maximo (MRO) from IBM
• Made2Manage ERP from Consona Corporation
• MECOMS from Ferranti Computer Systems
• Microsoft Dynamics AX (formerly Axapta) from Microsoft
• Microsoft Dynamics GP (formerly Great Plains) from Microsoft
• Microsoft Dynamics NAV (formerly Navision) from Microsoft
• Microsoft Dynamics SL (formerly Solomon) from Microsoft
• Momentum from CGI Group
• mySAP from SAP
• MyWorkPLAN from Sescoi
• NAV-X from Microsoft and NAV-X LLC
• NetSuite from NetSuite Inc.
• Openda QX from Openda
• OpenMFG from xTuple
• Opera (I, II and 3) from Pegasus
• Oracle E-Business Suite from Oracle
• Oracle Fusion from Oracle
• OSAS from Open Systems Accounting Software
• PeopleSoft from Oracle
• Plex Online from Plex Systems
• ProfitKey from ProfitKey International
• Pronto Software from Pronto Software
• Quintiq
• QAD Enterprise Applications (formerly MFG/Pro) from QAD Inc
• Ramco Enterprise Series 4.x from Ramco Systems
• Ramco e.Applications from Ramco Systems
• Ramco On Demand ERP from Ramco Systems
• Rapid Response Manufacturing from ProfitKey International
• TeamWox from MetaQuotes Software corp.
• SAGE PFW ERP from The Sage Group
• SAGE PRO ERP from The Sage Group
• SAGE ERP 100 from The Sage Group
• SAGE ERP 300 from The Sage Group
• SAGE ERP 500 from The Sage Group
• Sage ERP X3 from The Sage Group
• SAP Business All-in-One from SAP
• SAP Business ByDesign from SAP
• SAP Business One from SAP
• SAP Business Suite from SAP
• SohoOS
• SYSPRO from Syspro
• Tally.ERP 9 from Tally Solutions
• Technology One from Technology One
• TradeXpress from TradeCard
• TRAVERSE from Open Systems Accounting Software
• UFIDA NC from UFIDA
• UFIDA ERP-U8 All-in-one from UFIDA
• UFIDA U9 from UFIDA
• Visibility.net from Visibility
• Workday from Workday, Inc.
• WorkPLAN Enterprise from Sescoi
ABAP
ABAP/4
• Paradigm(s): Object-oriented, structured, imperative
• Appeared in: 1983
• Designed by: SAP AG
• Typing discipline: Static, strong, safe, nominative
• Major implementations: SAP R/2, SAP R/3
• Influenced by: Objective-C[citation needed], COBOL[citation needed], SQL[citation needed]
• OS: Cross-platform
• Website: scn.sap.com/community/abap[1]
ABAP (Advanced Business Application Programming, originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report creation processor"[2]) is a high-level programming language created by the German software company SAP. It is currently positioned, alongside the more recently introduced Java, as the language for programming the SAP Application Server, part of its NetWeaver platform for building business applications. The syntax of ABAP is somewhat similar to COBOL.[citation needed]
Introduction
ABAP is one of the many application-specific fourth-generation languages (4GLs) first developed in the 1980s. It was originally the report language for SAP R/2, a platform that enabled large corporations to build mainframe business applications for materials management and financial and management accounting. ABAP used to be an abbreviation of Allgemeiner BerichtsAufbereitungsProzessor, German for "generic report preparation processor", but was later renamed to the English Advanced Business Application Programming. ABAP was one of the first languages to include the concept of Logical Databases (LDBs), which provides a high level of abstraction from the basic database level(s). The ABAP language was originally used by developers to develop the SAP R/3 platform. It was also intended to be used by SAP customers to enhance SAP applications – customers can develop custom reports and interfaces with ABAP programming. The language is fairly easy to learn for programmers, but it is not a tool for direct use by non-programmers. Knowledge of relational database design and preferably also of object-oriented concepts is necessary to create ABAP programs. ABAP remains the language for creating programs for the client-server R/3 system, which SAP first released in 1992. As computer hardware evolved through the 1990s, more and more of SAP's applications and systems were written in ABAP. By 2001, all but the most basic functions were written in ABAP. In 1999, SAP released an object-oriented extension to ABAP called ABAP Objects, along with R/3 release 4.6. SAP's current development platform NetWeaver supports both ABAP and Java.
ABAP runtime environment
All ABAP programs reside inside the SAP database. They are not stored in separate external files like Java or C++ programs. In the database all ABAP code exists in two forms: source code, which can be viewed and edited with the ABAP Workbench tools; and generated code, a binary representation somewhat comparable with Java bytecode. ABAP programs execute under the control of the runtime system, which is part of the SAP kernel. The runtime system is responsible for processing ABAP statements, controlling the flow logic of screens and responding to events (such as a user clicking on a screen button); in this respect it can be seen as a Virtual Machine comparable with the Java VM. A key component of the ABAP runtime system is the Database Interface, which turns database-independent ABAP statements ("Open SQL") into statements understood by the underlying DBMS ("Native SQL"). The database interface handles all the communication with the relational database on behalf of ABAP programs; it also contains extra features such as buffering of tables and frequently accessed data in the local memory of the application server.
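To make the role of the Database Interface concrete, here is a minimal Open SQL sketch. It assumes the standard flight-demo table SFLIGHT that ships with many SAP systems (an assumption, not something this article prescribes); the same statement runs unchanged on any supported DBMS because the interface translates it into the appropriate Native SQL:

DATA: GT_FLIGHTS TYPE TABLE OF SFLIGHT,
      GS_FLIGHT  TYPE SFLIGHT.

* Open SQL: database-independent; the Database Interface generates the
* Native SQL for the underlying DBMS (and may serve the result from the
* application server's table buffer where buffering is configured)
SELECT * FROM SFLIGHT
  INTO TABLE GT_FLIGHTS
  WHERE CARRID = 'LH'.

LOOP AT GT_FLIGHTS INTO GS_FLIGHT.
  WRITE: / GS_FLIGHT-CARRID, GS_FLIGHT-CONNID, GS_FLIGHT-FLDATE.
ENDLOOP.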
SAP Basis
The ABAP language environment, including the syntax checking, code generation, and runtime system, is part of the SAP Basis component/layer. SAP Basis is the technological platform that supports the entire range of SAP applications, now typically implemented in the framework of the SAP Web Application Server. In that sense SAP Basis can be seen as the virtual machine on which SAP applications run. Like any operating system, SAP Basis contains both low-level services (for example memory management, database communication, or servicing Web requests) and high-level tools for end users and administrators. These tools can be executables ("SAP kernel") running directly on the underlying operating system, transactions developed in ABAP, or Web-based programs. SAP Basis also provides a layer of abstraction between the business applications, the operating system and the database. This ensures that applications do not depend directly upon a specific server or database platform and can easily be ported from one platform to another. SAP Basis currently runs on UNIX (AIX, HP-UX, Solaris, Linux), Microsoft Windows, i5/OS on IBM System i (formerly iSeries, AS/400), and z/OS on IBM System z (formerly zSeries, S/390). Supported databases are IBM DB2, Informix, MaxDB, Oracle, and Microsoft SQL Server (support for Informix was discontinued in SAP Basis release 7.00).
SAP systems and landscapes
All SAP data exists and all SAP software runs in the context of an SAP system. A system consists of a central relational database and one or more application servers ("instances") accessing the data and programs in this database. An SAP system contains at least one instance but may contain more, mostly for reasons of sizing and performance. In a system with multiple instances, load balancing mechanisms ensure that the load is spread evenly over the available application servers.
Installations of the Web Application Server (landscapes) typically consist of three systems: one for development; one for testing and quality assurance; and one for production. The landscape may contain more systems (e.g., separate systems for unit testing and pre-production testing) or it may contain fewer (e.g., only development and production, without separate QA); nevertheless three is the most common configuration. ABAP programs are created and undergo first testing in the development system. Afterwards they are distributed to the other systems in the landscape. These actions take place under control of the Change and Transport System (CTS), which is responsible for concurrency control (e.g., preventing two developers from changing the same code at the same time), version management, and deployment of programs on the QA and production systems.
The Web Application Server consists of three layers: the database layer; the application layer; and the presentation layer. These layers may run on the same or on different physical machines. The database layer contains the relational database and the database software. The application layer contains the instance or instances of the system. All application processes, including the business transactions and the ABAP development, run on the application layer. The presentation layer handles the interaction with users of the system. Online access to ABAP application servers can go via a proprietary graphical interface, which is called "SAP GUI", or via a Web browser.
Transactions
A transaction in SAP terminology is the execution of a program. The normal way of executing ABAP code in the SAP system is by entering a transaction code (for instance, VA01 is the transaction code for "Create Sales Order"). Transactions can be called via system-defined or user-specific, role-based menus. They can also be started by entering the transaction code directly into a command field, which is present in every SAP screen. Transactions can also be invoked programmatically by means of the ABAP statements CALL TRANSACTION and LEAVE TO TRANSACTION. The term "transaction" must not be misunderstood here; in the context just described, a transaction simply means calling and executing an ABAP program. In application programming, "transaction" often refers to an indivisible operation on data, which is either committed as a whole or undone (rolled back) as a whole. This concept exists in SAP and is called a LUW (Logical Unit of Work). In the course of one transaction (program execution), there can be different LUWs. The ABAP Workbench itself is invoked with transaction code SE80, which bundles all ABAP development-related activities.[citation needed]
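As a small illustration of the two statements just mentioned, the fragment below invokes the "Create Sales Order" transaction from ABAP code; VA01 is the transaction code named above, while the surrounding program is hypothetical:

* Call VA01 and return to this program when the user leaves it
CALL TRANSACTION 'VA01'.

* Alternatively, terminate this program and start VA01 in its place
LEAVE TO TRANSACTION 'VA01'.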
Types of ABAP programs
As in other programming languages, an ABAP program is either an executable unit or a library, which provides reusable code to other programs and is not independently executable. ABAP distinguishes two types of executable programs:
• Reports
• Module pools
Reports follow a relatively simple programming model whereby a user optionally enters a set of parameters (e.g., a selection over a subset of data) and the program then uses the input parameters to produce a report in the form of an interactive list. The term "report" can be somewhat misleading in that reports can also be designed to modify data; the reason why these programs are called reports is the "list-oriented" nature of the output they produce. Module pools define more complex patterns of user interaction using a collection of screens. The term "screen" refers to the actual, physical image that the user sees. Each screen also has a "flow logic", which refers to the ABAP code implicitly invoked by the screens. Each screen has its own flow logic, which is divided into a "PBO" (Process Before Output) and "PAI" (Process After Input) section. In SAP documentation the term "dynpro" (dynamic program) refers to the combination of the screen and its flow logic. The non-executable program types are:
• INCLUDE modules
• Subroutine pools
• Function groups
• Object classes
• Interfaces
• Type pools
An INCLUDE module gets included at generation time into the calling unit; it is often used to subdivide very large programs. Subroutine pools contain ABAP subroutines (blocks of code enclosed by FORM/ENDFORM statements and invoked with PERFORM). Function groups are libraries of self-contained function modules (enclosed by FUNCTION/ENDFUNCTION and invoked with CALL FUNCTION). Object classes and interfaces are similar to Java classes and interfaces; the first define a set of methods and attributes, the second contain "empty" method definitions, for which any class implementing the interface must provide explicit code. Type pools define collections of data types and constants.
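A minimal sketch of the subroutine mechanism described above (the program name, subroutine name and parameter are invented for illustration):

REPORT ZDEMO_SUBROUTINE.   "hypothetical program name

PERFORM SHOW_GREETING USING 'World'.

* Subroutines are enclosed by FORM/ENDFORM and invoked with PERFORM
FORM SHOW_GREETING USING P_NAME.
  WRITE: / 'Hello', P_NAME.
ENDFORM.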
ABAP Workbench
The ABAP Workbench contains different tools for editing programs. The most important of these are (transaction codes are shown in parentheses):
• ABAP Editor for writing and editing reports, module pools, includes and subroutine pools (SE38)
• ABAP Dictionary for processing database table definitions and retrieving global types (SE11)
• Menu Painter for designing the user interface (menu bar, standard toolbar, application toolbar, function key assignment) (SE41)
• Screen Painter for designing screens and flow logic (SE51)
• Function Builder for function modules (SE37)
• Class Builder for ABAP Objects classes and interfaces (SE24)
The Object Navigator (transaction SE80) provides a single integrated interface into these various tools.
ABAP Dictionary
The ABAP Dictionary contains all metadata about the data in the SAP system. It is closely linked with the ABAP Workbench in that any reference to data (e.g., a table, a view, or a data type) will be obtained from the dictionary. Developers use the ABAP Dictionary transactions (directly or through the SE80 Object Navigator inside the ABAP Workbench) to display and maintain this metadata. When a dictionary object is changed, a program that references the changed object will automatically reference the new version the next time the program runs. Because ABAP is interpreted, it is not necessary to recompile programs that reference changed dictionary objects. A brief description of the most important types of dictionary objects follows:
• Tables are data containers that exist in the underlying relational database. In the majority of cases there is a 1-to-1 relationship between the definition of a table in the ABAP Dictionary and the definition of that same table in the database (same name, same columns). These tables are known as "transparent". There are two types of non-transparent tables: "pooled" tables exist as independent entities in the ABAP Dictionary but they are grouped together in large physical tables ("pools") at the database level. Pooled tables are often small tables holding, for example, configuration data. "Clustered" tables are physically grouped in "clusters" based on their primary keys; for instance, assume that a clustered table H contains "header" data about sales invoices, whereas another clustered table D holds the invoice line items. Each row of H would then be physically grouped with the related rows from D inside a "cluster table" in the database. This type of clustering, which is designed to improve performance, also exists as native functionality in some, though not all, relational database systems.
• Indexes provide accelerated access to table data for often-used selection conditions. Every SAP table has a "primary index", which is created implicitly along with the table and is used to enforce primary key uniqueness. Additional indexes (unique or non-unique) may be defined; these are called "secondary indexes".
• Views have the same purpose as in the underlying database: they define subsets of columns (and/or rows) from one or, using a join condition, several tables. A view is a virtual table that does not physically contain data; because it stores only a definition, it takes up very little space in the database.
• Structures are complex data types consisting of multiple fields (comparable to struct in C/C++).
• Data elements provide the semantic content for a table or structure field. For example, dozens of tables and structures might contain a field giving the price (of a finished product, raw material, resource, ...). All these fields could have the same data element "PRICE".
ABAP • Domains define the structural characteristics of a data element. For example, the data element PRICE could have an assigned domain that defines the price as a numeric field with two decimals. Domains can also carry semantic content in providing a list of possible values. For example, a domain "BOOLEAN" could define a field of type "character" with length 1 and case-insensitive, but would also restrict the possible values to "T" (true) or "F" (false). • Search helps (successors to the now obsolete "matchcodes") provide advanced search strategies when a user wants to see the possible values for a data field. The ABAP runtime provides implicit assistance (by listing all values for the field, e.g. all existing customer numbers) but search helps can be used to refine this functionality, e.g. by providing customer searches by geographical location, credit rating, etc. • Lock objects implement application-level locking when changing data.
ABAP syntax
This brief description of the ABAP syntax begins, inevitably, with the ubiquitous "Hello World" program.
"Hello World" REPORT TEST. WRITE 'Hello World'. This example contains two statements: REPORT and WRITE. The program displays a list on the screen. In this case, the list consists of the single line "Hello World". The REPORT statement indicates that this program is a report. An alternative statement, PROGRAM, would be used for a module pool.
Chained statements
Consecutive statements with an identical first (leftmost) part can be combined into a "chained" statement using the chain operator ":" (colon). The common part of the statements is written to the left of the colon, the differing parts are written to the right of the colon and separated by commas. The colon operator is attached directly to the preceding token, without a space (the same applies to the commas in the token list that follows, as can be seen in the examples below). Chaining is very often used in WRITE statements. WRITE accepts just one argument, so if for instance you wanted to display three fields from a structure called FLIGHTINFO, you would have to code:

WRITE FLIGHTINFO-CITYFROM.
WRITE FLIGHTINFO-CITYTO.
WRITE FLIGHTINFO-AIRPTO.

Chaining the statements results in a more readable and more intuitive form:

WRITE: FLIGHTINFO-CITYFROM, FLIGHTINFO-CITYTO, FLIGHTINFO-AIRPTO.

In a chain statement, the first part (before the colon) is not limited to the statement name alone. The entire common part of the consecutive statements can be placed before the colon. Example:

REPLACE 'A' WITH 'B' INTO LASTNAME.
REPLACE 'A' WITH 'B' INTO FIRSTNAME.
REPLACE 'A' WITH 'B' INTO CITYNAME.

could be rewritten in chained form as:

REPLACE 'A' WITH 'B' INTO: LASTNAME, FIRSTNAME, CITYNAME.
Comments
ABAP has two ways of defining text as a comment:
• An asterisk (*) in the leftmost column of a line makes the entire line a comment
• A double quotation mark (") anywhere on a line makes the rest of that line a comment
Example:

***************************************
** Program: BOOKINGS                 **
** Author: Joe Byte, 07-Jul-2007     **
***************************************
REPORT BOOKINGS.

* Read flight bookings from the database
SELECT * FROM FLIGHTINFO
  WHERE CLASS = 'Y'      "Y = economy
     OR CLASS = 'C'.     "C = business
(...)
Data types and variables
ABAP provides a set of built-in data types. In addition, every structure, table, view or data element defined in the ABAP Dictionary can be used to type a variable. Also, object classes and interfaces can be used as types. The built-in data types are:

| Type | Description |
|---|---|
| I | Integer (4-bytes) |
| P | Packed decimal |
| F | Floating point |
| N | Character numeric |
| C | Character |
| D | Date |
| T | Time |
| X | Hexadecimal (raw byte) |
| STRING | Variable-length string |
| XSTRING | Variable-length raw byte array |
Date variables or constants (type D) contain the number of days since January 1, 1 AD. Time variables or constants (type T) contain the number of seconds since midnight. A special characteristic of both types is that they can be accessed both as integers and as character strings (with internal format "YYYYMMDD" for dates and "hhmmss" for times), which makes date/time handling very easy. For example, the code snippet below calculates the last day of the previous month (note: SY-DATUM is a system-defined variable containing the current date):

DATA LAST_EOM TYPE D.    "last end-of-month date

* Start from today's date
LAST_EOM = SY-DATUM.
* Set characters 6 and 7 (0-relative) of the YYYYMMDD string to "01",
* giving the first day of the current month
LAST_EOM+6(2) = '01'.
* Subtract one day
LAST_EOM = LAST_EOM - 1.

WRITE: 'Last day of previous month was', LAST_EOM.

All ABAP variables must be explicitly declared in order to be used. Normally all declarations are placed at the top of the code module (program, subroutine, function) before the first executable statement; this placement is a convention and not an enforced syntax rule. The declaration consists of the name, type, length (where applicable), additional modifiers (e.g. the number of implied decimals for a packed decimal field) and optionally an initial value:

* Primitive types:
DATA: COUNTER      TYPE I,
      VALIDITY     TYPE I VALUE 60,
      TAXRATE(3)   TYPE P DECIMALS 1,
      LASTNAME(20) TYPE C,
      DESCRIPTION  TYPE STRING.

* Dictionary types:
DATA: ORIGIN       TYPE COUNTRY.

* Internal tables:
DATA: T_FLIGHTS    TYPE TABLE OF FLIGHTINFO,
      T_LOOKUP     TYPE HASHED TABLE OF FLT_LOOKUP.

* Objects:
DATA: BOOKING      TYPE REF TO CL_FLT_BOOKING.

Notice the use of the colon to chain together consecutive DATA statements.
ABAP Objects
The ABAP language supports object-oriented programming through a feature known as "ABAP Objects".[3] This helps to simplify applications and make them more controllable. ABAP Objects is fully compatible with the existing language, so one can use existing statements and modularization units in programs that use ABAP Objects, and can also use ABAP Objects in existing ABAP programs. Syntax checking is stronger in ABAP Objects programs, and some syntactical forms (usually older ones) of certain statements are not permitted.
ABAP statements – an overview
In contrast with languages like C/C++ or Java, which define a limited set of language-specific statements and provide most functionality via libraries, ABAP contains an extensive body of built-in statements. These statements often support many options, which explains why ABAP programs look "verbose", especially when compared with programs written in C, C++ or Java. This section lists some of the most important statements in the language, subdivided by function. Both the statements listed here and the subdivision used are fairly arbitrary and by no means exhaustive.
Declarative statements
These statements define data types or declare data objects which are used by the other statements in a program or routine. The collected declarative statements in a program or routine make up its declaration part. Examples of declarative statements: TYPES, DATA, CONSTANTS, PARAMETERS, SELECT-OPTIONS, TABLES
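The following sketch shows a typical declaration part of a report combining several of these statements; the program and variable names are invented, and the field references assume the standard flight-demo table SFLIGHT:

REPORT ZDEMO_DECLARATIONS.              "hypothetical program name

TABLES SFLIGHT.                         "dictionary structure, needed for SELECT-OPTIONS

CONSTANTS C_MAX   TYPE I VALUE 100.     "fixed value
DATA      G_COUNT TYPE I.               "global variable

PARAMETERS     P_CARR TYPE SFLIGHT-CARRID DEFAULT 'LH'.   "single input field
SELECT-OPTIONS S_DATE FOR SFLIGHT-FLDATE.                 "range input field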
Modularization statements
These statements define the processing blocks in an ABAP program. The modularization statements can be further divided into event statements and defining statements:

Event statements
These are used to define the beginning of event processing blocks. There are no special statements to mark the end of such blocks - they end when the next processing block is introduced. Examples of event keywords are: LOAD-OF-PROGRAM, INITIALIZATION, AT SELECTION-SCREEN OUTPUT, AT SELECTION-SCREEN ON field, AT SELECTION-SCREEN ON BLOCK, AT SELECTION-SCREEN, START-OF-SELECTION, END-OF-SELECTION, AT USER-COMMAND, AT LINE-SELECTION, GET, GET LATE

Defining statements
These statements delineate callable code units such as subroutines, function modules and methods. The statement marking the end of the unit has the name of the opening statement prefixed with "END". Examples of defining keywords: FORM ... ENDFORM, FUNCTION ... ENDFUNCTION, MODULE ... ENDMODULE, METHOD ... ENDMETHOD
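Putting event statements and defining statements together, a classic report skeleton looks roughly like this; the names and the processing itself are illustrative only:

REPORT ZDEMO_EVENTS.     "hypothetical program name

PARAMETERS P_NAME(20) TYPE C.

INITIALIZATION.          "event: runs before the selection screen is shown
  P_NAME = 'WORLD'.

START-OF-SELECTION.      "event: main processing block
  PERFORM WRITE_GREETING.

END-OF-SELECTION.        "event: runs after the main processing
  WRITE: / 'Done.'.

* Defining statement: a callable unit, closed with the END-prefixed keyword
FORM WRITE_GREETING.
  WRITE: / 'Hello', P_NAME.
ENDFORM.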
Control statements
These statements control the flow of the program within a processing block. Statements controlling conditional execution are:

IF ... ELSEIF ... ELSE ... ENDIF
CASE ... WHEN ... ENDCASE
CHECK

The CHECK statement verifies a condition and exits the current processing block (e.g. loop or subroutine) if the condition is not satisfied. Several statements exist to define a loop:
DO ... ENDDO
WHILE ... ENDWHILE
LOOP ... ENDLOOP

DO/ENDDO defines an unconditional loop. An exit condition (typically in the form "IF <condition>. EXIT. ENDIF.") must be provided inside the body of the loop. A variant (DO n TIMES) sets the number of times the loop body is executed as the exit condition. WHILE/ENDWHILE defines a conditional loop; the condition is tested at the beginning of the loop. LOOP/ENDLOOP loops over the lines of an internal table; the loop ends after processing the last line of the internal table.
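The three loop forms side by side, as a brief sketch (T_FLIGHTS follows the declarations used earlier in this article; WA_FLIGHT is an invented work area):

DATA: COUNT     TYPE I VALUE 0,
      WA_FLIGHT TYPE FLIGHTINFO.

DO 5 TIMES.                         "fixed number of iterations
  COUNT = COUNT + 1.
ENDDO.

WHILE COUNT > 0.                    "condition tested before each pass
  COUNT = COUNT - 1.
ENDWHILE.

LOOP AT T_FLIGHTS INTO WA_FLIGHT.   "one pass per internal-table line
  WRITE: / WA_FLIGHT-CITYFROM, WA_FLIGHT-CITYTO.
ENDLOOP.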
Call statements
These statements call processing blocks defined using the corresponding modularization statements. The blocks can either be in the same ABAP program or in a different program. Examples of call keywords: PERFORM, CALL METHOD, CALL TRANSACTION, CALL SCREEN, SUBMIT, LEAVE TO TRANSACTION, CALL FUNCTION
Operational statements
These statements retrieve or modify the contents of variables. A first group of operational statements assign or change a variable:

MOVE, ADD, SUBTRACT, DIVIDE

These statements, whose syntax originates in COBOL, can be written in a shorter form that uses operators rather than keywords:

MOVE LASTNAME TO RECIPIENT.
* is equivalent to
RECIPIENT = LASTNAME.

ADD TAX TO PRICE.
* is equivalent to
PRICE = PRICE + TAX.

Examples of operational statements on character strings:

SEARCH, REPLACE, CONCATENATE, CONDENSE

Database access statements (Open SQL):

SELECT, INSERT, UPDATE, DELETE, MODIFY

Statements working on internal tables (notice that some "SQL" statements can also be used here):

READ TABLE, LOOP AT, INSERT, DELETE, MODIFY, SORT, DELETE ADJACENT DUPLICATES, APPEND, CLEAR, REFRESH, FREE
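A short sketch of the character-string statements listed above; the variables and values are invented for illustration:

DATA: FIRSTNAME(20) TYPE C VALUE 'John',
      LASTNAME(20)  TYPE C VALUE 'Smith',
      FULLNAME      TYPE STRING.

CONCATENATE FIRSTNAME LASTNAME INTO FULLNAME SEPARATED BY SPACE.
REPLACE 'John' WITH 'Jane' INTO FULLNAME.   "replaces the first occurrence
CONDENSE FULLNAME.                          "squeezes out superfluous blanks
WRITE FULLNAME.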
Internal tables in ABAP
Internal tables are an extremely important feature of the ABAP language. An internal table is comparable to a vector of structs in C++ or a vector of objects in Java. The main difference from these languages is that ABAP provides a collection of statements to easily access and manipulate the contents of internal tables. Note that ABAP does not support arrays; the only way to define a multi-element data object is to use an internal table.[citation needed]
Internal tables are a way to store variable datasets of a fixed structure in the working memory of ABAP, and they provide the functionality of dynamic arrays. The data is stored on a row-by-row basis, where each row has the same structure. Internal tables are preferably used to store and format the content of database tables from within a program. Furthermore, internal tables in connection with structures are the most important means of defining very complex data structures in an ABAP program. The following example defines an internal table with two fields with the format of database table VBRK.

Obsolete way:

* Define internal table with header line
DATA : BEGIN OF I_VBRK OCCURS 0,
         VBELN LIKE VBRK-VBELN,
         ZUONR LIKE VBRK-ZUONR,
       END OF I_VBRK.

Current way (from about version 4.6 and up):

* First define structured type
TYPES: BEGIN OF t_vbrk,
         VBELN TYPE VBRK-VBELN,
         ZUONR TYPE VBRK-ZUONR,
       END OF t_vbrk.

* Now define internal table of our defined type t_vbrk
DATA : gt_vbrk   TYPE STANDARD TABLE OF t_vbrk,
       gt_vbrk_2 TYPE STANDARD TABLE OF t_vbrk.  "easy to define more tables

* If needed, define structure (line of internal table)
DATA : gs_vbrk TYPE t_vbrk.

* You can also define table type if needed
TYPES tt_vbrk TYPE STANDARD TABLE OF t_vbrk.
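Building on the tables just defined, here is a brief sketch of the dedicated statements for filling and reading an internal table; the values are invented:

* Fill the table one row at a time
gs_vbrk-VBELN = '0090000001'.
gs_vbrk-ZUONR = 'DEMO'.
APPEND gs_vbrk TO gt_vbrk.

* Sequential processing
LOOP AT gt_vbrk INTO gs_vbrk.
  WRITE: / gs_vbrk-VBELN, gs_vbrk-ZUONR.
ENDLOOP.

* Single-row access by key; SY-SUBRC signals success
READ TABLE gt_vbrk INTO gs_vbrk WITH KEY VBELN = '0090000001'.
IF SY-SUBRC = 0.
  WRITE: / 'Found:', gs_vbrk-ZUONR.
ENDIF.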
References
[1] http://scn.sap.com/community/abap
[2] "ABAP History". SAP-technical.com (http://www.sap-technical.com/content/abap/1ABAP History.htm)
[3] "Classes". SAP NetWeaver 7.0. (http://help.sap.com/saphelp_nw70/helpdata/en/c3/225b5c54f411d194a60000e8353423/frameset.htm) accessed 10 August 2009.
[4] Online ABAP training (http://onlinewebdynprotraining.com/)
External links
• SAP Help Portal (http://help.sap.com)
• ABAP Development (http://scn.sap.com/community/abap) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome)
• ABAP Objects (http://help.sap.com/saphelp_nw2004s/helpdata/en/ce/b518b6513611d194a50000e8353423/frameset.htm)
• ABAP (http://www.dmoz.org/Computers/Software/ERP/SAP/Programming/) at the Open Directory Project
• Learn ABAP (http://onlinewebdynprotraining.com/)
SAP AG
• Type: Aktiengesellschaft
• Traded as: FWB: SAP[1], NYSE: SAP[2], ISIN: DE0007164600[3]
• Industry: Enterprise software
• Founded: Weinheim, Germany (1972)
• Founder(s): Dietmar Hopp, Hans-Werner Hector, Hasso Plattner, Klaus Tschira, Claus Wellenreuther
• Headquarters: Walldorf, Germany
• Area served: Worldwide
• Key people: Hasso Plattner (Chairman), Jim H. Snabe (Co-CEO), Bill McDermott (Co-CEO)
• Products: See list of SAP products.
• Revenue: €16.22 billion (2012)
• Operating income: €4.064 billion (2012)
• Profit: €2.826 billion (2012)
• Total assets: €26.87 billion (2012)
• Total equity: €14.16 billion (2012)
• Employees: 61,344 (2012)
• Website: SAP.com[4]
SAP AG is a German multinational software corporation that makes enterprise software to manage business operations and customer relations. Headquartered in Walldorf, Baden-Württemberg, Germany, with regional offices around the world, SAP is the market leader in enterprise applications in terms of software and software-related service revenue.[5] The company's best-known software products are its enterprise resource planning application systems and management (SAP ERP), its enterprise data warehouse product – SAP Business Warehouse (SAP BW), SAP BusinessObjects software, and most recently, Sybase mobile products and the in-memory computing appliance SAP HANA. SAP is one of the largest software companies in the world.
History

Foundation
When Xerox decided to exit the computer industry, it asked IBM to migrate its business systems to IBM technology. As part of IBM's compensation for the migration, IBM was given the rights to the SDS/SAPE software, reportedly for a contract credit of $80,000. Five IBM engineers from the AI department[6] (Dietmar Hopp, Klaus Tschira, Hans-Werner Hector, Hasso Plattner, and Claus Wellenreuther, all from Mannheim, Baden-Württemberg) were working on an enterprise-wide system based on this software, only to be told that it would no longer be necessary. Rather than abandon the project, they decided to leave IBM and start another company.[7] They reportedly exchanged an 8% stockholding for the rights to the SAPE software as part of the deal.[citation needed] In June 1972 they founded Systemanalyse und Programmentwicklung ("System Analysis and Program Development") as a private partnership under the German Civil Code.[7] The acronym was later changed to stand for Systeme, Anwendungen und Produkte in der Datenverarbeitung ("Systems, Applications and Products in Data Processing"). Their first client was the German branch of Imperial Chemical Industries[8] in Östringen, where they developed mainframe programs for payroll and accounting. Instead of storing the data on punch cards mechanically, as IBM did, they stored it locally. Therefore, they called their software a real-time system, since there was no need to process the punch cards overnight (for this reason their flagship product carried an R in its name until the late 1990s). This first version was also standalone software that could be offered to other interested parties.
The ERP
In 1973, their first commercial product was launched. SAP R/1, as it was called, offered a common system for multiple tasks. This permitted the use of centralized data storage, improving the maintenance of the data. From a technical point of view, therefore, a database was necessary.[9]
In 1976, "SAP GmbH" was founded, and the company moved its headquarters to Walldorf the following year. SAP AG became the company's official name after the 2005 annual general meeting; AG is short for Aktiengesellschaft (corporation). Three years later, in 1979, SAP launched SAP R/2, expanding the capabilities of the system to other areas, such as material management and production planning. In 1981, SAP brought a re-designed product to market; however, SAP R/2 saw little further improvement until between 1985 and 1990. SAP developed and released several versions of R/3 from 1992 through 1995. By the mid-1990s, SAP had followed the trend from mainframe computing to client/server architectures. The development of SAP's internet strategy with mySAP.com redesigned the concept of business processes (integration via the Internet).[7] SAP was named one of Industry Week's Best Managed Companies in 1999.[10]
Corporate restructuring
In August 1988, SAP GmbH was converted into SAP AG (a corporation under German law), and public trading started on 4 November. Shares were listed on the Frankfurt and Stuttgart stock exchanges.[7] In 1995, SAP was included in the German stock index DAX, and on 22 September 2003 in the Dow Jones STOXX 50.[11]
In November 2010, SAP lost a $1.3 billion intellectual property lawsuit (related to the actions of the SAP subsidiary TomorrowNow) to Oracle Corporation, cited as the largest software piracy judgment in history.[12] SAP filed post-trial motions to lower the damages awarded to Oracle and stated it might also file an appeal.[13] On 9 September 2011, the verdict was overturned by Judge Phyllis J. Hamilton, who called the penalty "grossly excessive."[14]
Acquisitions
In 2008, SAP acquired Business Objects, a business intelligence company, and added its products to its portfolio. In 2010, SAP acquired Sybase, then the largest business software and services provider specializing in information management and mobile data use. In December 2011, SAP AG agreed to buy SuccessFactors Inc. for $3.4 billion in cash, 52 percent more than the share closing price on 2 December 2011; the acquisition was intended to make SAP AG more competitive with Oracle Corp. in the cloud computing market.[15] In May 2012, SAP AG announced the acquisition of the Sunnyvale, California-based supply chain network operator Ariba Inc. for an estimated $4.3 billion, offering $45 a share. The acquisition was expected to be completed in the third quarter of 2012, subject to approval by Ariba shareholders and regulators.[16]
Partnerships with educational institutions
Through its University Alliance Program, launched in 1988, SAP donates licenses to over 1,200 UAP member institutions and fully outfits their professors to provide students with in-depth, hands-on experience with SAP products. Currently more than 250,000 students around the world have access to an SAP system through the SAP University Alliances.
Business and markets
SAP is the world's largest business software company and the third-largest independent software provider by revenue (as of 2007).[17] It operates in four geographic regions: EMEA (Europe, Middle East, Africa); America (United States and Canada); LAC (Latin America and Caribbean); and APJ (Asia Pacific and Japan), which covers Japan, Korea, Australia and New Zealand, India, Greater China, and the countries of Southeast Asia. In addition, SAP operates a network of 115 subsidiaries and has research and development (R&D) facilities in Germany, India, the US, Canada, France, Brazil, Turkey, China, Hungary, Israel, Ireland and Bulgaria.
[Image caption: SAP AG headquarters, Walldorf.]
SAP focuses on six industry sectors: process industries, discrete industries, consumer industries, service industries, financial services, and public services.[18] It offers integrated product sets for large enterprises,[19] midsize companies, and small businesses.[20]
Enterprise Service-Oriented Architecture
Service-oriented architecture moves the ERP (enterprise resource planning) landscape toward software- and web services-based business activities. This move increases adaptability, flexibility, openness and efficiency. The move towards E-SOA helps companies reuse software components and rely less on in-house ERP hardware technologies, which makes ERP adoption more attractive to small and mid-sized companies.
E-SOA Authentication
In SAP E-SOA, client certificate-based authentication is the only authentication method (besides username/password), and the only Single Sign-On method, supported across all SAP technologies. Kerberos and logon tickets, for example, are not compatible with SAP's service-oriented architecture.[21]
Products
SAP's products focus on enterprise resource planning (ERP). The company's main product is SAP ERP; the current version is SAP ERP 6.0, part of the SAP Business Suite. Its previous name was R/3. The "R" of SAP R/3 stood for realtime, and the number 3 referred to its 3-tier architecture: database, application server and client (SAPgui). R/2, which ran on a mainframe architecture, was the predecessor of R/3. Before R/2 came System RF, later dubbed R/1.
SAP ERP is one of five enterprise applications in SAP's Business Suite. The other four applications are:
• Customer Relationship Management (CRM) – helps companies acquire and retain customers and gain marketing and customer insight
• Product Lifecycle Management (PLM) – helps manufacturers with product-related information
• Supply Chain Management (SCM) – helps companies with the process of resourcing their manufacturing and service processes
• Supplier Relationship Management (SRM) – enables companies to procure from suppliers
Other major product offerings include the NetWeaver platform; Governance, Risk and Compliance (GRC) software; Duet (a joint offering with Microsoft); performance management software; and RFID. SAP offers service-oriented architecture capabilities (which it calls Enterprise SOA) in the form of web services wrapped around its applications.
While its original products were typically used by Fortune 500 companies,[citation needed] SAP now actively targets small and medium-sized enterprises (SMEs) with SAP Business One and SAP Business All-in-One. On 19 September 2007 SAP announced a new product named SAP Business ByDesign, a software-as-a-service (SaaS) offering that provides fully integrated enterprise resource planning (ERP) software on demand. SAP Business ByDesign was previously known under the code name "A1S".[22]
In October 2007, SAP AG announced the friendly takeover of Business Objects. This acquisition expanded SAP's product suite of business intelligence (BI) software and increased the customer installed base to 89,000.[23] In February 2009 SAP AG, which had invested in Coghead, purchased the start-up's intellectual property; SAP uses the company's technology only as an internal resource and has no plans to offer Coghead's products to its customers.[24] In May 2010 SAP AG announced that it was buying the database software maker Sybase for US$5.8 billion in cash.[25] The deal closed at the end of July 2010; Sybase continues to run as a separate, independent unit but is leveraged across the other SAP areas. As of July 2010 TechniData is a 100% subsidiary of SAP AG.[26] In October 2010, SAP AG announced the release of SAP HANA 1.0 (High-performance Analytics Appliance), an in-memory appliance for business intelligence allowing real-time analytics.
SAP Enterprise Learning (environment) is an enhancement of the previous version of the learning management system, SAP Learning Solution 600. Apart from the features in SAP Learning Solution 600, SAP Enterprise Learning (environment) contains a virtual learning room feature powered by Adobe Connect.[27] SAP officials say there are over 100,600 SAP installations serving more than 41,200 companies in more than 25 industries in more than 120 countries.[28] SAP Press has published a book on SAP Enterprise Learning.[29] SAP Human Resources Management System is one of the largest modules in the SAP R/3 system; it consists of many sub-modules that assist with tasks of human resource management.
Partnerships
SAP partners include Global Services Partners with cross-industry multinational consulting capabilities,[30] Global Software Partners providing integrated products that complement SAP Business Suite solutions,[31] and Global Technology Partners providing user companies with a wide range of products to support SAP technology, including vendors of hardware, database, storage systems, networks, and mobile computing technology.[32] Extension partners are a small number of companies that provide functionality complementing SAP product capabilities. These enhancements meet high quality standards and are certified, sold and supported by SAP directly. They include Adobe, CA Technologies, Hewlett-Packard, IDS Scheer, OpenText, Redwood Software, Vistex Inc., Nakisa Inc., ICON-SCM, Prometheus Group and SmartOps.[33][34]
SAP PartnerEdge
SAP products for small businesses and midsize companies are delivered through its global partner network. In 2008, SAP signed a Global Services partnership with HCL Technologies, a $6 billion technology services provider headquartered in India.[35] SAP PartnerEdge has also signed with ENFOS, Inc., a software-as-a-service company, to develop the EcoHub Partner Sustainability Solution platform. The SAP PartnerEdge program, SAP's partner program, offers a set of business enablement resources and program benefits to help partners, including value-added resellers (VARs) and independent software vendors (ISVs), be profitable and successful in implementing, selling, marketing, developing and delivering SAP products to a broad range of customers.[36] Gartner states that SAP PartnerEdge has "set a new standard for innovation in channel development for the small and midsize business application market."[citation needed]
Communities
SAP Community Network (SCN) is a community of SAP customers, partners, employees, and influencers – typically in roles such as developers, consultants, integrators, and business analysts – who gain and share knowledge about ABAP, Java, .NET, SOA, and other technologies, plus analytics and dashboards, business process best practices, cloud, mobile, big data, and a range of other topics via expert blogs, discussion forums, exclusive downloads and code samples, training materials, and a technical library.[37] SAP Community Network has more than 2.5 million members, representing a wide range of roles and lines of business, from countries and territories all over the world, in 24 industries. SCN (scn.sap.com[38]) is widely viewed as a best practice in social networking for business.
Organization
Functional units of SAP are split across different organizational units for R&D, field activities and customer support. SAP Labs are mainly responsible for product development, whereas the field organizations, spread across each country, are responsible for field activities such as sales, marketing and consulting. The head office is responsible for overall management as well as core engineering activities related to product development. SAP customer support, also called Active Global Support (AGS), is a global organization providing support to SAP customers worldwide.
SAP Labs locations
SAP Labs is the research and development organization of the parent company.[39] SAP has its development organization spread across the globe. As of January 2011, some, but not all, labs locations host SAP Research groups.[40] The labs are located in Germany, France, Ireland, Bulgaria and Hungary in Europe; Palo Alto, USA; Bangalore and Gurgaon, India; São Leopoldo, Brazil; Ra'anana and Karmiel, Israel; Montreal and Vancouver, Canada; and Shanghai, China. SAP Labs India[41] is the largest development unit, in terms of number of employees, outside the SAP headquarters in Walldorf, Germany.[42]
Each SAP Lab has a prominent area of expertise and focus. SAP Labs in Sophia Antipolis, France, for example, specializes in the development of Java-based SAP software products, whereas SAP Labs in Palo Alto, California, is known for its focus on innovation and research. In June 2009 SAP opened its new SAP Labs campus in Brazil, the first SAP Labs center in Latin America and the eighth worldwide. The facility is located in São Leopoldo, in the state of Rio Grande do Sul, and employs 520 people.[43]
[Image caption: Werner Brandt, Stefan Wagner, Luis César Verdi and Gerhard Oswald celebrating the expansion of SAP Labs Latin America.]
Of particular note are the building's structure and interior, which are composed entirely of environmentally friendly materials. Since these materials were not available in Brazil, constructing the facility did not come cheap for SAP. The project was an effort not only to create a "green house" in Latin America, but also to design offices with a pleasant work atmosphere. SAP Labs Latin America has received Leadership in Energy and Environmental Design (LEED) Gold certification for the building.
User groups
User groups are independent, not-for-profit organizations of SAP customer companies and partners within the SAP ecosystem that provide education to their members, influence SAP product releases and direction, exchange best practices, and provide insight into market needs. Examples are the Americas' SAP Users' Group (ASUG),[44] the German-speaking SAP User Group (DSAG),[45] the SAP Australian User Group (SAUG),[46] the User Community of SAP Customers in the Indian Subcontinent,[47] and the SAP UK & Ireland User Group.[48][49] Further SAP user groups can be found in the List of SAP Users' Groups. In 2007, the SAP User Group Executive Network (SUGEN) was established to foster information exchange and best-practice sharing among SAP user groups and to coordinate collaboration with SAP on strategic topics.[50]
SAP also collaborates extensively with developers, partners and customers online via the SAP Community Network.
Conferences
SAP has two annual conferences: SAPPHIRE and SAP TechEd. SAPPHIRE is SAP's customer-facing event,[51][52] and is generally where SAP announces major product changes and strategic direction. It is typically held annually in the spring, in both North America and Europe. SAP TechEd is the more technical conference, aimed at SAP's ecosystem of consultants and software development partners.[53][54] SAP TechEd has been held since 1995 and is usually held in four locations around the world every year in the fall. Technical sessions and workshops are held during the conference, as well as Birds of a Feather sessions and a developers' competition, DemoJam. An associated one-day unconference event, Community Day, was initiated in 2006 for the SAP Developer Network (SDN), now referred to as SAP Community Network (SCN). In 2008 a Community Day program was added to address the Business Process Expert (BPX) community, which has begun to evolve into new events such as InnoJAM and CodeJAM. SAP Inside Track events are popular grassroots-level unconferences held locally in various global locations.
Competitive landscape
SAP's competitors are primarily in the enterprise resource planning software industry, where Oracle Corporation is its major competitor. SAP also competes in the customer relationship management, marketing and sales software, manufacturing, warehousing and industrial software, and supply chain management and logistics software sectors.[55]
Oracle Corporation filed a case against SAP for malpractice and unfair competition in the California courts on 22 March 2007. In Oracle Corporation v. SAP AG, Oracle alleged that a Texas subsidiary, SAP TN (formerly TomorrowNow before its purchase by SAP), which provided discount support for legacy Oracle product lines, used the accounts of former Oracle customers to systematically download patches and support documents from Oracle's website and appropriated them for SAP's use.[56][57] SAP later admitted wrongdoing on a smaller scale than Oracle claimed in the lawsuit: it has admitted to inappropriate downloads, but denies the theft of any intellectual property.[58]
SAP claims to grow organically, in contrast to its main rival Oracle, which spent close to USD 40 billion during 2004–2010 acquiring many competitors. SAP has increased its annual profits by 370% since 2002.[59] In a departure from its usual organic growth, SAP announced in October 2007 that it would acquire Business Objects, a market leader in business intelligence software, for $6.8 billion.[60]
SAP provoked controversy and frustration among its users in 2008 by raising the cost of its maintenance contracts. The issue was the subject of intense discussion among user groups.[61] The resulting pressure saw SAP and SUGEN (SAP User Group Executive Network) agree to a major benchmarking exercise to prove the value of the new support pricing policy to customers. In December 2009, SAP delayed its Enterprise Support price rises until agreement had been reached on the benchmarks and KPIs (key performance indicators).[62] In January 2010 SAP reversed its direction on Enterprise Support and reintroduced its standard support package for customers, saying the move was "a demonstration of its commitment to customer satisfaction". The move to reinstate standard support, at 18 percent of annual license fees, "will enable all customers to choose the option that best meets their requirements," the company said.[63] SAP has also announced that it is freezing prices for existing SAP Enterprise Support contracts at the 2009 level.
SAP Endorsed Business Solutions (EBS)
One of SAP's highest partnership levels leads to a product being designated an Endorsed Business Solution (EBS). Globally, only 35 companies are SAP Endorsed Business Solution providers, and the EBS partnership is by invitation only. These companies are: Arch,[64] Aris Global,[65] CA Technologies, Conformia Development, eGain, Epic Data, ESRI, FRS Global, Greenlight, HCL, Implico, Invensys Wonderware, Financial Management Consulting, KSS Fuels, LinguaNext, Meridium, NRX Global, Open Text, Oracle, ORSoft, OSIsoft MDUS, Oversight Systems, Prologa, Questra, Quorom, RIB Software, SIGGA, softproviding, SPSS, StreamServe (Open Text), Skipper Electricals (India) Limited, TechniData, cFP, Triple Point Technology, Verint Systems, Vision Chain, Visiprise, Werum Software Systems, Wipro and zetVisions.
Causes
In 2012, SAP was listed as a partner of the (RED) campaign, together with other brands such as Nike Inc., Girl, American Express and Converse. The campaign's mission is to prevent the transmission of HIV from mother to child by 2015 (the campaign's byline is "Fighting for an AIDS Free Generation").[66] SAP is also funding a project[67] to assess and educate autistic children.[68]
References
[1] http://uk.finance.yahoo.com/lookup?s=DE0007164600
[2] http://www.boerse-frankfurt.de/en/equities/search/result?name_isin_wkn=SAP
[3] http://www.nyse.com/about/listed/lcddata.html?ticker=sap
[4] http://www.sap.com
[5] "SAP to Expand Customer-Centric Regional Events and Visibility in 2013 Instead of Holding SAPPHIRE NOW from Madrid in 2013". PR Carbon (http://www.prcarbon.com/press/sap-to-expand-customer-centric-regional-events-and-visibility-in-2013-instead-of-holding-sapphire-now-from-madrid-in-2013-1003711.html)
[6] International Directory of Company Histories, Vol. 16. St. James Press, 1997.
[8] "SAP UK – ICI Success Story" (http://www.sap.com/uk/about/success/casestudies/ici.epx). Sap.com (1 January 1999). Retrieved 24 April 2011.
[9] Vom Programmierbüro zum globalen Softwareproduzenten: Die Erfolgsfaktoren der SAP von der Gründung bis zum R/3-Boom, 1972 bis 1996 (http://chbeck.metapress.com/content/ph64048135784551)
[12] "SAP penalty in Oracle suit is excessive, analyst says" (http://www.techworld.com.au/article/369233/sap_penalty_oracle_suit_excessive_analyst_says/). Techworld. Retrieved 24 April 2011.
[13] Dignan, Larry (26 January 2011). "SAP earnings dinged by TomorrowNow trial" (http://news.cnet.com/8301-1001_3-20029601-92.html). News.cnet.com. Retrieved 24 April 2011.
[14] "Oracle Verdict Against SAP Is Overturned" (http://www.nytimes.com/2011/09/02/technology/oracle-verdict-against-sap-is-overturned.html). NYTimes.com. Retrieved 7 November 2011.
[23] "SAP Acquires Business Objects in Friendly Takeover" (http://www.sap.com/about/investor/bobj/index.epx). Sap.com. Retrieved 24 April 2011.
[30] "Global Services Partners" (http://www.sap.com/ecosystem/customers/directories/services.epx). Sap.com. Retrieved 7 July 2011.
[31] "Global Software Partners" (http://www.sap.com/ecosystem/customers/directories/software.epx). Sap.com (30 June 2009). Retrieved 7 July 2011.
[32] "Global Technology Partners" (http://www.sap.com/ecosystem/customers/directories/technology.epx). Sap.com. Retrieved 7 July 2011.
[33] "Solution Extensions Software" (http://www.sap.com/solutions/solutionextensions/index.epx). SAP. Retrieved 24 April 2011.
[35] "HCL Technologies Announces Global Services Partnership With SAP to Deliver Joint Business Value Through 'Customer Centric Ecosystem'" (http://www.sap.com/about/press/press.epx?pressid=9075). Sap.com. Retrieved 7 July 2011.
[36] "SAP Solutions for Small Businesses and Midsize Companies: Press Fact Sheet, July 2007" (http://www1.sap.com/about/press/factsheets/smb.epx). Sap.com. Retrieved 7 July 2011.
[37] "SAP – Communities" (http://www.sap.com/communities/index.epx). Sap.com. Retrieved 7 July 2011.
[38] http://scn.sap.com/welcome
[41] "SAP India – SAP Labs: Emerging Solutions from SAP's Research and Development Centre" (http://www.sap.com/india/company/saplabs/index.epx). Sap.com. Retrieved 24 April 2011.
[42] "SAP Labs: Key To SAP's Success" (http://www.sap.com/about/company/saplabs/index.epx). Sap.com. Retrieved 24 April 2011.
[43] http://events.news-sap.com/files/FINAL_Fact_Sheet_Brazil_SAP_e.pdf
[58] "SAP admits 'inappropriate' Oracle downloads" (http://business.timesonline.co.uk/tol/business/law/article2019797.ece). Times Online.
[59] "Konzerne: Einzug ins globale Dorf" (http://www.spiegel.de/spiegel/0,1518,504625,00.html). Spiegel.de (10 September 2007). Retrieved 7 July 2011.
[61] "SAP faces user wrath over price hikes" (http://www.computerworlduk.com/management/it-business/supplier-relations/news/index.cfm?newsid=10632). Computerworlduk.com. Retrieved 7 July 2011.
[62] "SAP delays Enterprise Support price rises" (http://www.computerworlduk.com/management/infrastructure/applications/news/index.cfm?newsId=17810). Computerworlduk.com. Retrieved 7 July 2011.
[63] "SAP does U-turn on Enterprise Support" (http://www.computerworlduk.com/management/infrastructure/applications/news/index.cfm?newsId=18318). Computerworlduk.com. Retrieved 7 July 2011.
[67] http://www.sap-tv.com/video/#/7522/
[68] http://learn4autism.com/
External links
• Official website (http://www.sap.com)
List of SAP products
This is a partial list of products of the enterprise software company SAP AG.

SAP Business Suite
• Customer Relationship Management (CRM)
• Enterprise Resource Planning (ERP)
• Product Lifecycle Management (PLM)
• Supply Chain Management (SCM)
• Supplier Relationship Management (SRM)
Business Solutions
• SAP Advanced Planner and Optimizer (APO)
• SAP Analytics
• SAP Apparel and Footwear Solution (AFS)
• SAP Business Information Warehouse (BW)
• SAP Business Intelligence (BI)
• SAP Catalog Content Management
• SAP Convergent Charging (CC)
• SAP Enterprise Buyer Professional (EBP)
• SAP Enterprise Learning
• SAP Portal (EP)
• SAP Exchange Infrastructure (XI) (from release 7.0 onwards renamed SAP Process Integration (SAP PI))
• Governance, Risk and Compliance (GRC)
• Enterprise Central Component (ECC)
• SAP HANA (High-performance Analytics Appliance)
• SAP Human Resource Management Systems (HRMS)
• SAP Internet Transaction Server (ITS)
• SAP Incentive and Commission Management (ICM)
• SAP Knowledge Warehouse (KW)
• SAP Manufacturing
• SAP Master Data Management (MDM)
• SAP Rapid Deployment Solutions (RDS)
• SAP Service and Asset Management
• SAP Solutions for mobile business
• SAP Solution Composer
• SAP Strategic Enterprise Management (SEM)
• SAP Test Data Migration Server (TDMS)
• SAP Training and Event Management (TEM)
• SAP NetWeaver Application Server (Web AS)
• SAP xApps
• SAP Supply Chain Performance Management (SCPM)
• SAP Sustainability Performance Management (SUPM)
Industry Solutions
• SAP for Retail (ISR)
• SAP for Utilities (ISU)
• SAP for Public Sector (IS PSCD)
• SAP for Oil & Gas (IS Oil & Gas)
• SAP for Telecommunications (IST)
• SAP for Healthcare (ISH)
Solutions for Small and Midsize Enterprises
• SAP Business One (6.2, 6.5, 2004, 2005, 2007, 8.8x)
• SAP Business ByDesign[1]
Platforms and frameworks
• SAP Enterprise Services Architecture
• SAP NetWeaver Platform
• SAP NetWeaver Portal (formerly SAP Enterprise Portal)
• SAP NetWeaver BW (formerly SAP NetWeaver BI; since SAP purchased Business Objects, the term SAP BI refers to the presentation-layer (reporting) tools, to avoid confusion with existing SAP BI solutions)
• SAP NetWeaver Visual Composer
• SAP Auto-ID Infrastructure
• SAP Composite Application Framework
• SAP NetWeaver Development Infrastructure
• SAP Business Connector (deprecated/removed from product range)
Legacy Platforms
• SAP R/2
• SAP R/3
Others
• SAP CCMS, monitoring program
• SAPgui
• eCATT
• SAP Central Process Scheduling, process automation and job scheduler
• SAP Solution Manager
References
[1] http://www.sap.com/solutions/sme/businessbydesign/overview/index.epx
SAP Knowledge Warehouse
SAP Knowledge Warehouse is SAP AG's product-line offering for knowledge management. It is a software application that facilitates the development of documentation and the delivery of that documentation to the users of a product or system. It is typically implemented alongside other SAP products (such as SAP R/3) to provide user documentation on the use of those products, although technically it could be used as a stand-alone product to provide documentation for non-SAP systems or products. SAP Knowledge Warehouse provides a basic authoring environment and relatively robust version control (including check-out/check-in functionality), but it is more a document repository than a true knowledge management product.
SAP Knowledge Warehouse allows documentation to be provided to users in three possible ways:
1. Via a web server that provides access to the entire documentation set
2. Via context-sensitive help from another SAP application
3. Through the creation of stand-alone Windows Help files
One of the advantages of SAP Knowledge Warehouse is that it ships with a full set of SAP's official standard documentation for all transactions. Companies implementing Knowledge Warehouse can then modify (or "enhance") only those documents for transactions they have customized, and users can refer to the standard SAP documentation for everything else.
External links
• SAP Knowledge Warehouse (http://scn.sap.com/docs/DOC-8992) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome)
SAP HANA

Developer(s): SAP AG
Stable release: SAP HANA 1.0 SPS5 / 14 November 2012
Development status: Active
Written in: C, C++
Available in: Multi-lingual
Type: In-memory RDBMS
License: Proprietary
Website: www.saphana.com,[1] www.sap.com/hana,[2] SAP Community Network[3]
SAP HANA is SAP AG's implementation of in-memory database technology. The software group comprises the following components:
• SAP HANA DB (or HANA DB) refers to the database technology itself.
• SAP HANA Studio refers to the suite of tools provided by SAP for modeling.
• SAP HANA Appliance refers to HANA DB as delivered on partner-certified hardware (see below) as an appliance. It also includes the modeling tools from HANA Studio as well as replication and data transformation tools to move data into HANA DB.
• SAP HANA One refers to a deployment of SAP HANA certified for production use on the Amazon Web Services (AWS) cloud (see below).
• SAP HANA Application Cloud refers to the cloud-based infrastructure for delivery of applications (typically existing SAP applications rewritten to run on HANA).
HANA DB takes advantage of the low cost of main memory (RAM), the data processing abilities of multi-core processors, and the fast data access of solid-state drives relative to traditional hard drives to deliver better performance for analytical and transactional applications. It offers a multi-engine query processing environment that supports relational data (with both row- and column-oriented physical representations in a hybrid engine) as well as graph and text processing for semi-structured and unstructured data management within the same system. HANA DB is fully ACID compliant. While HANA has variously been called an acronym for HAsso's New Architecture (a reference to SAP founder Hasso Plattner) and High-performance ANalytic Appliance, HANA is a name, not an acronym.[4]
History
SAP HANA is the synthesis of three separate products: TREX, P*Time and MaxDB.
1. TREX (Text Retrieval and Extraction) is a search engine. It began in 1996 as a student project at SAP in collaboration with DFKI. TREX became a standard component in SAP NetWeaver in 2000. In-memory attributes were added in 2002 and a columnar data store in 2003, both to enhance performance.
2. In 2005 SAP acquired Menlo Park-based Transact in Memory, Inc.[5] With the acquisition came P*Time,[6] an in-memory, lightweight online transaction processing (OLTP) RDBMS technology with a row-based data store.
3. MaxDB (formerly SAP DB), a relational database that came to SAP from Nixdorf via Software AG (Adabas D), was added to TREX and P*Time to provide persistence and more traditional database features like backup.
In 2008, SAP CTO Vishal Sikka wrote about HANA: "...our teams working together with the Hasso Plattner Institute and Stanford University demonstrated how a new application architecture is possible, one that enables real-time complex analytics and aggregation, up to date with every transaction, in a way never thought possible in financial applications". In 2009 a development initiative was launched at SAP to integrate the three technologies above into a more comprehensive feature set. The resulting product was named, internally and externally, NewDB until the change to HANA DB was finalized in 2011.
SAP HANA is not SAP's first in-memory product. Business Warehouse Accelerator (BWA, formerly termed BIA) was designed to accelerate queries by storing BW InfoCubes in memory. It was followed in 2009 by Explorer Accelerated, in which SAP combined the Explorer BI tool with BWA as a tool for performing ad-hoc analyses. Other SAP products using in-memory technology were CRM Segmentation, ByDesign (for analytics) and Enterprise Search (for role-based search on structured and unstructured data). All of these were based on the TREX engine. Taking a different approach, Advanced Planning and Optimization (APO) used liveCache for its analytics.
Versions, service packs
SAP co-founder (and chairman of the SAP supervisory board as of 2012[7]) Hasso Plattner advocated a "versionless" system for releases. The support packages to date have been:
• SP0 – released 20 November 2010; HANA's first public release
• SP1 – released 20 June 2011; HANA general availability (GA); focus as an operational data mart
• SP2 – released 27 June 2011; more data mart functions
• SP3 (a.k.a. HANA 1.5) – released 7 November 2011; focus on HANA as the underlying database for Business Warehouse (BW); also named Project Orange
• SP4 – Q2 2012; resolved a variety of stability issues and added new features for BW, according to SAP
• SP5 – February 2013; introduced Extended Application Services (REST driver)
Market position

Big data
Big data refers to datasets that exceed the abilities of commonly used tools. While no formal size-based definition exists, such datasets typically reach terabytes (TB), petabytes (PB), or even exabytes in size. SAP has positioned HANA as its solution to big data challenges at the low end of this scale. At launch, HANA started with 1 TB of RAM supporting up to 5 TB of uncompressed data. In late 2011, hardware with 8 TB of RAM became available, supporting up to 40 TB of uncompressed data. SAP-owned Sybase IQ, with its more mature MapReduce-like functionality, has been cited as a potentially better fit for larger datasets.[8] By May 2012, HANA was able to run on IBM servers with 100 TB of main memory; Hasso Plattner claimed that the system was big enough to run SAP's eight largest customers.[9]
Other databases marketed by SAP
SAP still offers other database products:
• MaxDB
• Sybase IQ
• Sybase ASE
• SQL Anywhere
As a database-agnostic company,[10] SAP also resells databases from vendors such as IBM, Oracle and Microsoft to sit under its ERP Business Suite.
Competition
Offering its own database solution to support its Business Suite ERP puts SAP in direct competition with some of its largest partners: IBM, Microsoft and Oracle. Among the more prominent competing products are:
• Appliances
  • Microsoft Parallel Data Warehouse[11] (Microsoft)
  • Active Enterprise Data Warehouse 5600[12] (Teradata)
  • Exadata Database Machine (Oracle)
  • Exalytics In-Memory Machine[13] (Oracle)
  • Greenplum Data Computing Appliance[14] (EMC)
  • Netezza Data Warehouse Appliance[15] (IBM)
  • Vertica Analytics Platform[16] (HP)
• In-memory database management systems
Applications

Strategic workforce planning
SAP BusinessObjects Strategic Workforce Planning (SWP) was among the first SAP applications to be redesigned to take advantage of HANA's abilities. SWP on HANA is aimed at HR executives who want to simulate workforce models in real time, taking into account turnover, retirement, hiring and other variables.[17]
Smart Meter Analytics
In September 2011 SAP released its Smart Meter Analytics tool, which helps utility companies with large smart-meter deployments manage and use the large amounts of data such meters generate.
Ecosystem

Hardware Partners
As of 2012[7], seven partners have hardware solutions certified for HANA.[18] In alphabetical order, they are:
1. Cisco[19]
2. Dell[20]
3. Fujitsu[21]
4. Hitachi[22]
5. HP[23]
6. IBM[24]
7. NEC[25]
Developers Community
The focal point of the community of developers on the SAP HANA platform is the SAP HANA Developer Center,[26] or "the DevCenter". The DevCenter offers general information, education materials and community forums, plus access to an SAP HANA database with free licenses:
• a 30-day evaluation,[27]
• a free developer license[28] for images hosted in the public cloud (Amazon Web Services).
Access to some materials and features may require free registration.
SAP HANA Cloud Options
In September 2011 SAP announced its intention to partner with EMC and VMware to enable a HANA-based application infrastructure cloud.[29] This platform-as-a-service (PaaS) offering includes HANA DB-as-a-service in conjunction with a choice of either a Java-based or an ABAP-based stack. Applications built for either stack have access to HANA DB through a variety of APIs. The Java-based approach, codenamed Project River, is based on the NetWeaver 7.3.1 Java application server. The ABAP-based approach is designed more for SAP's existing user base, for example in the SAP Business ByDesign suite of business applications, including ERP, CRM and supply chain management.[30] On 16 October 2012 SAP announced general availability of two SAP HANA options delivered in the cloud:
• SAP NetWeaver Cloud (now called SAP HANA Cloud) – an open standards-based application service, and
• SAP HANA One – a deployment of SAP HANA on the Amazon Web Services cloud, billed hourly. Only a 60 GB option is available, and a 24/7 instance costs $30,572 per year, though an upfront commitment with Amazon can substantially reduce the hardware portion of the cost.
Technology

Architecture
At its most basic, the architecture of the HANA database system has the following components.
• Four management services:
  • The Connection and Session Management component manages sessions and connections for database clients. Clients can use a variety of languages to communicate with the HANA database.
  • The Transaction Manager component helps with ACID compliance by coordinating transactions, controlling transactional isolation, and tracking running and closed transactions.
  • The Authorization Manager component handles all security and credentialing (see Security below).
  • The Metadata Manager component manages all metadata, such as table definitions, views, indexes and the definitions of SQL Script functions. All metadata, even of different types, is stored in a common catalog.
• Three database engine components:
  • The Calculation Engine component executes calculation models received from SQL Script (and other) compilers.
  • The Optimizer and Plan Generator component parses and optimizes client requests.
  • The Execution Engine component invokes the various in-memory processing engines and routes intermediate results between consecutive execution steps based on the optimized execution plan.
• Three in-memory storage engines:
  • The Relational Engine (see Column and row store below)
  • The Graph Engine (see Unstructured data below)
  • The Text Engine (see Unstructured data below)
• The Persistency Layer (see Storage below)
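To make the client-facing side of this architecture concrete, the sketch below opens a session through the Connection and Session Management layer using SAP's hdbcli Python driver and runs a query against a system view. It is a minimal illustration, not part of the original article: the host name and credentials are placeholders, and the port follows the conventional 3<instance>15 scheme for a default single-instance installation.

```python
# Minimal sketch: opening a client session against HANA with SAP's
# hdbcli driver (hypothetical host and credentials).
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",  # placeholder host
    port=30015,                  # conventional SQL port for instance 00
    user="DEMO",
    password="secret",
)

cursor = conn.cursor()
# M_DATABASE is a HANA system view with basic instance information.
cursor.execute("SELECT * FROM M_DATABASE")
print(cursor.fetchall())
conn.close()
```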
SAP HANA • Persistency Layer (see Storage below) Column and row store The Relational Engine supports both row- and column-oriented physical representations of relational tables. A system administrator specifies at definition time whether a new table is to be stored in a row- or in a column-oriented format. Row- and column-oriented database tables can be seamlessly combined into one SQL statement, and subsequently, tables can be moved from one representation form to the other. The row store is optimized for concurrent WRITE and READ operations. It keeps all index structures in-memory rather than persisting them on disk. It uses a technology that is optimized for concurrency and scalability in multi-core systems. Typically, Metadata or rarely accessed data is stored in a row-oriented format. Compared to this, the column store is optimized for performance of READ operations. Column-oriented data is stored in a highly compressed format in order to improve the efficiency of memory resource usage and to speed up the data transfer from storage to memory or from memory to CPU. The column store offers significant advantages in terms of data compression enabling access to larger amounts of data in main memory. Typically, user and application data is stored in a column-oriented format to benefit from the high compression rate and from the highly optimized access for selection and aggregation queries. Business Function Library The Business Function Library is a reusable library (similar to stored procedures) for business applications embedded in the HANA calculation engine. This eliminates the need for developing such calculations from scratch. Some of the functions offered are • Annual depreciation • Internal rate of return • Net present value Predictive Analysis Library Similar to the Business Function Library, the Predictive Analysis Library is a collection of compiled analytic functions for predictive analytics. Among the algorithms supported are • • • •
Business Function Library
The Business Function Library is a reusable library (similar to stored procedures) for business applications, embedded in the HANA calculation engine. It eliminates the need to develop such calculations from scratch. Some of the functions offered are:
• Annual depreciation
• Internal rate of return
• Net present value

Predictive Analysis Library
Similar to the Business Function Library, the Predictive Analysis Library is a collection of compiled analytic functions for predictive analytics. Among the algorithms supported are:
• K-means clustering
• ABC analysis
• The C4.5 algorithm
• Linear regression
R integration
R is a programming language designed for statistical analysis. An open-source project (part of the GNU Project), R is integrated into HANA DB via TCP/IP. HANA uses SQL-SHM, a shared-memory-based data exchange, to incorporate R's vertical data structure. HANA also introduces R scripts equivalent to native database operations such as join or aggregation.[31] HANA developers can write R scripts in SQL, and the types are automatically converted in HANA. R scripts can be invoked with HANA tables as both input and output in SQLScript. An R environment must be deployed to use R within SQLScript.[32]
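The documented pattern for this is a procedure whose body is R code, marked LANGUAGE RLANG: HANA ships the input tables to the external R runtime as data frames and reads the result table back. The sketch below follows that pattern under stated assumptions; the table types TT_POINTS and TT_COEF are hypothetical, and an Rserve-based R environment is assumed to already be deployed for this HANA system.

```python
# Hedged sketch of an RLANG procedure, sent as SQL text over hdbcli.
# Assumes table types TT_POINTS (X, Y DOUBLE) and TT_COEF (NAME, VALUE)
# already exist and that an R runtime is configured for this system.
cursor.execute("""
CREATE PROCEDURE fit_line (IN points TT_POINTS, OUT result TT_COEF)
LANGUAGE RLANG AS
BEGIN
    # The body is R: 'points' arrives as a data frame.
    model <- lm(Y ~ X, data = points)
    result <- data.frame(NAME  = names(model$coefficients),
                         VALUE = as.numeric(model$coefficients))
END
""")
```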
Storage
The Persistency Layer is responsible for the durability and atomicity of transactions. It manages data and log volumes on disk and provides interfaces for writing and reading data that are leveraged by all storage engines. The layer is based on the proven persistency layer of MaxDB, SAP's commercialized disk-centric relational database. It ensures that the database is restored to the most recent committed state after a restart and that transactions are either completely executed or completely undone. To achieve this efficiently, it uses a combination of write-ahead logs, shadow paging and savepoints.

Logging and transactions
HANA's persistence layer manages logging of all transactions in order to provide standard backup and restore functions. The same persistence layer manages both row and column stores. It offers regular savepoints and logging of all database transactions since the last savepoint.

Concurrency and locking
HANA DB uses the multiversion concurrency control (MVCC) principle for concurrency control. This enables long-running read transactions without blocking update transactions. MVCC, in combination with a time-travel mechanism, allows temporal queries inside the Relational Engine.
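The snapshot behavior MVCC provides can be observed from two client sessions. The sketch below is illustrative only, with placeholder connection details and a hypothetical accounts table: the reader raises its isolation level to REPEATABLE READ, so its second read still returns the snapshot value after the writer commits, and the writer is never blocked by the open read transaction.

```python
from hdbcli import dbapi

def connect():
    # Placeholder connection details.
    return dbapi.connect(address="hana.example.com", port=30015,
                         user="DEMO", password="secret")

reader, writer = connect(), connect()
reader.setautocommit(False)

rc = reader.cursor()
rc.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
rc.execute("SELECT balance FROM accounts WHERE id = 1")
before = rc.fetchone()

# A concurrent writer updates and commits without being blocked.
wc = writer.cursor()
wc.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 1")
writer.commit()

# The reader still sees the snapshot taken when its transaction began.
rc.execute("SELECT balance FROM accounts WHERE id = 1")
assert rc.fetchone() == before
reader.commit()
```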
Data retrieval

Unstructured data
Since ever more applications require enriching normally structured data with semi-structured, unstructured, or text data, the HANA database provides a text search engine in addition to its classic relational query engine.
The Graph Engine supports the efficient representation and processing of data graphs with a flexible typing system. A dedicated storage structure and a set of optimized base operations enable efficient graph operations via the domain-specific WIPE query and manipulation language. The Graph Engine is positioned to optimally support resource planning applications with huge numbers of individual resources and complex mash-up interdependencies. The flexible type system additionally supports the efficient execution of transformation processes, such as data-cleansing steps in data warehouse scenarios, to adjust the types of individual data entries, and it enables the ad-hoc integration of data from different sources.
The Text Engine provides text indexing and search abilities, such as exact search for words and phrases, fuzzy search (which tolerates typing errors) and linguistic search (which finds variations of words based on linguistic rules). Search results can be ranked, and federated search abilities support searching across multiple tables and views. This functionality is available to applications via specific SQL extensions. For text analyses, a separate Preprocessor Server leverages SAP's Text Analysis library.
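As an illustration of the SQL extensions mentioned above, the sketch below creates a full-text index on a hypothetical document table and issues a fuzzy CONTAINS query: FUZZY tolerates the misspelled search term, and SCORE() ranks the hits. All names are invented for the example.

```python
# Hypothetical document table; CONTAINS requires a full-text index.
cursor.execute("CREATE FULLTEXT INDEX idx_docs_body ON product_docs (body)")

cursor.execute("""
    SELECT doc_id, SCORE() AS relevance
    FROM product_docs
    WHERE CONTAINS(body, 'warrenty', FUZZY(0.8))  -- tolerates the typo
    ORDER BY relevance DESC
""")
for doc_id, relevance in cursor.fetchall():
    print(doc_id, relevance)
```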
Data provisioning

Replication services
Several alternative methods exist for replicating data from a source system to a HANA database. Each method handles the required replication differently, and consequently each has different strengths; which one best serves a given need depends on the specific application field and the existing system landscape.
Trigger-based data replication using SAP Landscape Transformation (LT) Replication Server is based on capturing database changes at a high level of abstraction in the source ERP system. This method is database-independent and can also parallelize database changes across multiple tables or by segmenting large table changes.
Extract, transform, load (ETL) based data replication uses SAP BusinessObjects Data Services to extract the relevant business data from a source system such as ERP and load it into a HANA database. The ETL-based method also offers options for integrating third-party data providers. Replication jobs and data flows are configured in Data Services, which permits the use of multiple data sources (including external ones) and data validation.
Transaction log-based data replication using Sybase Replication is based on capturing table changes from low-level database log files; this method is database-dependent. Database changes are propagated per database transaction and then replayed on the HANA database. This maintains consistency, but at the cost of being unable to parallelize the propagation of changes.
Operations, administration

Backup and recovery
Immediately after launch, with Service Pack 2, backup and recovery abilities were limited to recovery to the last backup (or an older data backup) and recovery to the last state before a crash. Additional backup features were implemented in Service Pack 3, including a full automatic or manual log backup option and a point-in-time recovery option. New administration features included a backup catalog, which records all backup attempts.[33]
Modeling

Non-materialized views
One implication of HANA's ability to hold a full database in memory is that computationally intensive KPI calculations can be completed rapidly compared to disk-based databases. Pre-aggregating data in cubes or storing results in materialized views is no longer necessary[34] (a minimal sketch follows the next subsection).

Information Composer
SAP HANA Information Composer is a web-based tool which allows users to upload data to a HANA database and manipulate that data by creating Information Views. In the data-acquisition portion, data can be uploaded, previewed and cleansed; in the data-manipulation portion, objects can be selected, combined and placed in Information Views, which can then be used by SAP BusinessObjects tools.[35]
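Returning to the non-materialized approach described above: instead of maintaining a pre-aggregated cube or a materialized view, the aggregate is simply computed at query time over the in-memory column store. The sketch reuses the hypothetical sales_facts table from the earlier example.

```python
# Aggregate computed on the fly; no materialized view or cube needed.
cursor.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM sales_facts
    GROUP BY region
""")
print(cursor.fetchall())
```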
Security
Security and role-based permissions are managed by the Authorization Manager in HANA DB. Besides standard database privileges such as create, update or delete, HANA DB also supports analytical privileges that represent filters or drill-down limitations on queries, as well as access privileges that control access to values with certain attributes. HANA DB components invoke the Authorization Manager whenever they need to check user privileges. Authentication can then be done either by the database itself or be delegated to an external authentication
provider, such as an LDAP directory.
Performance and scalability
SAP has stated that customers have realized gains as high as 100,000× in query performance compared to disk-based database systems.[36]

Benchmarks
In March 2011, Wintercorp,[37] an independent testing firm specializing in large-scale data management, was retained by SAP to audit test specifications and results from test runs. The test used concepts similar to those of the industry-standard TPC-H benchmark. The test data had between 600 million and 1.8 billion rows, and the test ran five analytical query types and three operational report query types. The combined throughput of analytical and operational report queries ranged between 3,007 and 10,042 queries per hour, depending on the volume of data.[38]

Scale-out architecture
To enable scalability in terms of data volumes and the number of application requests, the HANA database supports both scale-up and scale-out. For scale-up, all algorithms and data structures are designed to work on large multi-core architectures, with a particular focus on cache-aware data structures and code fragments. For scale-out, the HANA database is designed to run on a cluster of individual machines, allowing the distribution of data and query processing across multiple nodes.
Competitors
Competing in-memory databases for online transaction processing and analytics workloads include:
• IBM solidDB, a DB2 front-ended high-speed database suite
• Oracle In-Memory Database Cache,[39] a performance extension of Oracle 11g
References
[1] http://www.saphana.com
[2] http://www.sap.com/hana
[3] http://www.sdn.sap.com/irj/sdn/in-memory
[6] http://www.vldb.org/conf/2004/IND2P2.PDF
[7] http://en.wikipedia.org/w/index.php?title=SAP_HANA&action=edit
[11] http://www.microsoft.com/sqlserver/en/us/solutions-technologies/data-warehousing/pdw.aspx
[12] http://www.teradata.com/brochures/Teradata-Active-Enterprise-Data-Warehouse-5600-eb6005/?type=BR
[13] http://www.oracle.com/us/solutions/ent-performance-bi/business-intelligence/exalytics-bi-machine/overview/index.html
[14] http://www.greenplum.com/products/greenplum-dca
[15] http://www.netezza.com/data-warehouse-appliance-products/index.aspx
[16] http://www.vertica.com/the-analytics-platform/
[26] http://developer.sap.com/hana
[27] http://scn.sap.com/docs/DOC-31600
[28] http://scn.sap.com/docs/DOC-31722
[37] http://www.wintercorp.com/
[39] http://www.oracle.com/us/products/database/options/in-memory-database-cache/overview/index.html
External links
• Implementing SAP HANA, an End-to-End Perspective (http://sapexperts.wispubs.com/BI/Articles/Free-Briefing-on-our-HANA-Report-is-Now-Available?id=7646C9600F744E6C9A15A2BF5E2E79E0)
• IBM Systems and Services for SAP HANA (http://www.ibm.com/solutions/sap/hana)
• Learn about SAP HANA and In-Memory Business Data Management (http://scn.sap.com/community/hana-in-memory) and visit the SAP HANA Developer Center (http://scn.sap.com/community/developer-center/hana) on SAP Community Network (SCN)
• academy.saphana.com (http://academy.saphana.com) – short video tutorials covering numerous SAP HANA topics
• cloud.saphana.com (http://cloud.saphana.com) – learn about SAP HANA One
• New Tools for New Times – Primer on Big Data, Hadoop and 'In-memory' Data Clouds (http://practicalanalytics.wordpress.com/2011/05/15/new-tools-for-new-times-a-primer-on-big-data/)
• SAP MaxDB Overview (http://www.datadisk.co.uk/html_docs/maxdb/overview.htm)
• The State of SAP HANA – Four SAP Mentors Share Their Views, JonERP.com (http://www.jonerp.com/content/view/409/89/)
• Building High Performance Analytics Applications on SAP HANA Databases, Wen-Syan Li (http://www.hpcc.shu.edu.cn/Portals/283/zdpaper/SAP_C-HPC2011.pdf)
• Column Stores vs. Row Stores: How Different Are They Really? (http://db.csail.mit.edu/projects/cstore/abadi-sigmod08.pdf)
• SAP HANA for ERP Financials (http://espresso-tutorials.com/?page_id=429)
SAP Business One
SAP Business One is an integrated enterprise resource planning (ERP) solution for small and medium-sized businesses, as well as divisions and subsidiaries of larger companies. Produced by SAP, the solution is intended to assist companies by providing support for sales, customer relationships, inventory, operations, financials and human resources. The application has over 30,000 customers.[1] The product is sold, implemented and supported through a global network of local resellers. The rough cost of a Business One license is around 1,000 to 2,500 euros, varying from country to country.
Solution Overview
SAP Business One provides essential business functions out of the box, including:[2]
• Financial management: automates financial and accounting processes, with support for multiple currencies, budgeting and bank reconciliation
• Warehouse and production management: manages inventory across multiple warehouses, tracks stock and manages production orders based on material requirements planning
• Customer relationship management: offers sales and opportunity management and after-sales support
• Purchasing: automates procurement, from purchase order through vendor invoice
• Mobility: on iOS, SAP provides an iPhone/iPad app that interacts with SAP Business One in the back end
• Reporting and business intelligence: provides access to data for creating new reports and customizing existing ones, with Crystal Reports integration
• Analytics powered by SAP HANA: uses in-memory computing technology (the SAP HANA database) for instant access to real-time insights[3]
SAP Business One is delivered with 41 country localizations and supports 26 languages.
History
In March 2002 SAP purchased TopManage Financial Systems, an Israel-based developer of business applications, and branded its system as SAP Business One. TopManage was founded by Reuven Agassi and Gadi Shamia, both of whom took key executive positions at SAP following the acquisition.[citation needed] The acquisition allowed SAP to reach the midmarket through its partners and to gain additional business from the smaller subsidiaries of its enterprise customers.[citation needed]
In December 2004, SAP acquired the technology and assets of iLytix Systems AS, a privately held software company based in Oslo, Norway. As a result, SAP introduced new reporting and budgeting capabilities in SAP Business One called XL Reporter.[citation needed]
In July 2006 SAP acquired Praxis Software Solutions and planned to integrate the company's web-based CRM and e-commerce capabilities into SAP Business One. Financial terms of the deal were not disclosed. Minneapolis-based Praxis, a private software company, had previously been an SAP Business One partner.[citation needed]
In 2009 SAP sold the web-based CRM and e-commerce components that had come from Praxis Software Solutions. SAP currently does not offer an e-commerce solution for SAP Business One; this functionality is to be provided through the Solution Extension Program.[citation needed]
In 2011 SAP partnered with CitiXsys[4] to use its business consolidation solution to extend the reach of SAP Business One intercompany integration. The solution enables businesses to manage intercompany transactions for more than one company by automatically replicating corresponding transactions across multiple company databases.[citation needed]
At the last recorded count, SAP had 88,000 SMEs worldwide using Business One, representing 70% of its user base; calculated backwards, the remaining 30% of enterprise users amounts to 45,000. It is unclear whether this figure includes clients from newly acquired companies such as Sybase.[citation needed]
Integration
Integration needs are present among SAP Business One customers, which include small businesses as well as divisions and subsidiaries of larger corporations. Many of these companies also require integration of their existing websites, consolidation of multiple ERP systems, and other integration solutions. SAP Business One provides integration technologies for these purposes and more.
Integration can be achieved using the SDK components DI API, DI Server and UI API. The DI API provides a COM-based interface to the business objects and business logic of SAP Business One. The DI Server is similar to the DI API, except that it exposes a SOAP-based protocol. The UI API, which is also COM-based, focuses on the SAP Business One user interface. As these are APIs (application programming interfaces), using them requires software development skills; a sketch of the DI API approach follows. A simpler way to achieve integration is the SAP Business One integration framework, which enables simple (XML-based) definition of integration scenarios. Instead of complex, high-cost implementation, it enables robust, low-cost, reliable integration solutions with minimal requirements for resources and skills.
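Because the DI API is COM-based, it can be driven from any COM-capable language. The sketch below uses Python with the pywin32 package on a machine where the DI API is installed; the server, company database, credentials and business partner key are all placeholders, and error handling is reduced to the GetLastErrorCode/GetLastErrorDescription calls.

```python
# Hedged sketch: connecting to SAP Business One through the COM-based
# DI API (requires pywin32 and an installed DI API; all values below
# are placeholders).
import win32com.client

company = win32com.client.Dispatch("SAPbobsCOM.Company")
company.Server = "B1SERVER"       # placeholder database server
company.CompanyDB = "SBODEMOUS"   # placeholder company database
company.UserName = "manager"
company.Password = "secret"

if company.Connect() != 0:        # 0 indicates success
    raise RuntimeError(
        f"{company.GetLastErrorCode()}: {company.GetLastErrorDescription()}"
    )

# 2 is the documented BoObjectTypes value for oBusinessPartners.
partners = company.GetBusinessObject(2)
if partners.GetByKey("C20000"):   # placeholder business partner code
    print(partners.CardName)

company.Disconnect()
```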
Business One integration framework
The Business One integration framework is an integral part of SAP Business One. It is used to enable integrated functionality such as dashboards, the Business One mobile solution, and DATEV HR integration. As an option, it allows the customer to implement individual integrations with business partners.
InterCompany Integration Solution for SAP Business One
The InterCompany Integration Solution for SAP Business One is an out-of-the-box solution for customers that are running multiple SAP Business One systems connected together across a network of group companies or subsidiaries. It automatically synchronizes the business data and provides financial consolidation. The solution enables businesses to manage intercompany transactions for more than one company within a group of companies by replicating corresponding transactions across multiple company databases. Automating the replication of such transactions significantly reduces the amount of end user effort and manual rekeying of data to maintain intercompany trading financial statements.
Business One subsidiary integration solution
The Business One subsidiary integration solution (also known as integration for SAP NetWeaver/B1iSN) is a special solution for customers running SAP NetWeaver components in their headquarters and SAP Business One in their subsidiaries. Subsidiary integration runs centrally in the headquarters and integrates the subsidiaries into the enterprise business processes. It provides:
• Standardization and unification of business processes with the subsidiaries
• Centralized control of subsidiary interactions
• A single solution for all subsidiaries
• Rapid integration
• Subsidiary independence with the ability to leverage parent company processes
Business One integration with mobile devices and content
The Business One integration with mobile devices and content enables the connection of SAP Business One to mobile devices (currently available only on the iPhone and iPad, not on Android devices) and to non-SAP and SAP information and services. Integration is achieved with the connectivity technology that comes standard with SAP Business One.
Business One SDK
The Business One SDK provides components that allow development of add-ons to the Business One application. Developers that use the SDK can share their knowledge and experience on the SAP Development Network.[5]
Resellers & Partners
In order to become a SAP Business One reseller, a company must be part of SAP's PartnerEdge Program. Partnership starts at the Certified level and, as higher standards of knowledge and support can be shown, it can progress through Associate, Silver and Gold.
Competitors
SAP Business One competes with Microsoft Dynamics globally, and with a variety of national packages such as those sold by Sage in many countries around the world.
Recently, companies such as DynaWare [6] (EOS, the evolution of ERP), NetSuite (CRM & ERP), and Salesforce.com (CRM only) have offered similar functionality in web-based applications.
External links
• SAP Business One [7]
• SAP Business One Information site including video training & demonstrations [8]
References
[4] http://www.citixsys.com/
[5] Inside SDN: The Social Network for SAP Professionals (http://sdn.sap.com)
[6] http://www.dynaware.com/
[7] http://www.sap.com/smallbusiness/
[8] http://www.business-one-sap.com/
ERP system selection methodology
An ERP system selection methodology is a formal process for selecting an enterprise resource planning (ERP) system. Existing methodologies include:[1]
• SpecIT Independent Vendor Selection Management
• Kuiper's funnel method
• Dobrin's 3D decision support tool
• Clarkson Potomac method
Overview
Irrespective of whether the company is a multi-national, multi-million dollar organization or a small company with a turnover in the single-digit millions, the goal of system selection is to source a system that can provide functionality for all of the business processes, that will gain complete user acceptance and management approval and, most importantly, that can provide significant return on investment for the shareholders.
Since the mid-1970s, when computer packages to assist in material requirements planning were widely introduced into leading companies, software companies have striven,[2] and for the most part succeeded, to create packages that assist in all aspects of running a business, from manufacturing, supply chain management and human resources through to financials. This led to the evolution of ERP systems. Accordingly, a significant number of packages purporting to be ERP systems have entered the marketplace since 1990.[3] There are packages at the upper end of the market and a vast quantity of other packages that vendors claim to be ERP systems. There are also packages that claim to be best of breed for certain processes (such as planning) and are sold merely as add-ons to an ERP system. The options are many and this, in reality, creates a problem for the company that has to make a decision.
Attempting to select an ERP system is further complicated by the fact that some systems are geared for a discrete manufacturing environment, where a distinct number of items make up a finished product, while others are more suited to process industries such as chemical and food processing, where the ingredients are not exact and where there might be re-work and byproducts of a process.[4] In the last decade, companies have also become interested in enhanced functionality such as customer relationship management and electronic commerce capability. Given all of the potential solutions, it is not uncommon for companies to choose a system that is not the best fit for the business, and this normally leads to a more expensive implementation.[citation needed] Thus "ERP Costs can run as
high as two or three percent of revenues".[5] A proper ERP system selection methodology will deliver, within time and budget, an ERP system that is the best fit for the business processes and the users of an enterprise. Smaller enterprises likewise use such methodologies when moving their organizations toward MIS-supported processes.
Poor system selection
Companies seldom use a fully objective selection methodology when choosing an ERP system.[citation needed] Some common mistakes include:
Incomplete requirements
Because implementation of a new ERP system "requires people to do their job differently" (Wallace and Kremzar[6]), it is very important to understand user requirements, not only for current processes but also for future processes (i.e., before and after the new system is installed). Without detailed user requirements, review of systems for functional best fit rarely succeeds. The requirements must go into sufficient detail for complex processes, or for processes that may be unique to a particular business.
Reliance on vendor demos
Vendor demonstrations tend to focus on very simplistic processes. A typical demonstration shows an ideal order-to-cash process in which a customer orders a quantity of product that is in stock. The reality in most businesses is that most customers have varying and more complex commercial arrangements, and products are not always in stock.
Over-emphasis on system cost
According to Finlay and Servant, "The differential in purchase price between packages is unlikely to be the dominant factor".[7] While the cost of an ERP system is significant for a company, other important decision criteria, such as functionality, future-proofing, underlying infrastructure (network and database) and e-commerce capability, among others, may be under-stressed.
Selection bias
It is not unusual for the decision on which system to purchase to be made by one individual or by one department within the company. In these situations, an ERP system that may be excellent at one function but weak at other processes may be imposed on the entire enterprise, with serious consequences for the business.
Failure to use objective professional services
One of the main reasons for failure in system selection is the understandable lack of knowledge within the company.[citation needed] Experienced consultants can provide information on all of the packages that are available in the marketplace and on the latest functionality available in the most common packages and, most importantly, can assist the user in deciding whether a specific requirement would provide added value to the user and to the business.[citation needed] However, the professional help must be provided by objective consultants who have no affiliation with ERP system vendors. "If a consultancy has built up an expertise in the use of a particular package then it is in its interest to recommend that package to its client."[7]
Inability to understand the offering by the ERP vendor
"It is estimated that approximately 90% of enterprise system implementations are late or over budget".[8] A plausible explanation for implementations being late and over budget is that the company did not understand the offering by the vendor before the contract was signed.[citation needed] A typical example would be a scenario in which a vendor offers 5 days of services for data migration. The reality is that there is a huge amount of work required to input data onto a new system. The vendor will import the data into the new system but expects the company to put the data into a file that is easy to import. The company is also expected to extract the data from the old system, clean the data and add new data that is required by the new system.
"ERP, to be successful, requires levels of data integrity far higher than most companies have ever achieved – or even considered. Inventory records, bill of materials (BOM), formulas, recipes, routings, and other data need to become highly
accurate, complete and properly structured".[6] This typical scenario is one of many issues that cause implementations to be delayed and invariably lead to requests for more resources.
A proper system selection methodology
To address the common mistakes that lead to a poor system selection, it is important to apply key principles to the process, some of which are listed hereunder:
Structured approach
The first step in selection of a new system is to adopt a structured approach to the process. The set of practices is presented to all the stakeholders within the enterprise before the system selection process begins. Everyone needs to understand the method of gathering requirements, the invitation to tender, how potential vendors will be selected, the format of demonstrations and the process for selecting the vendor. Thus, each stakeholder is aware that the decision will be made on an objective and collective basis, and this will always lead to a high level of co-operation within the process.
Focused demonstrations
Demonstrations by potential vendors must be relevant to the business. However, it is important to understand that a considerable amount of preparation is required by vendors to perform demonstrations that are specific to a business. Therefore it is imperative that vendors are treated equally in requests for demonstrations, and it is incumbent on the company (and the objective consultant assisting the company in the selection process) to identify sufficient demonstrations to allow a proper decision to be made, while also ensuring that vendors do not opt out of the selection process due to the extent of preparation required.
Objective decision process
"Choosing which ERP to use is a complex decision that has significant economic consequences, thus it requires a multi-criterion approach."[9] There are two key points to note when the major decision makers are agreeing on the selection criteria that will be used in evaluating potential vendors. First, the criteria and the scoring system must be agreed in advance, prior to viewing any potential systems. Second, the criteria must be wide-ranging and decided upon by as many objective people as possible within and external to the enterprise; in no circumstance should people with affiliations to one or more systems be allowed to advise in this regard. (A minimal scoring sketch appears after the list below.)
Full involvement by all personnel
The decision on the system must be made by all stakeholders within the enterprise. "It requires top management leadership and participation… it involves virtually every department within the company".[6] Representatives of all users should:
• Be involved in the project initiation phase where the decision making process is agreed;
• Assist in the gathering of requirements;
• Attend the vendor demonstrations;
• Have a significant participation in the short-listing and final selection of a vendor.[10]
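The multi-criterion idea above can be made concrete with a simple weighted-scoring computation: each stakeholder-agreed criterion carries a weight, each vendor receives a raw score per criterion, and the weighted totals are compared. The following Java sketch is purely illustrative; the criteria, weights and scores are invented examples, not a recommendation.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class VendorScoring {
        public static void main(String[] args) {
            // Criterion -> weight, agreed before any demonstrations (sums to 1.0).
            Map<String, Double> weights = new LinkedHashMap<>();
            weights.put("Functional fit", 0.40);
            weights.put("Total cost", 0.20);
            weights.put("Infrastructure", 0.20);
            weights.put("E-commerce capability", 0.20);

            // Vendor -> (criterion -> raw score out of 10), gathered during demos.
            Map<String, Map<String, Double>> scores = new LinkedHashMap<>();
            scores.put("Vendor A", Map.of("Functional fit", 8.0, "Total cost", 6.0,
                    "Infrastructure", 7.0, "E-commerce capability", 5.0));
            scores.put("Vendor B", Map.of("Functional fit", 6.0, "Total cost", 9.0,
                    "Infrastructure", 8.0, "E-commerce capability", 7.0));

            // Weighted total per vendor; the highest total is the best objective fit.
            for (Map.Entry<String, Map<String, Double>> vendor : scores.entrySet()) {
                double total = 0.0;
                for (Map.Entry<String, Double> w : weights.entrySet()) {
                    total += w.getValue() * vendor.getValue().get(w.getKey());
                }
                System.out.printf("%s: weighted score %.2f%n", vendor.getKey(), total);
            }
        }
    }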
References
[2] Joseph Orlicky and George W. Plossl, Orlicky's Material Requirements Planning, 1994. ISBN 0-07-050459-8.
[3] Daniel Edmund O'Leary, Enterprise Resource Planning Systems: Systems, Life Cycle, Electronic Commerce, and Risk, Cambridge University Press, 2000. ISBN 0-521-79152-9.
[4] Thomas E. Vollmann, William L. Berry, D. Clay Whybark and F. Robert Jacobs, Manufacturing Planning and Control Systems for Supply Chain Management, 2005, page 96. ISBN 0-07-144033-X.
[5] C. Escalle, M. Cotteleer, and R. Austin, Enterprise Resource Planning (ERP), Report No 9-699-020, Harvard Business School, Cambridge, MA, USA, 1999.
[6] Thomas F. Wallace and Michael H. Kremzar, ERP: Making It Happen. ISBN 0-471-39201-4.
[7] Paul N. Finlay and Terence Servant, Financial Packaging Systems, 1987. ISBN 0-85012-584-7.
[8] Martin, M., 'An ERP Strategy', Fortune, 2 February 1998, pages 95–97.
[9] Oyku Alanbay, 'ERP Selection Using Expert Choice Software', ISAHP 2005, Honolulu, Hawaii, July 8–10, 2005.
External links • ERP Definitions and Solutions (http://www.cio.com/article/40323/ERP_Definition_and_Solutions)
SAP ERP
Developer(s): SAP AG [1]
Written in: C, C++, ABAP/4
Type: ERP
Website: SAP ERP [2]
SAP ERP is SAP AG's enterprise resource planning product, an integrated software solution that incorporates the key business functions of an organization.
Overview
SAP ERP is, in the SAP Business Suite software, the name for the modules comprising the former SAP R/3. It contains the following solutions.[3]
SAP ERP Financials:
• Accounts Payable
• Accounts Receivable
• Accounting and Financial reporting
• Risk management
• Regulatory Compliance
• Cash Flow Monitoring
• Travel Management
SAP ERP Human Capital Management:
• End-user Maintenance[4]
• HR and Payroll
• HR Process Management[4]
• HR Reporting
• Labor Force Analysis[4]
• Placement[4]
• Recruitment and Training
• Talent Management[4]
SAP ERP Operations:
• Procurement and logistics
• Product development and manufacturing
• Sales and service
• Operations analytics
Development
SAP R/3 through version 4.6c consisted of various applications on top of SAP Basis, SAP's set of middleware programs and tools. When SAP R/3 Enterprise was launched in 2002, all applications were built on top of the SAP Web Application Server. Extension sets were used to deliver new features and keep the core as stable as possible. The Web Application Server contained all the capabilities of SAP Basis.
As a result of marketing changes and changes in the industry, new versions of SAP have been released that address these changes. The first edition of mySAP ERP was launched in 2003 and bundled previously separate products, including SAP R/3 Enterprise, SAP Strategic Enterprise Management (SEM) and extension sets. The SAP Web Application Server was wrapped into NetWeaver, which was also introduced in 2003.
A complete architecture change took place with the introduction of mySAP ERP edition 2004. R/3 Enterprise was replaced with the introduction of the ERP Central Component (SAP ECC). The SAP Business Warehouse, SAP Strategic Enterprise Management and Internet Transaction Server were also merged into SAP ECC, allowing users to run them under one instance. Architectural changes were also made to support an enterprise services architecture, to transition customers to a services-oriented architecture. SAP HANA, a combination of in-memory software and hardware, can process data at extremely high speeds.
Implementation
SAP ERP consists of several modules, including utilities for marketing and sales, field service, product design and development, production and inventory control, human resources, and finance and accounting. SAP ERP collects and combines data from the separate modules to provide the company or organization with enterprise resource planning.
Although there can be major benefits for customers of SAP ERP, the implementation and training costs are expensive. Many companies experience problems when implementing SAP ERP software, such as failing to specify their operational objectives, the absence of a strong commitment or positive approach to change, failing to deal with organizational differences, failing to plan the change to SAP ERP properly, and inadequate testing. All these factors can mean the difference between a successful and an unsuccessful implementation of SAP ERP. If SAP ERP is implemented correctly, an enterprise can move from its old calculation systems to a fully integrated software package. Potential benefits include more efficient business processes, inventory reduction, and lead time reduction. An article in the journal IEEE Transactions on Engineering Management reports an industrial case in which senior management successfully dealt with a troubled SAP R/3 implementation in an international fast moving consumer goods (FMCG) company during 2001 and 2002.[]
Deployment and maintenance costs
Effectively implemented, SAP ERP systems can have cost benefits. Integration is the key in this process. "Generally, a company's level of data integration is highest when the company uses one vendor to supply all of its modules." An out-of-the-box software package has some level of integration, but it depends on the expertise of the company installing the system and on how far the package allows the users to integrate the different modules.[]
It is estimated that "for a Fortune 500 company, software, hardware, and consulting costs can easily exceed $100 million (around $50 million to $500 million). Large companies can also spend $50 million to $100 million on upgrades. Full implementation of all modules can take years," which also adds to the end price. Midsized companies (fewer than 1,000 employees) are more likely to spend around $10 million to $20 million at most, and small companies are not likely to need a fully integrated SAP ERP system unless they have the likelihood of becoming midsized, in which case the midsized figures apply.[]
Independent studies have shown that deployment and maintenance costs of a SAP solution can vary greatly depending on the organization. For example, some point out that because of the rigid model imposed by SAP tools, a lot of customization code may have to be developed and maintained to adapt to the business process.[5] Others have pointed out that a return on
investment could only be obtained when there was both a sufficient number of users and sufficient frequency of use.[6][7] Deploying SAP itself can also involve a lot of time and resources.[8]
Security
Communications
SAP systems, including client systems, communicate with each other using SAP-specific protocols (e.g., RFC and DIAG) and the HTTP and HTTPS protocols. These systems do not have encrypted communications out of the box; however, SAP does provide a free toolkit for server-to-server communications.[9] With the acquisition of relevant parts of SECUDE,[10] SAP can now provide cryptography libraries with SAP ERP for Secure Network Communications and Secure Sockets Layer.
ERP advantages and disadvantages
Advantages
• Allows easier global integration (barriers of currency exchange rates, language, and culture can be bridged automatically)
• Updates only need to be done once to be implemented company-wide
• Provides real-time information, reducing the possibility of redundancy errors
• May create a more efficient work environment for employees[]
• Vendors have past knowledge and expertise on how to best build and implement a system
• User interface is completely customizable, allowing end users to dictate the operational structure of the product
Disadvantages
• Locked into a relationship by contract and manageability with the vendor: a contract can hold a company to the vendor until it expires, and it can be unprofitable to switch vendors if switching costs are too high
• Inflexibility: vendor packages may not fit a company's business model well, and customization can be expensive
• Return on investment may take too long to be profitable
• Implementations have a risk of project failure[]
References
[2] http://www.sap.com/solutions/business-suite/erp/index.epx
[4] SAP ERP (http://iqusion.com/en/productsandservice/business/erp/365-sap-erp)
[9] SAP Cryptographic Library (http://help.sap.com/saphelp_nw70/Helpdata/en/ca/cbca6b937ea344a9a3be78a128a803/content.htm)
[10] SAP to Acquire Software Security Products and Assets from SECUDE (http://www.sap.com/press.epx?pressid=14606)
Further reading
• Gargeya, V.B. 2005, 'Success and failure factors of adopting SAP in ERP system implementation', Business Process Management Journal, Vol. 11, No. 5, pp. 501–516. Retrieved 21/04/2010.
• White Paper Review, Industry Week, October 2009, 'ERP Best Practices: The SaaS Difference', Plex Systems. Retrieved 21/04/2012.
• Malhotra, A. & Temponi, C. 2010, 'Critical decisions for ERP integration: Small business issues', International Journal of Information Management, Vol. 30, Issue 1, pp. 28–37. Retrieved 21/04/2010, Science Direct.
External links • Official SAP ERP site (http://www.sap.com/solutions/business-suite/erp/index.epx) on SAP.com • Official SAP HCM site (http://www.sap.com/lines-of-business/hr/index.epx) on SAP.com • SAP ERP (http://scn.sap.com/community/erp) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome) • SAP-ERP.com (http://www.sap-erp.com) • SAP Admin Forum.com (http://www.forum.sapadmin.co.in) discussions, documents
SAP R/3
SAP R/3 is the former name of the enterprise resource planning software produced by SAP AG. It is an enterprise-wide information system designed to coordinate all the resources, information, and activities needed to complete business processes such as order fulfillment or billing.[1]
History of SAP R/3
The first version of SAP's flagship enterprise software was a financial accounting system named R/1. This was replaced by R/2 at the end of the 1970s. SAP R/2 was a mainframe-based business application software suite that was very successful in the 1980s and early 1990s. It was particularly popular with large multinational European companies that required soft-real-time business applications with multi-currency and multi-language capabilities built in. With the advent of distributed client–server computing, SAP AG brought out a client–server version of the software called SAP R/3 (the "R" stood for "real-time data processing" and the 3 for its 3-tier architecture). The new architecture was compatible with multiple platforms and operating systems, such as Microsoft Windows or UNIX. This opened up SAP to a whole new customer base.
SAP R/3 was officially launched on 6 July 1992. It was renamed SAP ERP and later renamed again to ECC (ERP Central Component). SAP came to dominate the large business applications market over the next 10 years. SAP ECC 5.0 is the successor of SAP R/3 4.70. The newest version of the suite is SAP ECC 6.0.
Releases
• SAP R/3 Release 1.0A: 6 July 1992
• SAP R/3 Release 2.0 / 2.1: 1993
• SAP R/3 Release 3.0 / 3.1: 1995
• SAP R/3 Release 4.0B: June 1998
• SAP R/3 Release 4.5B: March 1999
• SAP R/3 Release 4.6A: 1999
• SAP R/3 Release 4.6B: December 1999
• SAP R/3 Release 4.6C: April 2001
• SAP R/3 Enterprise Release 4.70: March–December 2003[2]
• SAP R/3 Enterprise Edition 4.7
• SAP R/3 Enterprise Central Component 5.0
• SAP R/3 Enterprise Central Component 6.0
• SAP ERP 6.0 with Enhancement Packages (1, 2, 3, 4, 5, 6)
Organization
SAP R/3 was arranged into distinct functional modules, covering the typical functions in place in an organization. The most widely used modules were Financials and Controlling (FICO), Human Resources (HR), Materials Management (MM), Sales & Distribution (SD), and Production Planning (PP).[citation needed] Each module handled specific business tasks on its own but was linked to the others where applicable. For instance, an invoice from the billing transaction of Sales & Distribution would pass through to accounting, where it would appear in accounts receivable and cost of goods sold.
SAP typically focused on best-practice methodologies for driving its software processes, but more recently expanded into vertical markets. In these situations, SAP produced specialized modules (referred to as IS, or Industry Specific) geared toward a particular market segment, such as utilities or retail.
Technology
SAP based the architecture of R/3 on a three-tier client/server structure:
1. Presentation layer (GUI)
2. Application layer
3. Database layer
SAP allows the IT-supported processing of a multitude of tasks occurring in a typical company or bank. SAP ERP differs from R/3 mainly in that it is based on SAP NetWeaver: core components can be implemented in ABAP and in Java, and new functional areas are mostly no longer created as part of the previous ERP system, with closely interconnected constituents, but as self-contained components or even systems.
Application Server
An application server is a collection of executables that collectively interpret ABAP/4 (Advanced Business Application Programming / 4th Generation) programs and manage the input and output for them. When an application server is started, these executables all start at the same time. When an application server is stopped, they all shut down together.
The number of processes that start up when you bring up the application server is defined in a single configuration file called the application server profile. Each application server has a profile that specifies its characteristics when it starts up and while it is running. For example, an application server profile specifies:
• The number of processes and their types
• The amount of memory each process may use
• The length of time a user is inactive before being automatically logged off
The application layer consists of one or more application servers and a message server. Each application server contains a set of services used to run the R/3 system. In theory, only one application server is needed to run an R/3 system; in practice, the services are distributed across more than one application server, which means that not all application servers provide the full range of services.
The message server is responsible for communication between the application servers. It passes requests from one application server to another within the system. It also contains information about application server groups and the current load balancing within them, and it uses this information to choose an appropriate server when a user logs onto the system.
The application server exists to interpret ABAP/4 programs, and they only run there. If an ABAP/4 program requests information from the database, the application server sends the request to the database server.
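As a hedged illustration of such a profile, the excerpt below is a plain text file of parameter = value lines; the parameter names are standard SAP profile parameters, but the values are invented and would be sized per installation.

    # Hypothetical instance profile excerpt (values are illustrative only)
    # Number of dialog and background work processes
    rdisp/wp_no_dia = 10
    rdisp/wp_no_btc = 3
    # Extended memory pool size in MB
    em/initial_size_MB = 4096
    # Seconds of inactivity before automatic logoff
    rdisp/gui_auto_logout = 3600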
Security
Server-to-server communications can be encrypted with the SAP cryptographic library.[3] With the acquisition of relevant parts of SECUDE,[4] SAP can now provide cryptography libraries with SAP R/3 for Secure Network Communications and Secure Sockets Layer.
References
[1] Esteves, J., and Pastor, J., Enterprise Resource Planning Systems Research: An Annotated Bibliography, Communications of AIS, 7(8), pp. 2–54.
[3] SAP Cryptographic Library (SAPCRYPTOLIB) (http://help.sap.com/saphelp_sm32/helpdata/en/01/58913a594e3b12e10000000a114084/content.htm)
[4] SAP to Acquire Software Security Products and Assets from SECUDE (http://www.sap.com/press.epx?pressid=14606)
External links • SAP Modules (http://www.sap-erp.com/) • Total SAP Modules (http://www.wiki.sapamericas.com/total-number-of-sap-modules/)
SAP for Retail
"SAP for Retail" is industry-specific application software from the software vendor SAP AG, focused on the global retail industry. SAP for Retail is a set of software solutions that supports demand management, merchandise management and planning, supply chain, store operations, and core financials and human capital management functions. The solutions support most retailing processes, including:
• Contracting: Contracting makes the basic procurement decisions and updates the relevant base data.
• Demand management/forecasting: Group and analyze the demand from customers and replenish the stock accordingly.
• Purchasing: Purchasing involves the placing of orders by determining supplier, article, quantity and time. It also includes subtasks such as limit calculation, requirements calculation, purchase order quantity calculation, stock allocation, order transfer and order monitoring.
• Goods receipt: Goods receipt is the quantity-related logistical equivalent to the purchase order.
• Invoice verification: The value equivalent to the goods receipt is the invoice arrival and the invoice verification, with the subtasks invoice acquisition, invoice checking, invoice release, subsequent invoice processing and processing of subsequent conditions.
• Accounts payable: The major task of creditor accounting is handling payments, i.e. the payment for the open items resulting from the supplier's invoice.
• Marketing: This refers to operational marketing, such as updating of customer master data, and the assortment and merchandise policies (in particular assortment planning, sales planning and turnover planning, and listing and delisting of articles).
• Pricing: With pricing, the activities for business goals, product costs, competitive information, and business rules can be performed.
• Sales: Sales includes the subtasks of customer query processing, customer offer processing, creation of order records, order processing, possibly customer complaints processing, and sales representative support.
• Goods issue: Tasks of the goods issue involve route planning, planning of the order picking, the actual order picking, the goods issue acquisition and adjusting of the inventory.
• Billing: Evaluation of the customer delivery note, the various forms of invoicing the customer and the calculation of subsequent reimbursements, together with the production of any required credit and debit notes.
• Accounts receivable: The central task here is the administration of the debtor accounts and the monitoring of payments.
• Warehousing: Warehousing performs the bridging function between the procurement side and the sales or demand side. This involves the subtasks of updating the warehouse master data, stock transfers and posting transfers, cross-docking, stocktaking in the warehouse, and warehouse control.
• Point-of-Sale: The place where the customer stops to purchase goods, with the following tasks: process sales transactions and returns, process exchanges, with the ability to declare sales transactions null and void, and manage pending sales.
• Business analytics: Examples include customer analytics (e.g. customer frequency, loyalty analysis), store operations analytics (e.g. promotional sales, actual labor versus scheduled), merchandising analytics (e.g. sales by item, top selling items, vendor scorecarding, inventory analysis), and supply chain analytics (e.g. fulfillment rates, deliveries, stock overview).
History
In 1994, SAP acquired Dacos Software GmbH, located in Saarbrücken, Germany, and renamed it "SAP Retail Solutions". Before the acquisition, SAP used the R/3 components MM (Materials Management), SD (Sales and Distribution) and Warehousing for the retail industry as well. One of the most visible changes after the acquisition was a top-down menu where users could add additional functionality, data and additional organizational units to the standard components. Over time the material master became the article master, more functionality and applications were added, and more acquisitions completed the offering.
In 1999 SAP acquired Campbell Software Inc. of Chicago (US). Campbell was founded in 1989 and developed software for workforce management and personnel time recording. The workforce management solutions were called Staffworks and Campbell Time and Attendance. At the time of the acquisition, Campbell Software had approximately 71 retail customers. SAP Campbell was created as a legal subsidiary after the acquisition. For the following two years SAP Campbell continued to support its installed-base customers and also attempted to expand the original install base. Around 2001–2002 SAP Campbell was integrated back into SAP. The original install base has dwindled to few if any original customers, and solution investment was reduced and eliminated over time. Thanks to the integration into SAP Human Capital Management (SAP HCM), it is possible to manage any time account reflecting the complex overtime and bonus rules defined in the various overall labor or company agreements.[1]
In 2006 SAP acquired Khimetrics (Scottsdale, AZ) and Triversity (Canada). Khimetrics developed demand management software, which supports retailers in synchronizing their strategy and customer demand. The Khimetrics solutions included customer demand modeling and forecasting, base price optimization, promotion planning and optimization, and demand intelligence analytics. In 2006 Khimetrics had approximately 21 customers, slightly less than half of them in retail. Triversity developed a point-of-sale solution which not only sums up articles and prices, but also allows store inventory maintenance, customer relationship management and services for store and multi-channel processes.[2]
In 2009 SAP acquired SAF AG (Simulation, Analysis and Forecasting), which develops automatic ordering and forecasting software for retailers. SAF AG was founded in 1996 and is located in Tägerwilen (Switzerland), with subsidiaries in the U.S. and Slovakia. Before SAP acquired SAF, the two companies had a long history of cooperation, with SAF's order and forecasting software used as part of the SAP Forecasting and Replenishment solution.[3]
References
[1] http://www.inc.com/inc5000/2007/company-profile.html?id=1997204
[2] http://www1.sap.com/global/templates/press.epx?pressid=4959
[3] http://www1.sap.com/global/templates/press.epx?pressid=11785
External links • Official Webpage of SAP AG for the Retail Industry (http://www.sap.com/industries/retail/index.epx) • SAP for Retail (http://scn.sap.com/community/retail) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome)
SAP IS-U
SAP IS-U is SAP's industry-specific solution for the utilities industry; it is also referred to as SAP IS-U/CCS (Customer Care System). SAP Utilities (SAP IS-U) is a sales and information system that supports utility and waste disposal companies.
Further reading
• Jörg Frederick, Tobias Zierau (2011). SAP for Utilities: Funktionen, Prozesse und Customizing der Lösung für Energieversorger. Bonn, Germany: Galileo Press. ISBN 978-3-8362-1690-6.
External links
• Official SAP for Utilities User Group [1] on SAP Community Network (Implementation, Forum, Solution Map, Wiki)
• SAP IS-U Community [2]
• SAP Library - SAP Utilities [3]
• SAP Utilities [4]
• SAP E-Books [5]
• The biggest SAP forum [6]
References
[1] http://scn.sap.com/community/utilities
[2] http://www.sap-isu.net
[3] http://help.sap.com/saphelp_utilities472/helpdata/en/c6/4dce68eafc11d18a030000e829fbbd/frameset.htm
[4] http://help.sap.com/saphelp_erp60/helpdata/en/c6/4dc54beafc11d18a030000e829fbbd/content.htm
[5] http://www.sapebooks.com/info/is-utility/sap-is-utility/
[6] http://www.qnasap.com/
SAP Logon Ticket
SAP Logon Tickets represent user credentials in SAP systems. When enabled, users can access multiple SAP applications and services through SAPgui and web browsers without further username and password inputs from the user. SAP Logon Tickets can also be a vehicle for enabling single sign-on across SAP boundaries; in some cases, logon tickets can be used to authenticate into 3rd party applications such as Microsoft-based web applications.[1]
How Does It Work
1. User opens SAP
2. User logs on to SAP
3. SAP enterprise portal server issues (against the user persistence specified in the portal user management engine (UME)) an SAP Logon Ticket to the user
4. SAP Logon Ticket is stored in the user's browser as a non-persistent HTTP cookie
5. User gains access to multiple SAP applications and services
Composition
• User ID
• Validity date(s)
• Issuing system
• Digital signature
• Authentication method
Notable Properties
Below is a short list of important properties for SAP Logon Tickets.[2]
• login.ticket_client - a three-character numeric string used to indicate the client that is written into the SAP logon ticket
• login.ticket_lifetime - indicates the validity period of the ticket in terms of hours and minutes (i.e., HH:MM)
• login.ticket_portalid - yes/no/auto for writing the portal ID into the ticket
• ume.login.mdc.hosts - allows the enterprise portal to look for logon tickets from servers outside the portal domain
• ume.logon.httponlycookie - true/false for security against malicious client-side script code such as JavaScript
• ume.logon.security.enforce_secure_cookie - enables SSL communication
• ume.logon.security.relax_domain.level - determines the domains in which the SAP logon ticket is valid
Single Sign-On
SAP Logon Tickets can be used for single sign-on through the SAP Enterprise Portal. SAP provides a Web Server Filter that can be used for authentication via an HTTP header variable, and a Dynamic Link Library for verifying SSO tickets in 3rd party software, which can be used to provide native support for SAP Logon Tickets in applications written in C or Java.
Web Server Filter
The filter is available from SAP Enterprise Portal 5.0 onwards. Leveraging the filter for single sign-on requires that the web-based application support HTTP header variable authentication. The filter authenticates the logon ticket by using the enterprise portal's digital certificate. After authentication, the user's name from the logon ticket is extracted and written into the HTTP header. Additional configuration of the HTTP header variable can be done in the
filter's configuration file (i.e., remote_user_alias).
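On the application side, a web application that supports header-variable authentication simply trusts the user name the filter injected. A minimal Java servlet sketch, assuming the header was mapped to remote_user via remote_user_alias (adjust to the actual filter configuration):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HeaderAuthServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // User name injected by the web server filter after it verified
            // the SAP Logon Ticket against the portal's digital certificate.
            String user = req.getHeader("remote_user");
            resp.setContentType("text/plain");
            PrintWriter out = resp.getWriter();
            if (user == null || user.isEmpty()) {
                resp.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
                out.println("No authenticated user header present.");
            } else {
                out.println("Authenticated as: " + user);
            }
        }
    }

Note that this pattern is only safe when the application is reachable exclusively through the filtering web server; otherwise the header could be forged by a direct request.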
Integration with Identity & Access Management Platforms
• Tivoli Access Manager has developed an authentication service compatible with SAP Logon Tickets[3]
• Sun ONE Identity has developed a solution where companies can use the SAP Internet Transaction Server (ITS 2.0) and SAP Pluggable Authentication Service (PAS) for integration with SAP for single sign-on. This method uses logon tickets for single sign-on and SAPCRYPTOLIB (the SAP encryption library) for SAP server-to-server encryption. Sun's solution utilizes the dynamic libraries (DLL) external authentication method.[4]
• IBM Lotus Domino can be used as a technical ticket verifier component [5]
Availability
• Windows, Microsoft Internet Information Server
• Apache, iPlanet Web Server
Dynamic Link Library
SAP provides Java and C sample files that give some hints on how the library can be used from the source code of a high-level programming language such as Visual Basic, C or Java.
Single Sign-On to Microsoft Web Applications
Microsoft web-based applications usually support only the authentication methods basic authentication and Windows integrated authentication (Kerberos) provided by the Internet Information Server. However, Kerberos does not work well over the internet due to the typical configuration of client-side firewalls, so SSO to Microsoft backend systems in extranet scenarios is limited to the user ID/password mechanism. Based on the feature called protocol transition using constrained delegation, SAP developed the SSO22KerbMap Module. This ISAPI filter requests a constrained Kerberos ticket for users identified by a valid SAP Logon Ticket, which can be used for SSO to Microsoft web-based applications in the back end.[6]
Single Sign-On to Non-SAP Java Environments
It is possible to use SAP Logon Tickets in a non-SAP Java environment with minor custom coding.[7][8]
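A hedged sketch of that custom coding: the ticket arrives as the MYSAPSSO2 browser cookie, and its digital signature must be verified against the issuing portal's certificate before the user name inside is trusted. Only the cookie extraction below is plain servlet API; the verification step is a placeholder for whichever verification library is used (for example, SAP's sample verification classes referenced above).

    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletRequest;

    public final class LogonTicketExtractor {
        private LogonTicketExtractor() {}

        // Returns the base64-encoded SAP Logon Ticket, or null if absent.
        // MYSAPSSO2 is the cookie name SAP uses for logon tickets.
        public static String extractTicket(HttpServletRequest request) {
            Cookie[] cookies = request.getCookies();
            if (cookies == null) {
                return null;
            }
            for (Cookie cookie : cookies) {
                if ("MYSAPSSO2".equals(cookie.getName())) {
                    return cookie.getValue();
                }
            }
            return null;
        }

        // Placeholder: real code must verify the ticket's signature against the
        // issuing server's public-key certificate before trusting its contents.
        public static String verifyAndGetUser(String ticket) {
            throw new UnsupportedOperationException("plug in ticket verification here");
        }
    }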
Integration into SAP Systems
ABAP
Logon tickets allow for single sign-on into ABAP application servers.[9] However, there are prerequisites:
• Usernames need to be the same for all SAP systems that the user wants single sign-on for. Passwords can be different.
• Web browsers need to be configured to accept cookies.
• Any web servers for ABAP servers need to be placed on the same DNS domain.
• The issuing server must be able to digitally sign logon tickets (i.e., a public key and private key are required).
• Systems that accept logon tickets must have access to the issuing server's public-key certificate.
J2EE
Logon tickets allow for single sign-on into Java application servers.[10] However, there are prerequisites:
• Usernames need to be the same for all SAP systems that the user wants single sign-on for. Passwords can be different.
• Web browsers need to be configured to accept cookies.
• Any web servers for ABAP servers need to be placed on the same DNS domain.
• Clocks of systems accepting tickets must be synchronized with the issuing server's clock.
• The issuing server must be able to digitally sign logon tickets (i.e., a public key and private key are required).
• Systems that accept logon tickets must have access to the issuing server's public-key certificate.
Security Features
• Digitally signed by the SAP portal server
• Uses asymmetric cryptography to establish a unidirectional trust relationship between users and SAP systems
• Protected in transport via SSL
• Validity period that can be configured in the security settings of the SAP Enterprise Portal
Security Challenges
• SAP Logon Tickets do not utilize Secure Network Communications (SNC)
• Typical security-related issues around cookies stored in a web browser apply. Examples include copying the SAP Logon Ticket via network traffic sniffing or social engineering and storing it on another computer for access to the SAP Enterprise Portal[11]
Alternatives to SAP Logon Tickets
• Account aggregation via SAP NetWeaver
• Secure Network Communications-based single sign-on technology from independent software security providers
Secure Network Communications-Based Single Sign-On
Account Aggregation
The Enterprise Portal Server maps user information, i.e., user ID and password, to allow users to access external systems. This approach requires that changes of username and/or password in a backend application be maintained in the portal as well. Account aggregation has several drawbacks. First of all, it requires that a SAP portal user maintain a user ID and password for each application that uses account aggregation; if the password in one backend application changes, the SAP portal user has to update the stored credentials too. Though account aggregation can be used as an option where no other solution might work, it causes significant administrative overhead.
Using account aggregation to access a web-based backend system that is configured to use basic authentication results in sending a URL that contains the user name and password. A security update from Microsoft removed support for handling user names and passwords in HTTP and HTTP with Secure Sockets Layer (SSL), i.e. HTTPS, URLs in Microsoft Internet Explorer. The following URL syntax is no longer supported in Internet Explorer if this security patch has been applied:
http(s)://username:password@server/
For this reason, account aggregation is not viable for such web-based backend systems.
References
[1] Using SAP Logon Tickets for Single Sign on to Microsoft based web applications (http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/47d0cd90-0201-0010-4c86-f81b1c812e50?QuickLink=index&overridelayout=true)
[2] SAP Logon Ticket Properties (http://help.sap.com/saphelp_nw04/helpdata/en/5e/473d4124b08739e10000000a1550b0/content.htm)
[3] Authenticating a SAP login ticket in Tivoli Access Manager e-business WebSEAL (http://www.ibm.com/developerworks/tivoli/library/t-authsaptam/index.html)
[4] Single Sign-On Solution for SAP Internet Transaction Server 2.0 (http://docs.sun.com/source/816-6772-10/sapits.html)
[5] Ticket Verifier Technical Components (http://help.sap.com/erp2005_ehp_04/helpdata/EN/22/bf642724ca20418924e57c51412191/frameset.htm)
[6] Using SAP Logon Tickets for Single Sign-On (http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/47d0cd90-0201-0010-4c86-f81b1c812e50?QuickLink=index&overridelayout=true)
[7] Validating SAP Logon Tickets with Java (http://trick77.com/2008/02/07/validating-sap-logon-tickets-with-java/)
[8] MySAP SSO Support (http://www.zope.org/Members/Dirk.Datzert/MySapSsoSupport/)
[9] Using Logon Tickets (http://help.sap.com/erp2005_ehp_04/helpdata/EN/f8/18da3a82f9cc38e10000000a114084/frameset.htm)
[10] Using Logon Tickets for Single Sign-On (http://help.sap.com/erp2005_ehp_04/helpdata/EN/53/695b3ebd564644e10000000a114084/frameset.htm)
[11] W3 Security FAQ on Browser Cookies (http://www.w3.org/Security/Faq/wwwsf2.html)
External links • Configuring SAP Logon Tickets (http://help.sap.com/saphelp_nw04s/helpdata/en/5c/ b7d53ae8ab9248e10000000a114084/frameset.htm) • Sample Login Module Stacks for Using Logon Tickets (http://help.sap.com/erp2005_ehp_04/helpdata/EN/ 04/120b40c6c01961e10000000a155106/frameset.htm) • Testing the Use of Logon Tickets (http://help.sap.com/erp2005_ehp_04/helpdata/EN/b4/ cb8846dd0e7c45833e10c807328453/frameset.htm) • Configuring Component Systems for SSO with Logon Tickets (http://help.sap.com/erp2005_ehp_04/helpdata/ EN/1c/22afe3b26011d5993800508b6b8b11/frameset.htm) • Administration When Using Logon Tickets (http://help.sap.com/erp2005_ehp_04/helpdata/EN/47/ fd6f9deca159e8e10000000a42193/frameset.htm)
SAP NetWeaver
SAP NetWeaver is SAP's integrated technology computing platform and the technical foundation for many SAP applications since the SAP Business Suite. SAP NetWeaver is marketed as a service-oriented application and integration platform. It provides the development and runtime environment for SAP applications and can be used for custom development and integration with other applications and systems. SAP NetWeaver is built primarily using the ABAP programming language, but also uses C, C++, and Java EE. It also employs open standards and industry de facto standards and can be extended with, and interoperate with, technologies such as Microsoft .NET, Java EE, and IBM WebSphere.
SAP NetWeaver's release is considered a strategic move by SAP to drive enterprises to run their business on a single, integrated platform that includes both applications and technology. Industry analysts refer to this type of integrated platform offering as an "applistructure" (applications + infrastructure). According to SAP, this approach is driven by industry's need to lower IT costs through an enterprise architecture that is at once (1) more flexible; (2) better integrated with applications; (3) built on open standards to ensure future interoperability and broad integration; and (4) provided by a vendor that is financially viable for the long term.[1]
SAP is fostering relationships with system integrators and independent software vendors, many of the latter becoming "Powered by SAP NetWeaver". SAP NetWeaver is part of SAP's plan to transition to a more open, service-oriented architecture and to deliver the technical foundation of its applications on a single, integrated platform and common release cycle.
History
SAP announced the first NetWeaver release, named NetWeaver 2004, in January 2003; it was made available on March 31, 2004.[2][3] NetWeaver 7.0, also known as 2004s, was made available on October 24, 2005.[4]
Composition
NetWeaver is essentially the integrated stack of SAP technology products. The SAP Web Application Server (sometimes referred to as WebAS) is the runtime environment for the SAP applications: all of the mySAP Business Suite solutions (SRM, CRM, SCM, PLM, ERP) run on SAP WebAS.
Products
The core products that make up SAP NetWeaver include:
• SAP NetWeaver Application Server
• SAP NetWeaver Business Intelligence
• SAP NetWeaver Composition Environment (CE)
• SAP NetWeaver Enterprise Portal (EP)
• SAP NetWeaver Identity Management (IdM)
• SAP NetWeaver Master Data Management (MDM)
• SAP NetWeaver Mobile
• SAP NetWeaver Process Integration (PI)
SAP has also teamed with hardware vendors such as HP, IBM, Fujitsu, and Oracle (previously known as Sun) to deliver appliances (i.e., hardware + software) that simplify and enhance the deployment of NetWeaver components. Examples of these appliances include:
• BW Accelerator
• Enterprise Search
Development Tools
• ABAP Workbench (SE80)
• SAP NetWeaver Developer Studio (NWDS), based on Eclipse, for most of the Java part of the technology (Web Dynpro for Java, JEE, Java Dictionary, Portal Applications, etc.)
• SAP NetWeaver Development Infrastructure (NWDI)
• Visual Composer
Features
• SOAP and web services
• Interoperability with Java EE
• Interoperability with .NET (Microsoft)
• Integration of Business Intelligence
• xApps
• Duet [5]
Specifically, ERP is being extended by business process management systems (BPMS) and, as BPMS takes hold as the predominant technical platform for new applications, radical changes to ERP architecture can be expected in the years ahead. The technology has been applied to a wide range of industries and applications. SAP's NetWeaver platform is still backwards-compatible with ABAP, SAP's custom development language.
References
[2] Press release on the 2004 release of NetWeaver (http://www36.sap.com/about/newsroom/press.epx?pressID=2694)
[3] SAP NetWeaver 2004 page (http://www.sdn.sap.com/irj/sdn/nw-2004)
[4] SAP NetWeaver 7.0 page on the SAP developer network (http://www.sdn.sap.com/irj/sdn/nw-70)
[5] http://www.duet.com/
• Steffen Karch, Loren Heilig: SAP NetWeaver Roadmap. Galileo Press, 2005, ISBN 1-59229-041-8
External links • SAP Netweaver Information (http://www.sap.com/solutions/netweaver/index.epx) • SAP Netweaver Capabilities (http://scn.sap.com/community/netweaver) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome) • SAP's Help Documentation Portal (http://help.sap.com/) • SAP NetWeaver 2004 (http://scn.sap.com/docs/DOC-8490) • SAP NetWeaver 7.0 Related Documents (http://scn.sap.com/community/netweaver/ content?filterID=content~objecttype~objecttype[document]) • Download SAP NetWeaver Whitepapers (http://www.sapebooks.com/cms/WhitePaper/ SAP-Standard-Documents/XI-NetWeaver/View-category.html) • BI Expert (http://www.bi-expertonline.com/) • SAP NetWeaver Magazine (http://netweavermagazine.com/) • NetWeaver.it - SAP NetWeaver Technology News (English/Italian version) (http://www.netweaver.it/) • SAP NetWeaver Integration (http://www.liebsoft.com/SAP_Integration/), overview of SAP NetWeaver integration with privileged identity management software
SAP NetWeaver Application Server
SAP NetWeaver Application Server or SAP Web Application Server is a component of the NetWeaver solution that works as a web application server for SAP solutions. From the SAP point of view, the Web AS is the foundation on which most of its product range runs.
All ABAP application servers, including the message server, represent the application layer of the multitier architecture of an ABAP-based SAP system. These application servers execute ABAP applications and communicate with the presentation components, the database, and each other, using the message server.
Architecture
The architecture of SAP Web Application Server can be separated into five areas:
Presentation layer
In the presentation layer, the user interface can be developed with Java Server Pages (JSP), Business Server Pages (BSP), or with Web Dynpro technology. The underlying business layer provides the business content in Java or ABAP.
Business layer
The business layer consists of a J2EE-certified run-time environment that processes the requests passed from the Internet Communication Manager (ICM) and dynamically generates the responses. The business logic can be written either in ABAP or in Java based on the J2EE standard. Developers can implement business logic and persistence with Enterprise JavaBeans (EJB) using the J2EE environment. Developers can also access the business objects of applications running in the ABAP environment to benefit from their business logic and persistence.
Integration layer
The local integration engine is an integral part of SAP Web AS and allows instant connection to SAP XI. The local integration engine provides messaging services that exchange messages between the components that are connected in SAP XI.
Connectivity layer
The Internet Communication Manager (ICM) dispatches user interface requests to the presentation layer and provides a single framework for connectivity using various communication protocols. Currently, modules are available for the Hypertext Transfer Protocol (HTTP), HTTPS (HTTP running over the Secure Sockets Layer (SSL)), the Simple Mail Transfer Protocol (SMTP), the Simple Object Access Protocol (SOAP), and the Fast Common Gateway Interface (FastCGI).
Persistence layer
The persistence layer supports database independence and scalable transaction handling. Business logic can be developed completely independently of the underlying database and operating system. Database independence is also made possible by support for open standards. The database interface ensures optimized data access from within the ABAP environment through Open SQL. SAP propagates the capabilities of Open SQL for ABAP to Open SQL for Java and offers a variety of standard Application Programming Interfaces (APIs) to application programmers, such as SQLJ. Other technologies, such as Java Data Objects (JDO) and Container-Managed Persistence (CMP) for EJB, or the direct use of the Java Database Connectivity (JDBC) API, are also supported.
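To make the persistence layer's database independence concrete, here is a minimal sketch using the JDBC API, one of the access paths named above. The connection URL, credentials and table name are placeholders; in a real NetWeaver application a connection would typically come from a server-managed data source rather than DriverManager.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PersistenceSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: the business logic does not care which database
            // vendor sits behind it, which is the point of the persistence layer.
            String url = "jdbc:somevendor://dbhost:5432/sapdb";
            try (Connection conn = DriverManager.getConnection(url, "user", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT id, name FROM business_partner WHERE id = ?")) {
                stmt.setInt(1, 42);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }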
Security
Authentication
The SAP NetWeaver AS can accept multiple forms of authentication:
• SAP Logon Tickets, with appropriate configuration.[1]
• Other single sign-on technologies that utilize x.509 certificates and the combination of Secure Network Communications (SNC) and Secure Sockets Layer (SSL) for one standardized authentication platform.
Communications
The SAP NetWeaver Application Server's connectivity layer supports HTTPS, which is required for encrypted communications via the Secure Sockets Layer. It is possible to enable SSL using the SAP Cryptographic Library.[2] If a company runs traditional SAP systems that use only the RFC and DIAG protocols, Secure Network Communications is required for encrypted communications as well.[3]
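As a minimal illustration of the HTTPS side, the Java sketch below opens an SSL-encrypted connection to an ICM endpoint; the host, port and path are placeholders, and the server must already have SSL enabled via the SAP Cryptographic Library as described above.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class IcmHttpsProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder host and port; ICM HTTPS ports are installation-specific.
            URL url = new URL("https://sap-host:44300/sap/public/ping");
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            System.out.println("HTTP status: " + conn.getResponseCode());
            System.out.println("Cipher suite: " + conn.getCipherSuite());
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }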
References
[1] Configuring SAP Web AS ABAP to Accept Logon Tickets from the J2EE Engine (http://help.sap.com/erp2005_ehp_04/helpdata/EN/61/42897de269cf44b35f9395978cc9cb/frameset.htm)
[2] Enabling SSL in the SAP NetWeaver Application Server (https://cw.sdn.sap.com/cw/docs/DOC-27593?treeid=DOC-8319)
[3] Secure Network Communications (http://help.sap.com/saphelp_46c/helpdata/EN/4f/992d65446d11d189700000e8322d00/content.htm)
External links • ABAP Development (http://scn.sap.com/community/abap) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome) • SAP Netweaver Capabilities - Java Development (http://scn.sap.com/community/java) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome)
SAP NetWeaver Business Intelligence
SAP NetWeaver Business Warehouse (SAP NetWeaver BW) is the name of the business intelligence, analytical, reporting and data warehousing solution produced by SAP AG. It was originally named SAP BIW (Business Information Warehouse), then abbreviated to SAP BW, but is now known as "SAP BI" at the end-user level; in contrast, "BW" is still used to describe the underlying data warehouse area and accelerator components. It is often used by companies that run their business on SAP's operational systems.
BW is part of the SAP NetWeaver technology platform. Other components of SAP NetWeaver include SAP Enterprise Portal (EP, called SAP NetWeaver Portal as of release 7.0), Web Application Server (WAS), SAP Process Integration (PI, previously XI, i.e. eXchange Infrastructure) and Master Data Management (MDM). It also includes end-user reporting tools such as Report Designer, BEx Query Designer, BEx Web Application Designer and BEx Analyzer.
Structure
It may be helpful to consider the layers that make up the structure of SAP's BI solution:
• Extraction, Transformation and Load (ETL) layer - responsible for extracting data from a specific source, applying transformation rules, and loading it into the Data Warehouse Area (a minimal sketch of this pattern follows below).
• Data Warehouse Area - responsible for storing the information in various types of structures (e.g. DataStore Objects, InfoObjects and multidimensional structures called InfoCubes, which follow a star schema design).
• Reporting - for accessing the information in the data warehouse area and presenting it in a user-friendly manner to the analyst or business user.
• Planning and analysis - provides capabilities for the user to run simulations and perform tasks such as budget calculations.
SAP's BW solution is a pervasively employed data warehouse and contains a large amount of pre-defined business content in the form of InfoCubes, InfoObjects, authorization roles, and queries. This makes it possible to leverage SAP's experience and to reduce implementation cycles. The business content can be modified to meet an organization's specific requirements; however, this requires a longer process of customization of the pre-defined elements.
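The ETL layer described above follows a generic extract-transform-load pattern. The sketch below illustrates that pattern in miniature; it is not SAP's transformation-rule engine, and the source records and field names are made up.

# Minimal generic extract-transform-load sketch (not SAP's ETL engine).
# Source records and transformation rules are hypothetical.

def extract(source):
    # Pull raw records from a source system.
    return list(source)

def transform(records):
    # Apply transformation rules: normalise currency codes and drop
    # incomplete rows before loading into the data warehouse area.
    return [
        {**r, "currency": r["currency"].upper()}
        for r in records
        if r.get("amount") is not None
    ]

def load(records, warehouse):
    # Append cleansed records to the (here: in-memory) warehouse store.
    warehouse.extend(records)

warehouse = []
source = [{"amount": 100, "currency": "eur"}, {"amount": None, "currency": "usd"}]
load(transform(extract(source)), warehouse)
print(warehouse)  # [{'amount': 100, 'currency': 'EUR'}]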
Security
User Management
The following types of general user profiles exist:
• Executives and Knowledge Workers
• Information Consumers
However, roles and authorizations can be customized significantly.
Authentication and Single Sign-On
The following are the most common forms of authentication:
• User ID and Password
• Secure Network Communications (SNC)
• SAP Logon Ticket
• Client Certificates (e.g., X.509)
SAP NetWeaver Single Sign-On Environment
The SAP NetWeaver Portal is the main entry point within SAP NetWeaver. In order to integrate SAP NetWeaver Business Intelligence, the following two conditions must be satisfied (note that SAP logon tickets are used in this example):
1) BI trusts SAP logon tickets from EP because the public key of the EP certificate has been imported into BI.
2) EP trusts SAP logon tickets from BI because the public key of the BI certificate has been imported into EP.
Authorizations
Companies have to define who has access to which data. An authorization allows a user to perform a certain activity on a certain object in the BI system. There are two authorization concepts to consider for BI: standard authorizations[1] and analysis authorizations.[2]
Communication Channel Security
The communication channel used depends on the scenario:[3]
• Front end and application server: RFC
• Application server and application server: RFC
• SAP J2EE Engine and application server: RFC
• SAP router and application server: RFC
• Connection to the database: RFC
• Web browser and application server: HTTP, HTTPS, and SOAP
Encrypted Communications
RFC communication is not encrypted. In order to encrypt RFC communications, the SAP environment must use Secure Network Communications (SNC) or the SAP Cryptographic Library.[4] SAP recommends the use of X.509 certificates.[5]
Data Storage
Data can be protected from access by unauthorized end users by assigning analysis authorizations; data is not protected under the BI default settings. Transactional data is stored in a DataStore or an InfoCube. A DataStore serves as a storage location for transaction data at an atomic level; its data is stored in transparent, flat database tables. An InfoCube is a set of relational tables arranged according to the star schema: a large fact table in the middle surrounded by several dimension tables.
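The star schema mentioned above can be made concrete with a toy example: one central fact table joined to its surrounding dimension tables. This is generic SQL for illustration only; real InfoCube tables are generated and managed by BW, and the table names and data here are invented.

# A toy star schema: one fact table surrounded by dimension tables.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_region  (region_id  INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, region_id INTEGER, revenue REAL);
INSERT INTO dim_product VALUES (1, 'Pump'), (2, 'Valve');
INSERT INTO dim_region  VALUES (1, 'EMEA'), (2, 'Americas');
INSERT INTO fact_sales  VALUES (1, 1, 1200.0), (2, 1, 300.0), (1, 2, 800.0);
""")
# A typical analytical query joins the central fact table to its dimensions.
for row in db.execute("""
    SELECT p.name, r.name, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_region  r ON r.region_id  = f.region_id
    GROUP BY p.name, r.name
"""):
    print(row)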
History
The 7.0 version of BW was released in June 2006 as part of SAP NetWeaver 7.0 (also known as 2004s). This release included many features, such as next-generation reporting and analytical features, major data warehousing enhancements, and a memory-resident option for improving query performance called "BI Accelerator" (since renamed BW Accelerator). The BW Accelerator comes as an external appliance, i.e. complete hardware with pre-installed software, and requires a separate licence fee; it is licensed per blade server and in 16 GB increments of memory. SAP acquired Business Objects, one of the premier business intelligence software providers, via tender offers executed between December 2007 and January 2008.[6] SAP has indicated that Business Objects will operate as an independent entity to preserve the principle of application agnosticism, but has also promised tighter integration between SAP BI and Business Objects. A new BI roadmap was recently released by the combined entity.[7] According to SAP, more than 21,000 installations of SAP's BW solution exist.
References
[1] SAP Library: Standard Authorizations (http://help.sap.com/saphelp_nw70ehp1/helpdata/en/be/076f3b6c980c3be10000000a11402f/content.htm)
[2] SAP Library: Analysis Authorizations (http://help.sap.com/saphelp_nw2004s/helpdata/en/66/019441b8972e7be10000000a1550b0/content.htm)
[3] SAP NetWeaver BI Security Guide (http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80be74ea-7d55-2a10-22a0-ff664d1454fc?QuickLink=index&overridelayout=true)
[4] SAP Cryptographic Library (http://help.sap.com/saphelp_nw70/helpdata/en/ca/cbca6b937ea344a9a3be78a128a803/content.htm)
[5] Single Sign-On Technology for SAP Enterprises (http://www.itsecuritystandard.com/blog/?p=1612)
[6] SAP Acquires Business Objects in Friendly Takeover (http://www.sap.com/about/investor/bobj/index.epx)
[7] Business Intelligence Platform Roadmap (http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b050d131-7ebc-2a10-57a3-99f7554953bb?QuickLink=index&overridelayout=true&26345329490306)
Further reading
• Shiralkar, Shreekant; Palekar, Amol; Bharat (2010). A Practical Guide to SAP NetWeaver Business Warehouse 7.0 (1st ed.). SAP Press. ISBN 978-1-59229-323-0.
• McDonald; Wilmsmeier; Dixon; Inmon (2006). Mastering the SAP Business Information Warehouse (2nd ed.). Wiley. ISBN 0-471-21971-1.
• Mehrwald, Christian (2003). SAP Business Information Warehouse 3. Heidelberg: dpunkt-Verlag. ISBN 3-89864-179-1.
• Scott, Peter (2006). SAP Business Explorer (BEx) Tools. SAP Press. ISBN 1-59229-086-8.
• Scott, Peter (2009). SAP Business Explorer (BEx) Tools (2nd ed.). SAP Press. ISBN 1-59229-279-8.
• Boeke, Joerg (2009). SAP BW 7.x Reporting - Visualize Your Data. CreateSpace. ISBN 978-1-4486-0626-9.
• Shiralkar, Shreekant; Palekar, Amol (2012). Supply Chain Analytics with SAP NetWeaver Business Warehouse (1st ed.). Tata McGraw-Hill Education. ISBN 978-1-2590-0608-1.
External links
• SAP NetWeaver Business Intelligence (http://www.sap.com/solutions/netweaver/components/bi/index.epx)
• SAP NetWeaver BI Business Intelligence Basics (http://big4guy.com/web/sap-netweaver-bi-business-intelligence-basics/136/)
SAP NetWeaver Master Data Management
SAP NetWeaver Master Data Management (SAP NW MDM) is a component of SAP's NetWeaver product group and is used as a platform to consolidate, cleanse and synchronise a single version of the truth for master data within a heterogeneous application landscape. It can distribute master data internally and externally to SAP and non-SAP applications. SAP MDM is a key enabler of SAP's service-oriented architecture. A standard system architecture consists of a single central MDM server connected to client systems through SAP Exchange Infrastructure using XML documents, although connectivity without SAP XI can also be achieved. There are five standard implementation scenarios:
1. Content Consolidation - centralised cleansing, de-duplication and consolidation, enabling key mapping and consolidated group reporting in SAP BI. No re-distribution of cleansed data.
2. Master Data Harmonisation - as for Content Consolidation, plus re-distribution of cleansed, consolidated master data.
3. Central Master Data Management - as for Master Data Harmonisation, but all master data is maintained in the central MDM system. No maintenance of master data occurs in the connected client systems.
4. Rich Product Content Management - catalogue management and publishing. Uses elements of Content Consolidation to centrally store rich content (images, PDF files, video, sound, etc.) together with standard content in order to produce product catalogues (web or print). Has standard adapters to export content to desktop publishing packages.
5. Global Data Synchronization - provides consistent trade item information exchange with retailers through data hubs (e.g. 1SYNC).
Some features (for example, workflow) are not usable out of the box and require custom development to provide screens for end users.
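The Content Consolidation scenario above amounts to de-duplicating records from several client systems while retaining a key mapping back to each source. The toy sketch below illustrates the idea; real MDM uses configurable match and cleansing rules, and all data and names here are hypothetical.

# Toy consolidation step: de-duplicate master records from two client systems
# and keep a key mapping back to the source keys. All data is hypothetical.

def consolidate(records):
    golden, key_mapping = {}, {}
    for rec in records:
        # Cleansing: a crude match key; real MDM uses configurable match rules.
        match_key = (rec["name"].strip().lower(), rec["city"].strip().lower())
        golden.setdefault(match_key, rec)            # first record wins here
        key_mapping.setdefault(match_key, []).append((rec["system"], rec["id"]))
    return list(golden.values()), key_mapping

records = [
    {"system": "ERP_A", "id": "100", "name": "ACME Corp ", "city": "Berlin"},
    {"system": "ERP_B", "id": "77",  "name": "acme corp",  "city": "berlin"},
]
masters, mapping = consolidate(records)
print(len(masters))  # 1 consolidated master record
print(mapping)       # source keys, usable for consolidated group reporting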
History
SAP is currently on its second iteration of MDM software. Facing limited adoption of its initial release, SAP changed direction and in 2004 purchased a small vendor in the PIM space called A2i. That code base became the basis for the currently shipping SAP MDM 5.5, and as such, most analysts consider SAP MDM to be more of a PIM than a general MDM product at this time. SAP NetWeaver MDM 7.1 was released in ramp-up shipment in November 2008 and unrestricted shipment in May 2009. This version provides an enhanced MDM technology foundation for building pre-packaged business scenarios and integration.
Recognition
Two of the top three finalists of the Gartner MDM Excellence Awards 2009 [1] were SAP NetWeaver MDM customers: Kraft Foods and BP, ranked 1st and 2nd by a team of Gartner analysts based on their successful implementations of very complex MDM projects.
References
[1] http://www.gartner.com/it/page.jsp?id=1204916
• SAP NetWeaver Master Data Management 7.1 Help (http://help.sap.com/saphelp_nwmdm71/helpdata/en/43/D7AED5058201B4E10000000A11466F/frameset.htm)
• Gartner MDM Excellence Awards 2009 (http://www.gartner.com/it/page.jsp?id=1204916)
External links
• SAP NetWeaver MDM (http://scn.sap.com/community/mdm/netweaver-mdm) discussions, blogs, documents and videos on the SAP Community Network (SCN) (http://scn.sap.com/welcome)
• SAP MDM Page (http://www.sap.com/solutions/netweaver/components/mdm/index.epx)
• SAP Buys A2i's Technology for Master Data Management (http://www.eweek.com/article2/0,1759,1623253,00.asp)
SAP NetWeaver Portal
SAP NetWeaver Portal is one of the building blocks in the SAP NetWeaver architecture. Users need only a web browser: once they have been authenticated, the portal offers a single point of access to information, enterprise applications, and services both inside and outside an organization. The NetWeaver Portal also provides the tools to manage this knowledge, to analyze and interrelate it, and to share and collaborate. With its coherent interface, role-based content, and personalization features, the portal enables users to focus exclusively on data relevant to their daily decision-making processes. Knowledge Management offers capabilities that everyone can use to distribute and access unstructured information within an organization through a heterogeneous repository landscape. Collaboration brings users, information, and applications together to ensure successful cooperation. All collaboration tools and channels are easily accessible directly from the portal. These tools include collaboration rooms, discussion forums, instant messaging, chat, e-mail, and calendar integration. The portal is used for different purposes:
• Internationalization
• Personalization
• Integration
• Authorization
SAP NetWeaver Portal is the platform for running Web Dynpro applications or DynPage applications, created by SAP or custom-designed to connect to ERP functionality.
Authentication
SAP NetWeaver Portal allows different forms of authentication:
• username and password
• SAP Logon Tickets
• X.509 certificates (i.e., single sign-on) via Secure Network Communications or the Secure Socket Layer
Criticism
Independent analyst firm CMS Watch has chronicled SAP Portal's slow embrace of Web 2.0 technologies, such as wikis.[1] CMS Watch customer research also compared NetWeaver Portal somewhat unfavorably to competing offerings.[2]
External links
• SAP NetWeaver Portal [3] discussions, blogs and documents on the SAP Community Network (SCN)
• SAP Portal Content [4] from SCN's Portal Content Portfolio
• SAP NW Portal [5] - SAP consultant blog & career framework
References
[3] http://scn.sap.com/community/portal
[4] http://scn.sap.com/docs/DOC-23059
[5] http://www.ameyablog.com
SAP Business ByDesign
SAP Business ByDesign is a fully integrated on-demand enterprise resource planning and business management software product designed for small and medium-sized enterprises.[1] It is delivered online as a software-as-a-service offering, and is developed and marketed by SAP AG.[2]
Software as a service (SaaS)
SAP Business ByDesign is a software-as-a-service offering from SAP: business applications are delivered as an on-demand service via a secure Internet connection and a standard web browser. The solution can be run on a PC with an Internet connection and a web browser, while the software and data are stored on the host servers. Like most SaaS solutions, SAP Business ByDesign has a pay-per-use fee instead of an up-front investment.
Solution overview
The SAP Business ByDesign solution is designed to track end-to-end business processes across the following scenarios (referred to by SAP as "modules"):[3]
• Customer relationship management: This module supports processes that span marketing, sales and service activities.
• Financial management: This module helps provide companies with a single, up-to-date view of their financial condition by integrating core business processes and financials spanning financial accounting, management accounting, and cash flow management.
• Project management: This module contains an integrated project management solution.
• Supply chain management: This module covers supply chain setup management, supply chain planning and control, and manufacturing, warehousing and logistics.
• Supplier relationship management: This module focuses on relationships with suppliers and on procurement processes aimed at reducing costs and enabling self-service procurement.
• Human resources management: This module spans organizational management, human resources, and employee self-service.
• Executive management support: This module is geared towards empowering management with more control over the business and better decision making; customized real-time analytics allow managers to accurately track the most important aspects of the business.
• Compliance management: This module helps companies maintain compliance with changing laws and regulations and meet regulatory standards.
Key characteristics
• Delivered on-demand, managed by SAP
• Designed for midsize companies and small businesses
• Monthly subscription with a minimum of 10 users[4]
• Built-in business analytics
• Built-in services, e-learning, and support
• Available in Australia, Austria, Canada, China, Denmark, France, Germany, India, Italy, Mexico, the Netherlands, New Zealand, Spain, Switzerland, the United Kingdom and the United States.
History
SAP announced SAP Business ByDesign on 19 September 2007 during an event in New York; the solution was previously known under the code name "A1S".[5] In 2003, exactly 10 years after SAP R/3 was first introduced, SAP decided to pursue a new architecture, first called "Ether" and later "Enterprise SOA". SAP spent four years in R&D developing Business ByDesign as a new foundation for SAP's ERP software.[citation needed]
References
External links
• Official SAP Business ByDesign Business Centre (https://www.sme.sap.com/irj/sme/en_GB/home/)
• SAP SME Business Solutions (http://www.sap.com/sme/index.epx)
• SAP Business ByDesign Community (http://www.sdn.sap.com/irj/bpx/business-bydesign) on the SAP Business Process Expert Community (BPX)
SAP Advanced Planner and Optimizer
SAP Advanced Planner and Optimizer is a planning tool used to plan and optimize supply chain processes through a number of modules: Demand Planning, Supply Network Planning, Production Planning and Detailed Scheduling, Global Available-to-Promise, and Transportation Planning and Vehicle Scheduling.
Demand Planning
APO Demand Planning is a set of functionalities around demand management, statistical forecasting, promotion planning and life-cycle planning processes. It is an integral part of any organization's sales and operations planning process. Demand Planning can be divided into two areas: the data mart and the demand management functionalities. The data mart portion of APO DP is essentially the SAP BW component, with all BW-related transactions and objects available.
Supply Network Planning (SNP)
SNP is the APO module that performs aggregated production and distribution planning across the locations in a supply chain. It uses tools such as the Optimizer, Heuristics, Capable-to-Match (CTM) and Deployment for planning production and distribution across the various locations in the supply network.
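Statistical forecasting, one of the Demand Planning functionalities named above, can be illustrated with the simplest possible method: a moving average over recent actuals. This is a minimal sketch of the general idea, not APO's forecasting engine, and the sales history is made up.

# A minimal statistical forecast (simple moving average) of the kind a
# demand planning tool automates; the sales history below is hypothetical.

def moving_average_forecast(history, window=3):
    # Forecast the next period as the mean of the last `window` actuals.
    recent = history[-window:]
    return sum(recent) / len(recent)

monthly_sales = [120, 135, 128, 140, 150, 145]
print(moving_average_forecast(monthly_sales))  # (140 + 150 + 145) / 3 = 145.0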
External links
• SAP APO Help Information [1]
References
[1] http://help.sap.com/saphelp_apo/helpdata/en/7e/63fc37004d0a1ee10000009b38f8cf/frameset.htm
Cloud computing
Cloud computing is the use of computing resources (hardware and software) that are available in a remote location and accessible over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user's data, software and computation. Yet there is a difference compared to the traditional user-server relationship: the cloud idea goes beyond shared use of computing resources and focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but also dynamically re-allocated according to demand. For example, a cloud facility that serves European users during European business hours with one application (e.g. e-mail) may reallocate the same resources to serve American users during America's business hours with another application (e.g. a web server). This approach maximizes the use of computing power and can thereby reduce environmental impact as well, since less power, air conditioning, rack space, and so on is required. The term "moving to cloud" also refers to an organization moving away from a traditional capex model (buy the dedicated hardware and depreciate it over a period of time) to the opex model (use a shared cloud infrastructure and pay as you use it).
End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and user's data are stored on servers at a remote location. Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs and to focus on projects that differentiate their businesses instead of on infrastructure.[] Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.[][1][2]
In the business model using software as a service (SaaS), users are provided access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis; SaaS providers generally price applications using a subscription fee. Proponents claim SaaS allows a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS is that the users' data are stored on the cloud provider's server. As a result, there could be
unauthorized access to the data. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid), over a network.[] At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.
History
The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe computers became available in academia and corporations, accessible via thin clients / terminal computers, often referred to as "dumb terminals" because they were used for communications but had no internal computational capacities. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both physical access to the computer from multiple terminals and the CPU time. This eliminated periods of inactivity on the mainframe and allowed for a greater return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing.[]
In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service, but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extends this boundary to cover servers as well as the network infrastructure.[3]
As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time-sharing, experimenting with algorithms to provide the optimal use of the infrastructure, platform and applications with prioritized access to the CPU and efficiency for the end users.[]
John McCarthy opined in the 1960s that "computation may someday be organized as a public utility."[4] Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s, when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers.[5] Because of the expense of these powerful computers, many corporations and other entities could avail themselves of computing capability through time-sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN), marketed time-sharing as a commercial venture.
The development of the Internet from being document-centric via semantic data towards more and more services was described as the "Dynamic Web".[6] This contribution focused in particular on the need for better meta-data able to describe not only implementation details but also conceptual details of model-based applications.
The ubiquitous availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, have led to a tremendous growth in cloud computing.[7][][] After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing their data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" (teams small enough to feed with two pizzas) could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in
2006.[][] In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds.[8] In the same year, efforts were focused on providing quality-of-service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment.[9] By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them"[10] and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."[11] On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet.[12] Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.
Similar systems and concepts
Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to benefit from all of these technologies without the need for deep knowledge about or expertise with each one of them. The cloud aims to cut costs and to help users focus on their core business instead of being impeded by IT obstacles.[] The main enabling technology for cloud computing is virtualization. Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. Autonomic computing, in turn, automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors.[]
Users face difficult business problems every day. Cloud computing adopts concepts from service-oriented architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.
Cloud computing also leverages concepts from utility computing in order to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on-demand and to perform automatic failure recovery.
Cloud computing is a kind of grid computing; it has evolved from grid computing by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data/compute-intensive parallel applications at much more affordable prices compared to traditional parallel computing techniques.[]
Cloud computing shares characteristics with:
• Client–server model — Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requesters (clients).[13]
• Grid computing — "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."
• Mainframe computer — Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, police and secret intelligence
services, enterprise resource planning, and financial transaction processing.[14]
• Utility computing — The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."[][15]
• Peer-to-peer — A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
• Cloud gaming — Also known as on-demand gaming, a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of the client computers used to play the game.
Characteristics
Cloud computing exhibits the following key characteristics:
• Agility improves with users' ability to re-provision technological infrastructure resources.
• Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs (a minimal sketch of such a call appears after the NIST definition below).
• Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure.[16] This is purported to lower barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer IT skills are required for implementation (in-house).[] The e-FISCAL project's state-of-the-art repository[17] contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.
• Device and location independence[] enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.[]
• Virtualization technology allows servers and storage devices to be shared and utilization to be increased. Applications can be easily migrated from one physical server to another.
• Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing for:
• Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
• Peak-load capacity increases (users need not engineer for highest possible load levels)
• Utilisation and efficiency improvements for systems that are often only 10–20% utilised.[][18]
• Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.[19]
• Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time,[][20] without users having to engineer for peak loads.[21][22][23]
• Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.[]
• Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels.[24] Security is often as good as or better than in other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford.[25] However, the complexity of security is greatly increased when data is distributed over a wider area or a greater number of devices, and in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.
• Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.
The National Institute of Standards and Technology's definition of cloud computing identifies "five essential characteristics":
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. ...
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
—National Institute of Standards and Technology[]
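As promised in the API characteristic above, here is a minimal sketch of a REST-style provisioning call. The endpoint, path and payload fields are entirely hypothetical; real providers each define their own API. The call at the bottom is commented out because the host does not exist.

# Hypothetical REST-style provisioning call of the kind cloud APIs expose.
import json
import urllib.request

def provision_server(api_base, cpu, memory_gb):
    body = json.dumps({"cpu": cpu, "memory_gb": memory_gb}).encode("utf-8")
    req = urllib.request.Request(
        api_base + "/v1/servers",          # invented resource path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# provision_server("https://cloud.example.com/api", cpu=2, memory_gb=4)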
On-demand self-service
On-demand self-service allows users to obtain, configure and deploy cloud services themselves using cloud service catalogues, without requiring the assistance of IT.[26][27] This feature is listed by the National Institute of Standards and Technology (NIST) as a characteristic of cloud computing.[] The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues. Manufacturers of such templates or blueprints include BMC Software (BMC), with Service Blueprints as part of its cloud management platform;[28] Hewlett-Packard (HP), which calls its templates HP Cloud Maps;[29] RightScale;[] and Red Hat, which calls its templates CloudForms.[] The templates contain predefined configurations used by consumers to set up cloud services. The templates or blueprints provide the technical information necessary to build ready-to-use clouds.[] Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites and so on.[] The templates also include predefined web services, the operating system, the database, security configurations and load balancing.[] Cloud computing consumers use cloud templates to move applications between clouds through a self-service portal. The predefined blueprints define all that an application requires to run in different environments. For example, a template could define how the same application could be deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat.[30] The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with a push of a button.[31][32] Cloud templates can also be used by developers to create a catalog of cloud services.[33]
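The template idea can be shown in miniature: a declarative blueprint describing what an application needs, which a portal then turns into provisioning actions. Every field name below is invented for illustration; real vendors each use their own blueprint formats.

# Hypothetical cloud template: a declarative blueprint that a self-service
# portal could "deploy with a push of a button". Field names are invented.
blueprint = {
    "name": "three-tier-web-app",
    "servers": [
        {"role": "web",      "count": 2, "image": "linux-web"},
        {"role": "app",      "count": 2, "image": "linux-app"},
        {"role": "database", "count": 1, "image": "linux-db"},
    ],
    "load_balancer": {"port": 443},
}

def deploy(template):
    # A real portal would call the provider's API for each entry;
    # here we only print what would be provisioned.
    for server in template["servers"]:
        for i in range(server["count"]):
            print(f"provision {server['role']}-{i} from {server['image']}")

deploy(blueprint)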
Service models
Cloud computing providers offer their services according to several fundamental models:[][] infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), where IaaS is the most basic and each higher model abstracts from the details of the lower models. Other key components in XaaS are described in a comprehensive taxonomy model published in 2009,[34] such as Strategy-as-a-Service, Collaboration-as-a-Service, Business Process-as-a-Service, Database-as-a-Service, etc. In 2012, network as a service (NaaS) and communication as a service (CaaS) were officially included by the ITU (International Telecommunication Union) as part of the basic cloud computing models, as recognized service categories of a telecommunication-centric cloud ecosystem.[35]
Infrastructure as a service (IaaS)
In the most basic cloud-service model, providers of IaaS offer computers - physical or (more often) virtual machines - and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines as guests. Pools of hypervisors within the cloud operational support system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements.) IaaS clouds often offer additional resources such as a virtual-machine disk image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[] IaaS-cloud providers supply these resources on demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis[citation needed]: cost reflects the amount of resources allocated and consumed. Examples of IaaS providers include: Amazon EC2, AirVM, Azure Services Platform, DynDNS, Google Compute Engine, HP Cloud, iland, Joyent, LeaseWeb, Linode, NaviSite, Oracle Infrastructure as a Service, Rackspace, ReadySpace Cloud Services, ReliaCloud, SAVVIS, SingleHop, and Terremark.
Cloud communications and cloud telephony, rather than replacing local computing infrastructure, replace local telecommunications infrastructure with Voice over IP and other off-site Internet services.
Platform as a service (PaaS)
In the PaaS model, cloud providers deliver a computing platform typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offers, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually. Examples of PaaS include: AWS Elastic Beanstalk, Cloud Foundry, Heroku, Force.com, EngineYard, Mendix, OpenShift, Google App Engine, Windows Azure Cloud Services and OrangeScape.
Software as a service (SaaS)
In the SaaS model, cloud providers install and operate application software in the cloud, and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability, which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand.[] Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine may serve more than one cloud-user organization. It is common to refer to special types of cloud-based application software with a similar naming convention: desktop as a service, business process as a service, test environment as a service, communication as a service. The pricing model for SaaS applications is typically a monthly or yearly flat fee per user,[] so the price scales and adjusts if users are added or removed at any point.[36] Examples of SaaS include: Google Apps, Microsoft Office 365, Onlive, GT Nexus, Marketo, Casengo and TradeCard.
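The per-user flat-fee pricing just described reduces to simple arithmetic: the bill tracks the number of active users. The fee below is an invented figure used only to make the example concrete.

# Sketch of per-user flat-fee SaaS pricing: the monthly bill scales
# with the number of active users. The fee is hypothetical.

def monthly_bill(active_users, fee_per_user=25.0):
    return active_users * fee_per_user

print(monthly_bill(40))  # 40 users * 25.0 = 1000.0
print(monthly_bill(35))  # bill shrinks when users are removed: 875.0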
Network as a service (NaaS)
NaaS is a category of cloud services in which the capability provided to the cloud service user is to use network/transport connectivity services and/or inter-cloud network connectivity services.[] NaaS involves the optimization of resource allocations by considering network and computing resources as a unified whole.[37] Traditional NaaS services include flexible and extended VPN, and bandwidth on demand.[] The NaaS concept also covers the provision of a virtual network service by the owners of the network infrastructure to a third party (VNP – VNO).[38][39]
Cloud clients
Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones. Some of these devices - cloud clients - rely on cloud computing for all or a majority of their applications, so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5, these web user interfaces can achieve a look and feel similar to, or even better than, that of native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line-of-business applications that until now have been prevalent in thin-client computing) are delivered via a screen-sharing technology.
Deployment models
Public cloud
Public cloud applications, storage, and other resources are made available to the general public by a service provider. These services are free or offered on a pay-per-use model. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and offer access only via the Internet (direct connectivity is not offered).[]
Community cloud
Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing is realized.[]
Hybrid cloud
Hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models.[] Such composition expands deployment options for cloud services, allowing IT organizations to use public cloud computing resources to meet temporary needs.[40] This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.[] Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for extra compute resources when they are needed.[41] Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demand.[42] By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault tolerance combined with locally immediate usability without dependency on Internet connectivity. Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based cloud infrastructure. On the one hand, hybrid clouds have been said to lack the flexibility, security and certainty of in-house applications;[43] on the other, the hybrid model combines the flexibility of in-house applications with the fault tolerance and scalability of cloud-based services.
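The cloud-bursting decision described above is, at its core, a simple placement rule: fill the private cloud up to its capacity, then send the overflow to a public cloud. The sketch below illustrates that rule; the capacities and job sizes are hypothetical.

# Sketch of a cloud-bursting placement rule: run work in the private cloud
# up to its capacity and send the overflow to a public cloud.

def place_jobs(jobs, private_capacity):
    placement, used = {}, 0
    for job, size in jobs:
        if used + size <= private_capacity:
            placement[job] = "private"   # normal operation, already paid for
            used += size
        else:
            placement[job] = "public"    # burst: pay-per-use overflow capacity
    return placement

jobs = [("batch-1", 4), ("batch-2", 3), ("batch-3", 5)]
print(place_jobs(jobs, private_capacity=8))
# {'batch-1': 'private', 'batch-2': 'private', 'batch-3': 'public'}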
Private cloud
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party and hosted internally or externally.[] Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and it requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities.[44] Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management,[45] essentially "[lacking] the economic model that makes cloud computing such an
intriguing concept".[46][47]
Comparison for SaaS

                 Public cloud                        Private cloud
Initial cost     Typically zero                      Typically high
Running cost     Predictable                         Unpredictable
Customization    Impossible                          Possible
Privacy          No (host has access to the data)    Yes
Single sign-on   Impossible                          Possible
Scaling up       Easy while within defined limits    Laborious but no limits
Architecture
Cloud architecture,[48] the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.
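The loose coupling via a messaging queue mentioned above can be shown in a few lines: the producing component only enqueues messages, and the consuming component drains the queue, so neither calls the other directly. This is a minimal in-process sketch; in a real cloud the queue would be a separate messaging service, and the message contents here are invented.

# Minimal illustration of components communicating over a message queue
# rather than calling each other directly (loose coupling).
import queue
import threading

messages = queue.Queue()

def frontend():
    # The front end only enqueues work; it never calls the worker directly.
    messages.put({"task": "resize-image", "id": 1})
    messages.put(None)  # sentinel: no more work

def worker():
    while True:
        msg = messages.get()
        if msg is None:
            break
        print("processing", msg)

t = threading.Thread(target=worker)
t.start()
frontend()
t.join()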
The Intercloud
The Intercloud[49] is an interconnected global "cloud of clouds"[][50] and an extension of the Internet "network of networks" on which it is based.[51][52][53]
Cloud engineering
Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level concerns of commercialisation, standardisation, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.
Issues
Threats and opportunities of the cloud
56% of European decision-makers consider the cloud a priority for 2013 and 2014,[54] and the cloud budget may reach 30% of the overall IT budget.[citation needed] But several deterrents to the cloud remain, among them: reliability, availability of services and data, security, complexity, costs, regulations and legal issues, performance, migration, reversion, the lack of standards, and limited customization. The cloud also offers several
strong points, however: infrastructure flexibility, faster deployment of applications and data, cost control, adaptation of cloud resources to real needs, improved productivity, etc. The early-2010s cloud market is dominated by software and services in SaaS mode and IaaS (infrastructure), especially the private cloud; PaaS and the public cloud are further back.
Privacy
Privacy advocates have criticized the cloud model for giving hosting companies greater ease to control - and thus to monitor at will - communication between the host company and the end user, and to access user data (with or without permission). Instances such as the secret NSA program working with AT&T and Verizon, which recorded over 10 million telephone calls between American citizens, cause uncertainty among privacy advocates about the greater powers cloud hosting gives to telecommunication companies to monitor user activity.[55] A cloud service provider (CSP) can complicate data privacy because of the extent of virtualization (virtual machines) and cloud storage used to implement cloud services.[] In CSP operations, customer or tenant data may not remain on the same system, in the same data center, or even within the same provider's cloud; this can lead to legal concerns over jurisdiction. While there have been efforts (such as US-EU Safe Harbor) to "harmonise" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones."[56] Cloud computing poses privacy concerns because the service provider may access the data that is on the cloud at any point in time; it could accidentally or deliberately alter or even delete information.[57] Postage and delivery services company Pitney Bowes launched Volly, a cloud-based digital mailbox service, to leverage its communication management assets. It also faced the technical challenge of providing strong data security and privacy, but was able to address this concern by applying customized, application-level security, including encryption.[58]
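Application-level encryption of the kind mentioned above means data is encrypted before it ever reaches the provider's storage, so the host cannot read it. A minimal sketch follows, using the third-party Python "cryptography" package (pip install cryptography); the payload is invented, and this is an illustration of the general technique, not Pitney Bowes' implementation.

# Sketch of application-level encryption: only ciphertext is handed to the
# cloud provider; the key stays with the customer.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # kept by the customer, not the provider
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer mailbox contents")
# Only the ciphertext would be stored in the cloud...
print(cipher.decrypt(ciphertext))  # ...and decrypted client-side when needed.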
Compliance
To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt community or hybrid deployment modes that are typically more expensive and may offer restricted benefits. This is how Google is able to "manage and meet additional government policy requirements beyond FISMA"[59][60] and Rackspace Cloud or QubeSpace are able to claim PCI compliance.[61] Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds that the hand-picked set of goals and standards determined by the auditor and the auditee are often not disclosed and can vary widely.[62] Providers typically make this information available on request, under non-disclosure agreement.[63][64] Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the EU regulations on export of personal data.[65] U.S. federal agencies have been directed by the Office of Management and Budget to use a process called FedRAMP (Federal Risk and Authorization Management Program) to assess and authorize cloud products and services. Federal CIO Steven VanRoekel issued a memorandum to federal agency Chief Information Officers on December 8, 2011 defining how federal agencies should use FedRAMP. FedRAMP consists of a subset of NIST Special Publication 800-53 security controls specifically selected to provide protection in cloud environments. A subset has been defined for the FIPS 199 low categorization and the FIPS 199 moderate categorization. The FedRAMP program has also established a Joint Accreditation Board (JAB) consisting of Chief Information Officers from DoD, DHS and GSA. The JAB is responsible for establishing accreditation standards for third-party organizations that perform the assessments of cloud solutions. The JAB also reviews authorization packages and may grant provisional authorization (to operate). The federal agency consuming the service retains final responsibility and authority to operate.[66]
A multitude of laws and regulations have forced specific compliance requirements onto many companies that collect, generate or store data. These policies may dictate a wide array of data storage policies, such as how long information must be retained, the process used for deleting data, and even certain recovery plans. Below are some examples of compliance laws or regulations:
• In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires a contingency plan that includes data backups, data recovery, and data access during emergencies.
• The privacy laws of Switzerland demand that private data, including emails, be physically stored in Switzerland.
• In the United Kingdom, the Civil Contingencies Act of 2004 sets forth guidance for a business contingency plan that includes policies for data storage.
In a virtualized cloud computing environment, customers may never know exactly where their data is stored. In fact, data may be stored across multiple data centers in an effort to improve reliability, increase performance, and provide redundancies. This geographic dispersion may make it more difficult to ascertain legal jurisdiction if disputes arise.[]
Legal
As with other changes in the landscape of computing, certain legal issues arise with cloud computing, including trademark infringement, security concerns and the sharing of proprietary data resources. The Electronic Frontier Foundation has criticized the United States government for considering, during the Megaupload seizure process, that people lose property rights by storing data on a cloud computing service.[67] One important but not often mentioned problem with cloud computing is the problem of who is in "possession" of the data. If a cloud company is the possessor of the data, the possessor has certain legal rights. If the cloud company is the "custodian" of the data, then a different set of rights would apply. The next problem in the legalities of cloud computing is the problem of legal ownership of the data. Many Terms of Service agreements are silent on the question of ownership.[68] These legal issues are not confined to the time period in which the cloud-based application is actively being used. There must also be consideration for what happens when the provider-customer relationship ends. In most cases, this event will be addressed before an application is deployed to the cloud. However, in the case of provider insolvency or bankruptcy the state of the data may become blurred.[]
Vendor lock-in
Because cloud computing is still relatively new, standards are still being developed.[69] Many cloud platforms and services are proprietary, meaning that they are built on the specific standards, tools and protocols developed by a particular vendor for its particular cloud offering.[69] This can make migrating off a proprietary cloud platform prohibitively complicated and expensive.[69] Three types of vendor lock-in can occur with cloud computing:[70]
• Platform lock-in: cloud services tend to be built on one of several possible virtualization platforms, for example VMware or Xen. Migrating from a cloud provider using one platform to a cloud provider using a different platform could be very complicated.
• Data lock-in: since the cloud is still new, standards of ownership, i.e. who actually owns the data once it lives on a cloud platform, are not yet developed, which could make it complicated if cloud computing users ever decide to move data off of a cloud vendor's platform.
• Tools lock-in: if tools built to manage a cloud environment are not compatible with different kinds of both virtual and physical infrastructure, those tools will only be able to manage data or apps that live in the vendor's particular cloud environment.
Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor lock-in and aligns with enterprise data centers that are operating hybrid cloud models.[71] The absence of vendor lock-in lets cloud administrators select their choice of hypervisor for specific tasks, or deploy virtualized infrastructures to other enterprises without the need to consider the flavor of hypervisor in the other enterprise.[72] A heterogeneous cloud is considered one that includes on-premises private clouds, public clouds and software-as-a-service clouds. Heterogeneous clouds can work with environments that are not virtualized, such as traditional data centers.[73] Heterogeneous clouds also allow for the use of piece parts, such as hypervisors, servers, and storage, from multiple vendors.[74] Cloud piece parts, such as cloud storage systems, offer APIs, but these are often incompatible with each other.[75] The result is complicated migration between backends, which makes it difficult to integrate data spread across various locations.[75] This has been described as a problem of vendor lock-in.[75] The solution to this is for clouds to adopt common standards.[75] Heterogeneous cloud computing differs from homogeneous clouds, which have been described as those using consistent building blocks supplied by a single vendor.[76] Intel General Manager of high-density computing, Jason Waxman, is quoted as saying that a homogeneous system of 15,000 servers would cost $6 million more in capital expenditure and use 1 megawatt of power.[76]
Open source
Open-source software has provided the foundation for many cloud computing implementations, prominent examples being the Hadoop framework[77] and VMware's Cloud Foundry.[78] In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 intended to close a perceived legal loophole associated with free software designed to run over a network.[79]
Open standards
Most cloud providers expose APIs that are typically well-documented (often under a Creative Commons license[80]) but also unique to their implementation and thus not interoperable. Some vendors have adopted others' APIs, and a number of open standards are under development, with a view to delivering interoperability and portability.[81] As of November 2012, the open standard with the broadest industry support is probably OpenStack, founded in 2010 by NASA and Rackspace, and now governed by the OpenStack Foundation.[82] OpenStack supporters include AMD, Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM, Yahoo and now VMware.[83]
Security
As cloud computing achieves increased popularity, concerns are being voiced about the security issues introduced through adoption of this new model. The effectiveness and efficiency of traditional protection mechanisms are being reconsidered, as the characteristics of this innovative deployment model can differ widely from those of traditional architectures. An alternative perspective on the topic of cloud security is that this is but another, although quite broad, case of "applied security", and that similar security principles that apply in shared multi-user mainframe security models apply with cloud security.
The relative security of cloud computing services is a contentious issue that may be delaying their adoption.[84] Physical control of private cloud equipment is more secure than having the equipment off site and under someone else's control. Physical control and the ability to visually inspect data links and access ports are required in order to ensure data links are not compromised. Issues barring the adoption of cloud computing are due in large part to the private and public sectors' unease surrounding the external management of security-based services. It is the very nature of cloud computing-based services, private or public, that promotes external management of provided services. This delivers great incentive to cloud computing service providers to prioritize building and maintaining strong management of secure services.[85]
Security issues have been categorised into sensitive data access, data segregation, privacy, bug exploitation, recovery, accountability, malicious insiders, management console security, account control, and multi-tenancy issues. Solutions to these issues vary, from cryptography, particularly public key infrastructure (PKI), to the use of multiple cloud providers, standardisation of APIs, and improving virtual machine support and legal support.[86][87]
Cloud computing offers many benefits, but is vulnerable to threats. As cloud computing use increases, it is likely that more criminals will find new ways to exploit system vulnerabilities. Many underlying challenges and risks in cloud computing increase the threat of data compromise. To mitigate the threat, cloud computing stakeholders should invest heavily in risk assessment to ensure that the system encrypts data, establishes a trusted foundation to secure the platform and infrastructure, and builds higher assurance into auditing to strengthen compliance. Security concerns must be addressed to maintain trust in cloud computing technology.[citation needed]
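One of the cryptographic mitigations mentioned above is client-side encryption: data are encrypted before they ever reach the provider, so a compromise on the provider's side exposes only ciphertext. A minimal sketch using the third-party Python cryptography package follows; key management, the hard part in practice, is deliberately out of scope here.

from cryptography.fernet import Fernet

# The customer generates and stores the key; the provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"sensitive customer record"
ciphertext = cipher.encrypt(plaintext)  # this is what gets uploaded

# Only a holder of the key can recover the data.
assert cipher.decrypt(ciphertext) == plaintext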
Sustainability
Although cloud computing is often assumed to be a form of green computing, no published study substantiates this assumption.[88] The environmental effects of the servers are more moderate in areas where the climate favors natural cooling and renewable electricity is readily available. (The same holds true for "traditional" data centers.) Thus countries with favorable conditions, such as Finland,[89] Sweden and Switzerland,[90] are trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from energy-aware scheduling and server consolidation.[91] However, in the case of clouds distributed over data centers with different energy sources, including renewables, a small compromise on energy-consumption reduction could yield a large reduction in carbon footprint.[92]
Abuse
As with privately purchased hardware, customers can purchase cloud computing services for nefarious purposes. These include password cracking and launching attacks using the purchased services.[93] In 2009, a banking trojan illegally used the popular Amazon service as a command-and-control channel that issued software updates and malicious instructions to PCs infected by the malware.[94]
IT governance
The introduction of cloud computing requires an appropriate IT governance model to ensure a secure computing environment and to comply with all relevant organizational information technology policies.[95][96] As such, organizations need a set of capabilities that are essential for effectively implementing and managing cloud services, including demand management, relationship management, data security management, application lifecycle management, and risk and compliance management.[97] A danger lies in the explosion of companies joining the growth of cloud computing by becoming providers. However, many of the infrastructural and logistical concerns regarding the operation of cloud computing businesses are still unknown. This over-saturation may have ramifications for the industry as a whole.[98]
Consumer end storage
The increased use of cloud computing could lead to a reduction in demand for high-storage-capacity consumer end devices, as cheaper low-storage devices that stream all content via the cloud become more popular.[citation needed] In a Wired article, Jake Gardner explains that while unregulated usage is beneficial for IT and tech moguls like Amazon, the anonymous nature of the cost of consumption of cloud usage makes it difficult for businesses to evaluate and incorporate it into their business plans.[98]
Ambiguity of terminology
Outside of the information technology and software industry, the term "cloud" can be found to reference a wide range of services, some of which fall under the category of cloud computing, while others do not. The cloud is often used to refer to a product or service that is discovered, accessed and paid for over the Internet, but is not necessarily a computing resource. Examples of services that are sometimes referred to as "the cloud" include, but are not limited to, crowdsourcing, cloud printing, crowdfunding and cloud manufacturing.[99][100]
Alternatives as normal user
An alternative to provider-hosted cloud storage is to create one's own cloud and keep the data on one's own server (KYOD). Many manufacturers of home NAS devices provide this functionality out of the box. For users who want their data accessible at all times but do not trust third-party providers, this is a viable option. The downside is that the user must secure the data; the upside is that the user knows where the data is located.[101]
Origin of the cloud symbol
The term cloud computing derives from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems. The word cloud was used as a metaphor for the Internet, based on the standardized use of a cloud-like shape to denote a network on telephony schematics and, later, to depict the Internet in computer network diagrams. The cloud symbol was used to represent the Internet as early as 1994.[102][103] Servers were then shown connected to, but external to, the cloud symbol.
Research
Many universities, vendors and government organizations are investing in research around the topic of cloud computing:[104][105]
• In October 2007, the Academic Cloud Computing Initiative (ACCI) was announced as a multi-university project designed to enhance students' technical knowledge to address the challenges of cloud computing.[106]
• In April 2009, UC Santa Barbara released the first open source platform-as-a-service, AppScale, which is capable of running Google App Engine applications at scale on a multitude of infrastructures.
• In April 2009, the St Andrews Cloud Computing Co-laboratory was launched, focusing on research in the important new area of cloud computing. Unique in the UK, StACC aims to become an international centre of excellence for research and teaching in cloud computing and provides advice and information to businesses interested in cloud-based services.[107]
• In October 2010, the TClouds (Trustworthy Clouds) project was started, funded by the European Commission's 7th Framework Programme. The project's goal is to research the legal foundation and architectural design needed to build a resilient and trustworthy cloud-of-clouds infrastructure, and to develop a prototype to demonstrate its results.[108]
• In December 2010, the TrustCloud research project[109] was started by HP Labs Singapore to address transparency and accountability of cloud computing via detective, data-centric approaches[110] encapsulated in a five-layer TrustCloud Framework. The team identified the need for monitoring data life cycles and transfers in the cloud, leading to the tackling of key cloud computing security issues such as cloud data leakages, cloud accountability and cross-national data transfers in transnational clouds.
• In June 2011, two Indian universities, the University of Petroleum and Energy Studies and the University of Technology and Management, introduced cloud computing as a subject in India, in collaboration with IBM.[111]
• In July 2011, the High Performance Computing Cloud (HPCCLoud) project was kicked off, aiming to find out the possibilities of enhancing performance in cloud environments while running scientific applications, including development of the HPCCLoud Performance Analysis Toolkit. It was funded by the CIM-Returning Experts Programme under the coordination of Prof. Dr. Shajulin Benedict.
• In June 2011, the Telecommunications Industry Association developed a Cloud Computing White Paper to analyze the integration challenges and opportunities between cloud services and traditional U.S. telecommunications standards.[112]
• In February 2013, the BonFIRE project launched a multi-site cloud experimentation and testing facility. The facility provides transparent access to cloud resources, with the control and observability necessary to engineer future cloud technologies, in a way that is not restricted, for example, by current business models.[113]
References
[4] http://www.technologyreview.com/news/425623/the-cloud-imperative/
[6] Andreas Tolk. 2006. What Comes After the Semantic Web - PADS Implications for the Dynamic Web. 20th Workshop on Principles of Advanced and Distributed Simulation (PADS '06). IEEE Computer Society, Washington, DC, USA
[8] B Rochwerger, J Caceres, RS Montero, D Breitgand, E Elmroth, A Galis, E Levy, IM Llorente, K Nagin, Y Wolfsthal, E Elmroth, J Caceres, M Ben-Yehuda, W Emmerich, F Galan. "The RESERVOIR Model and Architecture for Open Federated Cloud Computing", IBM Journal of Research and Development, Vol. 53, No. 4. (2009)
[9] D Kyriazis, A Menychtas, G Kousiouris, K Oberle, T Voith, M Boniface, E Oliveros, T Cucinotta, S Berger, "A Real-time Service Oriented Infrastructure", International Conference on Real-Time and Embedded Systems (RTES 2010), Singapore, November 2010
[10] Keep an eye on cloud computing (http://www.networkworld.com/newsletters/itlead/2008/070708itlead1.html), Amy Schurr, Network World, 2008-07-08, citing the Gartner report, "Cloud Computing Confusion Leads to Opportunity". Retrieved 2009-09-11.
[11] Gartner Says Worldwide IT Spending On Pace to Surpass Trillion in 2008 (http://www.gartner.com/it/page.jsp?id=742913), Gartner, 2008-08-18. Retrieved 2009-09-11.
[40] Metzler, Jim; Taylor, Steve. (2010-08-23) "Cloud computing: Reality vs. fiction," Network World. (http://www.networkworld.com/newsletters/frame/2010/082310wan1.html)
[41] Rouse, Margaret. "Definition: Cloudbursting," May 2011. SearchCloudComputing.com. (http://searchcloudcomputing.techtarget.com/definition/cloud-bursting)
[42] Vizard, Michael. "How Cloudbursting 'Rightsizes' the Data Center", (2012-06-21). Slashdot. (http://slashdot.org/topic/datacenter/how-cloudbursting-rightsizes-the-data-center/)
[69] McKendrick, Joe. (2011-11-20) "Cloud Computing's Vendor Lock-In Problem: Why the Industry is Taking a Step Backward," Forbes.com (http://www.forbes.com/sites/joemckendrick/2011/11/20/cloud-computings-vendor-lock-in-problem-why-the-industry-is-taking-a-step-backwards/)
[70] Hinkle, Mark. (2010-06-09) "Three cloud lock-in considerations", Zenoss Blog (http://community.zenoss.org/blogs/zenossblog/2010/06/09/three-cloud-lock-in-considerations)
[71] Staten, James (2012-07-23). "Gelsinger brings the 'H' word to VMware". ZDNet. (http://www.zdnet.com/gelsinger-brings-the-h-word-to-vmware-7000001416/)
[72] Vada, Eirik T. (2012-06-11) "Creating Flexible Heterogeneous Cloud Environments", page 5, Network and System Administration, Oslo University College (https://www.duo.uio.no/bitstream/handle/123456789/34153/thesis.pdf?sequence=1)
[73] Geada, Dave. (June 2, 2011) "The case for the heterogeneous cloud," Cloud Computing Journal (http://cloudcomputing.sys-con.com/node/1841850)
[74] Burns, Paul (2012-01-02). "Cloud Computing in 2012: What's Already Happening". Neovise. (http://www.neovise.com/cloud-computing-in-2012-what-is-already-happening)
[75] Livenson, Ilja. Laure, Erwin. (2011) "Towards transparent integration of heterogeneous cloud storage platforms", pages 27–34, KTH Royal Institute of Technology, Stockholm, Sweden. (http://dl.acm.org/citation.cfm?id=1996020)
[76] Gannes, Liz. GigaOm, "Structure 2010: Intel vs. the Homogeneous Cloud," June 24, 2010. (http://gigaom.com/2010/06/24/structure-2010-intel-vs-the-homogeneous-cloud/)
[80] GoGrid Moves API Specification to Creative Commons (http://www.gogrid.com/company/press-releases/gogrid-moves-api-specification-to-creativecommons.php)
[89] Finland – First Choice for Siting Your Cloud Computing Data Center. (http://www.fincloud.freehostingcloud.com/). Retrieved 4 August 2010.
[90] Swiss Carbon-Neutral Servers Hit the Cloud. (http://www.greenbiz.com/news/2010/06/30/swiss-carbon-neutral-servers-hit-cloud). Retrieved 4 August 2010.
[91] Berl, Andreas, et al., Energy-Efficient Cloud Computing (http://comjnl.oxfordjournals.org/content/53/7/1045.short), The Computer Journal, 2010.
[92] Farrahi Moghaddam, Fereydoun, et al., Low Carbon Virtual Private Clouds (http://ieeexplore.ieee.org/search/srchabstract.jsp?tp=&arnumber=6008718), IEEE Cloud 2011.
[95] Hsu, Wen-Hsi L., "Conceptual Framework of Cloud Computing Governance Model - An Education Perspective", IEEE Technology and Engineering Education (ITEE), Vol 7, No 2 (2012) (http://www.ewh.ieee.org/soc/e/sac/itee/index.php/meem/article/view/240)
[96] Stackpole, Beth, "Governance Meets Cloud: Top Misconceptions", InformationWeek, 7 May 2012 (http://www.informationweek.com/cloud-computing/infrastructure/governance-meets-cloud-top-misconception/232901483)
[97] Joha, A and M. Janssen (2012) "Transformation to Cloud Services Sourcing: Required IT Governance Capabilities", ICST Transactions on e-Business 12(7-9) (http://eudl.eu/pdf/10.4108/eb.2012.07-09.e4)
[98] Beware: 7 Sins of Cloud Computing (http://www.wired.com/insights/2013/01/beware-7-sins-of-cloud-computing)
[99] S. Stonham and S. Nahalkova (2012) "What is the Cloud and how can it help my business?" (http://www.ovasto.com/2013/01/what-is-the-cloud-how-can-the-cloud-help-my-business/)
[100] S. Stonham and S. Nahalkova (2012), Whitepaper "Tomorrow Belongs to the Agile (PDF)" (http://www.ovasto.com/full-service-business-marketing-consultancy/strategic-agility-and-the-cloud/)
[101] http://personal-clouds.org/wiki/Main_Page
[102] Figure 8, "A network 70 is shown schematically as a cloud", US Patent 5,485,455, column 17, line 22, filed Jan 28, 1994
[103] Figure 1, "the cloud indicated at 49 in Fig. 1.", US Patent 5,790,548, column 5, lines 56-57, filed April 18, 1996
External links
• The NIST Definition of Cloud Computing (http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf). Peter Mell and Timothy Grance, NIST Special Publication 800-145 (September 2011). National Institute of Standards and Technology, U.S. Department of Commerce.
• Guidelines on Security and Privacy in Public Cloud Computing (http://nvlpubs.nist.gov/nistpubs/sp/2011/sp800-144.pdf). Wayne Jansen and Timothy Grance, NIST Special Publication 800-144 (December 2011). National Institute of Standards and Technology, U.S. Department of Commerce.
• Cloud Computing – Benefits, risks and recommendations for information security (http://www.enisa.europa.eu/activities/risk-management/files/deliverables/cloud-computing-risk-assessment). Daniele Catteddu and Giles Hogben, European Network and Information Security Agency, 2009.
• Fighting cyber crime and protecting privacy in the cloud. European Parliament – Directorate-General for Internal Policies, 2012. (http://www.europarl.europa.eu/committees/en/studiesdownload.html?languageDocument=EN&file=79050)
• Cloud Computing: What are the Security Implications?: Hearing before the Subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies of the Committee on Homeland Security, House of Representatives, One Hundred Twelfth Congress, First Session, October 6, 2011 (http://purl.fdlp.gov/GPO/gpo32975)
• Cloud Computing represents both a significant opportunity and a potential challenge (http://ptop.co.uk/solutions-and-technology/cloud-computing/)
• Cloud and Datacenter Solution Hub (http://technet.microsoft.com/en-us/cloud/private-cloud) on Microsoft TechNet
Software as a service
Software as a service (SaaS, pronounced /sæs/ or /sɑs/[1]), sometimes referred to as "on-demand software" supplied by ISVs or "application service providers" (ASPs),[2] is a software delivery model in which software and associated data are centrally hosted on the cloud. SaaS is typically accessed by users through a thin client, via a web browser. SaaS has become a common delivery model for many business applications, including office and messaging software, DBMS software, management software, CAD software, development software, virtualization,[3] accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, human resource management (HRM), content management (CM) and service desk management.[4] SaaS has been incorporated into the strategy of all leading enterprise software companies.[5] One of the biggest selling points for these companies is the potential to reduce IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.[6]
According to a Gartner Group estimate,[7] SaaS sales in 2010 reached $10 billion and were projected to increase to $12.1bn in 2011, up 20.7% from 2010. Gartner Group estimates that SaaS revenue will be more than double its 2010 figure by 2015, reaching a projected $21.3bn. Customer relationship management (CRM) continues to be the largest market for SaaS. SaaS revenue within the CRM market was forecast to reach $3.8bn in 2011, up from $3.2bn in 2010.[8]
The term "software as a service" (SaaS) is considered to be part of the nomenclature of cloud computing, along with infrastructure as a service (IaaS), platform as a service (PaaS), desktop as a service (DaaS), and backend as a service (BaaS).[9]
History
Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centers.
The expansion of the Internet during the 1990s brought about a new class of centralized computing, called application service providers (ASPs). ASPs provided businesses with the service of hosting and managing specialized business applications, with the goal of reducing costs through central administration and through the solution provider's specialization in a particular business application. Two of the world's pioneering and largest ASPs were USI, headquartered in the Washington, D.C. area, and Futurelink Corporation, headquartered in Orange County, California.
Software as a service essentially extends the idea of the ASP model. The term software as a service (SaaS), however, is commonly used in more specific settings:
• whereas most initial ASPs focused on managing and hosting third-party independent software vendors' software, as of 2012[10] SaaS vendors typically develop and manage their own software
• whereas many initial ASPs offered more traditional client-server applications, which require installation of software on users' personal computers, contemporary SaaS solutions rely predominantly on the Web and only require an internet browser to use
• whereas the software architecture used by most initial ASPs mandated maintaining a separate instance of the application for each business, as of 2012[10] SaaS solutions normally utilize a multi-tenant architecture, in which the application serves multiple businesses and users, and partitions its data accordingly
The acronym SaaS allegedly first appeared in an article called "Strategic Backgrounder: Software As A Service", internally published in February 2001 by the Software & Information Industry Association's (SIIA) eBusiness Division.[11]
DbaaS (database as a service) has emerged as a sub-variety of SaaS.[12]
Distribution
The cloud (or SaaS) model has no physical need for indirect distribution, since it is not distributed physically and is deployed almost instantaneously. The first wave of SaaS companies built their own economic model without including partner remuneration in their pricing structure (except when there were certain existing affiliations). It has not been easy for traditional software publishers to enter the SaaS model: first, because the SaaS model does not bring them the same income structure; second, because continuing to work with a distribution network was decreasing their profit margins and was damaging to the competitiveness of their product pricing. Today a landscape is taking shape with SaaS and managed service players who combine the indirect sales model with their own existing business model, and those who seek to redefine their role within the 3.0 IT economy.[13]
Pricing
Unlike traditional software, which is conventionally sold as a perpetual license with an up-front cost (and an optional ongoing support fee), SaaS providers generally price applications using a subscription fee, most commonly a monthly or annual fee. Consequently, the initial setup cost for SaaS is typically lower than for equivalent enterprise software. SaaS vendors typically price their applications based on some usage parameter, such as the number of users of the application. However, because in a SaaS environment customers' data reside with the SaaS vendor, opportunities also exist to charge per transaction, event, or other unit of value.
The relatively low cost of user provisioning (i.e., setting up a new customer) in a multi-tenant environment enables some SaaS vendors to offer applications using the freemium model. In this model, a free service is made available with limited functionality or scope, and fees are charged for enhanced functionality or larger scope. Some other SaaS applications are completely free to users, with revenue being derived from alternative sources such as advertising.
A key driver of SaaS growth is SaaS vendors' ability to provide a price that is competitive with on-premises software. This is consistent with the traditional rationale for outsourcing IT systems, which involves applying economies of scale to application operation, i.e., an outside service provider may be able to offer better, cheaper, more reliable applications.
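The cost trade-off between a perpetual license and a subscription can be made concrete with a back-of-the-envelope comparison. All figures in this sketch are hypothetical; the point is only that SaaS typically starts cheaper and the cost curves may cross over time.

def perpetual_cost(license_fee: float, annual_support: float, years: int) -> float:
    """Up-front license plus an optional ongoing support fee."""
    return license_fee + annual_support * years

def saas_cost(monthly_fee_per_user: float, users: int, years: int) -> float:
    """Subscription priced on a usage parameter (here, seat count)."""
    return monthly_fee_per_user * users * 12 * years

for years in (1, 3, 5):
    print(years,
          perpetual_cost(50_000, 10_000, years),  # 60000, 80000, 100000
          saas_cost(100, 25, years))              # 30000, 90000, 150000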
Notable service providers
• Amazon Web Services
• Concur
• ENFOS
• Google Apps
• HP Cloud Services
• HubSpot
• iCloud
• Infor
• Locus Technologies
• Meltwater Group
• Microsoft Office 365
• Oracle
• Salesforce
• ServiceSource
• Windows Azure
• Workday
• Zoho Office Suite
Architecture
The vast majority of SaaS solutions are based on a multi-tenant architecture. With this model, a single version of the application, with a single configuration (hardware, network, operating system), is used for all customers ("tenants"). To support scalability, the application is installed on multiple machines (called horizontal scaling). In some cases, a second version of the application is set up to offer a select group of customers access to pre-release versions of the application (e.g., a beta version) for testing purposes. This is contrasted with traditional software, where multiple physical copies of the software—each potentially of a different version, with a potentially different configuration, and oftentimes customized—are installed across various customer sites.
While an exception rather than the norm, some SaaS solutions do not use multi-tenancy, or use other mechanisms—such as virtualization—to cost-effectively manage a large number of customers in place of multi-tenancy.[14] Whether multi-tenancy is a necessary component of software as a service is a topic of controversy.[15]
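The row-level data partitioning implied by a multi-tenant schema can be sketched with an in-memory SQLite database (table and tenant names are illustrative): one application instance, one schema, and every query scoped by a tenant identifier.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, total REAL)")
db.execute("INSERT INTO invoices VALUES ('acme', 'INV-1', 120.0)")
db.execute("INSERT INTO invoices VALUES ('globex', 'INV-1', 75.5)")

def invoices_for(tenant_id: str):
    # Every data access is filtered by tenant: customers share one
    # schema and one application version but never see each other's rows.
    return db.execute(
        "SELECT number, total FROM invoices WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(invoices_for("acme"))  # [('INV-1', 120.0)]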
Characteristics
While not all software-as-a-service applications share all traits, the characteristics below are common among many SaaS applications:
Configuration and customization
SaaS applications similarly support what is traditionally known as application customization. In other words, like traditional enterprise software, a single customer can alter the set of configuration options (a.k.a. parameters) that affect its functionality and look-and-feel. Each customer may have its own settings (or: parameter values) for the configuration options. The application can be customized to the degree it was designed for, based on a set of predefined configuration options.
For example, to support customers' common need to change an application's look-and-feel so that the application appears to carry the customer's brand (or—if so desired—is co-branded), many SaaS applications let customers provide (through a self-service interface or by working with application provider staff) a custom logo and sometimes a set of custom colors. The customer cannot, however, change the page layout unless such an option was designed for.
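The customization model described above, vendor-defined options with per-customer values, amounts to defaults overridden per tenant. A minimal sketch follows; the option names and tenants are hypothetical.

DEFAULTS = {
    "logo_url": "/static/default-logo.png",
    "primary_color": "#003366",
    "page_layout": "standard",  # fixed by design, not customer-configurable
}

ALLOWED_OVERRIDES = {"logo_url", "primary_color"}

TENANT_SETTINGS = {
    "acme": {"logo_url": "https://acme.example/logo.png",
             "primary_color": "#CC0000"},
}

def effective_config(tenant: str) -> dict:
    overrides = TENANT_SETTINGS.get(tenant, {})
    unknown = set(overrides) - ALLOWED_OVERRIDES
    if unknown:
        # Options the vendor never exposed (e.g. page layout) are rejected.
        raise ValueError(f"not configurable: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

print(effective_config("acme")["primary_color"])  # #CC0000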
Accelerated feature delivery
SaaS applications are often updated more frequently than traditional software,[16] in many cases on a weekly or monthly basis. This is enabled by several factors:
• The application is hosted centrally, so an update is decided and executed by the provider, not by customers.
• The application only has a single configuration, making development testing faster.
• The application vendor has access to all customer data, expediting design and regression testing.
• The solution provider has access to user behavior within the application (usually via web analytics), making it easier to identify areas worthy of improvement.
Accelerated feature delivery is further enabled by agile software development methodologies.[17] Such methodologies, which evolved in the mid-1990s, provide a set of software development tools and practices to support frequent software releases.
Open integration protocols
Since SaaS applications cannot access a company's internal systems (databases or internal services), they predominantly offer integration protocols and application programming interfaces (APIs) that operate over a wide area network. Typically, these are protocols based on HTTP, REST, SOAP and JSON.
The ubiquity of SaaS applications and other Internet services and the standardization of their API technology has spawned development of mashups, which are lightweight applications that combine data, presentation and functionality from multiple services, creating a compound service. Mashups further differentiate SaaS applications from on-premises software as the latter cannot be easily integrated outside a company's firewall.
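In practice such wide-area integration usually means plain HTTP carrying JSON. A minimal sketch using only the Python standard library follows; the endpoint, token, and response shape are hypothetical, but real SaaS APIs follow the same pattern.

import json
import urllib.request

req = urllib.request.Request(
    "https://api.example-saas.com/v1/contacts",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>",
             "Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    contacts = json.load(resp)

for contact in contacts.get("items", []):
    print(contact["name"])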
Collaborative (and "social") functionality
Inspired by the success of online social networks and other so-called web 2.0 functionality, many SaaS applications offer features that let their users collaborate and share information. For example, many project management applications delivered in the SaaS model offer—in addition to traditional project planning functionality—collaboration features letting users comment on tasks and plans and share documents within and outside an organization. Several other SaaS applications let users vote on and offer new feature ideas. While some collaboration-related functionality is also integrated into on-premises software, (implicit or explicit) collaboration between users or different customers is only possible with centrally hosted software.
Adoption drivers
Several important changes to the software market and technology landscape have facilitated acceptance and growth of SaaS solutions:
• The growing use of web-based user interfaces by applications, along with the proliferation of associated practices (e.g., web design), continuously decreased the need for traditional client-server applications. Consequently, traditional software vendors' investment in software based on fat clients has become a disadvantage (mandating ongoing support), opening the door for new software vendors offering a user experience perceived as more "modern".
• The standardization of web page technologies (HTML, JavaScript, CSS), the increasing popularity of web development as a practice, and the introduction and ubiquity of web application frameworks like Ruby on Rails or languages like PHP gradually reduced the cost of developing new SaaS solutions, and enabled new solution providers to come up with competitive solutions, challenging traditional vendors.
• The increasing penetration of broadband Internet access enabled remote centrally hosted applications to offer speed comparable to on-premises software.
• The standardization of the HTTPS protocol as part of the web stack provided universally available lightweight security that is sufficient for most everyday applications.
• The introduction and wide acceptance of lightweight integration protocols such as REST and SOAP enabled affordable integration of SaaS applications (residing in the cloud) with internal applications over wide area networks, and with other SaaS applications.
Adoption challenges
Some limitations slow down the acceptance of SaaS and prohibit it from being used in some cases:
• Since data are being stored on the vendor's servers, data security becomes an issue.
• SaaS applications are hosted in the cloud, far away from the application users. This introduces latency into the environment; so, for example, the SaaS model is not suitable for applications that demand response times in the milliseconds.
• Multi-tenant architectures, which drive cost efficiency for SaaS solution providers, limit customization of applications for large clients, inhibiting such applications from being used in scenarios (applicable mostly to large enterprises) for which such customization is necessary.
• Some business applications require access to or integration with a customer's current data. When such data are large in volume or sensitive (e.g., end users' personal information), integrating them with remotely hosted software can be costly or risky, or can conflict with data governance regulations.
• Constitutional search/seizure warrant laws do not protect all forms of SaaS dynamically stored data. The end result is that a link is added to the chain of security where access to the data, and, by extension, misuse of these data, are limited only by the assumed honesty of third parties or government agencies able to access the data on their own recognizance.[18][19][20][21]
• Switching SaaS vendors may involve the slow and difficult task of transferring very large data files over the Internet.
• Organizations that adopt SaaS may find they are forced into adopting new versions, which might result in unforeseen training costs or an increase in the probability that a user might make an error.
• Relying on an Internet connection means that data are transferred to and from a SaaS firm at Internet speeds, rather than the potentially higher speeds of a firm's internal network.[22]
The standard model also has limitations:
• Compatibility with hardware, other software, and operating systems.[23]
• Licensing and compliance problems (unauthorized copies of the software program floating around the organization).
• Maintenance, support, and patch revision processes.
Data escrow
Software-as-a-service data escrow is the process of keeping a copy of critical software-as-a-service application data with an independent third party. Similar to source code escrow, where critical software source code is stored with an independent third party, SaaS data escrow applies the same logic to the data within a SaaS application. It allows companies to protect and insure all the data that reside within SaaS applications, protecting against data loss.[24]
There are many and varied reasons for considering SaaS data escrow, including concerns about vendor bankruptcy, unplanned service outages and potential data loss or corruption. Many businesses are also keen to ensure that they are complying with their own data governance standards or want improved reporting and business analytics against their SaaS data. Research conducted by Clearpace Software Ltd. into the growth of SaaS showed that 85 percent of the participants wanted to take a copy of their SaaS data. A third of these participants wanted a copy on a daily basis.[25]
Criticism
One notable criticism of SaaS comes from Richard Stallman of the Free Software Foundation, who considers the use of SaaS to be a violation of the principles of free software.[26] According to Stallman:
• With SaaS, the users do not have a copy of the executable file: it is on the server, where the users can't see or touch it. Thus it is impossible for them to ascertain what it really does, and impossible to change it. SaaS inherently gives the server operator the power to change the software in use, or the users' data being operated on.
• Users must send their data to the server in order to use them. This has the same effect as spyware: the server operator gets the data. She/he gets it with no special effort, by the nature of SaaS. This gives the server operator unjust power over the user.
References
[10] http://en.wikipedia.org/w/index.php?title=Software_as_a_service&action=edit
[19] Adhikari, Richard. "Why Richard Stallman Takes No Shine to Chrome." (http://www.technewsworld.com/story/Why-Richard-Stallman-Takes-No-Shine-to-Chrome-71469.html) LinuxInsider, 15 December 2010.
[21] Examples:
SOAP
SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for exchanging structured information in the implementation of web services in computer networks. It relies on XML Information Set for its message format, and usually relies on other application layer protocols, most notably Hypertext Transfer Protocol (HTTP) or Simple Mail Transfer Protocol (SMTP), for message negotiation and transmission.
Characteristics
SOAP can form the foundation layer of a web services protocol stack, providing a basic messaging framework upon which web services can be built. This XML-based protocol consists of three parts: an envelope, which defines what is in the message and how to process it; a set of encoding rules for expressing instances of application-defined datatypes; and a convention for representing procedure calls and responses.
SOAP has three major characteristics: extensibility (security and WS-routing are among the extensions under development), neutrality (SOAP can be used over any transport protocol such as HTTP, SMTP, TCP, or JMS) and independence (SOAP allows for any programming model).
As an example of how SOAP procedures can be used, a SOAP message could be sent to a web site that has web services enabled, such as a real-estate price database, with the parameters needed for a search. The site would then return an XML-formatted document with the resulting data, e.g., prices, location, features. With the data being returned in a standardized machine-parsable format, it can then be integrated directly into a third-party web site or application.
The SOAP architecture consists of several layers of specifications for: message format, message exchange patterns (MEP), underlying transport protocol bindings, message processing models, and protocol extensibility. SOAP is the successor of XML-RPC, though it borrows its transport and interaction neutrality and the envelope/header/body from elsewhere (probably from WDDX).[citation needed]
History
SOAP was designed as an object-access protocol in 1998 by Dave Winer, Don Box, Bob Atkinson, and Mohsen Al-Ghosein for Microsoft, where Atkinson and Al-Ghosein were working at the time.[1] The SOAP specification[2] is currently maintained by the XML Protocol Working Group[3] of the World Wide Web Consortium. SOAP originally stood for "Simple Object Access Protocol" but this acronym was dropped with version 1.2 of the standard.[4] Version 1.2 became a W3C recommendation on June 24, 2003. The acronym is sometimes confused with SOA, which stands for service-oriented architecture, but the acronyms are unrelated.
After SOAP was first introduced, it became the underlying layer of a more complex set of web services, based on Web Services Description Language (WSDL) and Universal Description Discovery and Integration (UDDI). These services, especially UDDI, have proved to be of far less interest, but an appreciation of them gives a more complete understanding of the expected role of SOAP compared to how web services have actually evolved.
Specification
The SOAP specification defines the messaging framework, which consists of:
• The SOAP processing model, defining the rules for processing a SOAP message
• The SOAP extensibility model, defining the concepts of SOAP features and SOAP modules
• The SOAP underlying protocol binding framework, describing the rules for defining a binding to an underlying protocol that can be used for exchanging SOAP messages between SOAP nodes
• The SOAP message construct, defining the structure of a SOAP message
Processing model
The SOAP processing model describes a distributed processing model, its participants, the SOAP nodes, and how a SOAP receiver processes a SOAP message. The following SOAP nodes are defined:
SOAP sender
A SOAP node that transmits a SOAP message.
SOAP receiver
A SOAP node that accepts a SOAP message.
SOAP message path
The set of SOAP nodes through which a single SOAP message passes.
Initial SOAP sender (Originator)
The SOAP sender that originates a SOAP message at the starting point of a SOAP message path.
SOAP intermediary
A SOAP intermediary is both a SOAP receiver and a SOAP sender and is targetable from within a SOAP message. It processes the SOAP header blocks targeted at it and acts to forward a SOAP message towards an ultimate SOAP receiver.
Ultimate SOAP receiver
The SOAP receiver that is a final destination of a SOAP message. It is responsible for processing the contents of the SOAP body and any SOAP header blocks targeted at it. In some circumstances, a SOAP message might not reach an ultimate SOAP receiver, for example because of a problem at a SOAP intermediary. An ultimate SOAP receiver cannot also be a SOAP intermediary for the same SOAP message.
Transport methods
Both SMTP and HTTP are valid application layer protocols used as transport for SOAP, but HTTP has gained wider acceptance as it works well with today's Internet infrastructure; specifically, HTTP works well with network firewalls. SOAP may also be used over HTTPS (which is the same protocol as HTTP at the application level, but uses an encrypted transport protocol underneath) with either simple or mutual authentication; this is the advocated WS-I method to provide web service security, as stated in the WS-I Basic Profile 1.1. This is a major advantage over other distributed protocols like GIOP/IIOP or DCOM, which are normally filtered by firewalls. SOAP over AMQP is yet another possibility that some implementations support.[5] SOAP also has the advantage over DCOM that it is unaffected by security rights being configured on the machines, which would require knowledge of both transmitting and receiving nodes. This allows SOAP to be loosely coupled in a way that is not possible with DCOM. There is also the SOAP-over-UDP OASIS standard.
Message format
XML Information Set was chosen as the standard message format because of its widespread use by major corporations and open source development efforts. Typically, XML Information Set is serialized as XML. A wide variety of freely available tools significantly eases the transition to a SOAP-based implementation. The somewhat lengthy syntax of XML can be both a benefit and a drawback. While it promotes readability for humans, facilitates error detection, and avoids interoperability problems such as byte order (endianness), it can slow processing speed and can be cumbersome. For example, CORBA, GIOP, ICE, and DCOM use much shorter, binary message formats. On the other hand, hardware appliances are available to accelerate processing of XML messages.[6][7] Binary XML is also being explored as a means of streamlining the throughput requirements of XML. XML messages, by their self-documenting nature, usually have more 'overhead' (headers, footers, nested tags, delimiters) than actual data, in contrast to earlier protocols where the overhead was usually a relatively small percentage of the overall message. In financial messaging, SOAP was found to result in a 2–4 times larger message than previous protocols such as FIX (Financial Information Exchange) and CDR (Common Data Representation).[8]
XML Information Set does not have to be serialized as XML. For instance, CSV and JSON XML-infoset representations exist. There is also no need to specify a generic transformation framework. The concept of SOAP bindings allows for specific bindings for a specific application. The drawback is that both the senders and receivers have to support this newly defined binding.
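To illustrate the point that the infoset need not be serialized as XML, the sketch below parses a tiny, made-up XML payload and re-emits the same information as JSON using only the Python standard library. Real infoset-to-JSON mappings are more involved (attributes, namespaces), but the principle is the same.

import json
import xml.etree.ElementTree as ET

# One infoset, two serializations.
xml_doc = "<stock><name>IBM</name><price>144.50</price></stock>"
root = ET.fromstring(xml_doc)

as_json = json.dumps({root.tag: {child.tag: child.text for child in root}})
print(as_json)  # {"stock": {"name": "IBM", "price": "144.50"}}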
Example message
POST /InStock HTTP/1.1
Host: www.example.org
Content-Type: application/soap+xml; charset=utf-8
Content-Length: 299
SOAPAction: "http://www.w3.org/2003/05/soap-envelope"

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock">
      <m:StockName>IBM</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>
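For illustration, the request above can be issued from Python's standard library. The envelope matches the example message; the endpoint is the same placeholder host, so this is a sketch rather than a call to a live service.

import urllib.request

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <m:GetStockPrice xmlns:m="http://www.example.org/stock">
      <m:StockName>IBM</m:StockName>
    </m:GetStockPrice>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://www.example.org/InStock",  # placeholder endpoint
    data=envelope.encode("utf-8"),     # supplying a body makes this a POST
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
# A live service would answer with a soap:Envelope carrying the result.
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))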
Technical critique
Advantages
• SOAP is versatile enough to allow for the use of different transport protocols. The standard stacks use HTTP as a transport protocol, but other protocols such as JMS[9] and SMTP[10] are also usable.
• Since the SOAP model tunnels fine in the HTTP post/response model, it can tunnel easily over existing firewalls and proxies, without modifications to the SOAP protocol, and can use the existing infrastructure.
Disadvantages
• When using standard implementations and the default SOAP/HTTP binding, the XML infoset is serialized as XML. Because of the verbose XML format, SOAP can be considerably slower than competing middleware technologies such as CORBA or ICE. This may not be an issue when only small messages are sent.[11] To improve performance for the special case of XML with embedded binary objects, the Message Transmission Optimization Mechanism was introduced.
• When relying on HTTP as a transport protocol and not using WS-Addressing or an ESB, the roles of the interacting parties are fixed. Only one party (the client) can use the services of the other. Developers must use polling instead of notification in these common cases.
References
[2] http://www.w3.org/TR/soap/
[3] http://www.w3.org/2000/xp/Group/
[5] http://www.amqp.org/confluence/display/AMQP/1-0+SOAP+Mapping
• Benoît Marchal, "Soapbox: Why I'm using SOAP", IBM (http://www.ibm.com/developerworks/xml/library/x-soapbx1.html)
• Uche Ogbuji, "Tutorial: XML messaging with SOAP" (http://www.ibm.com/developerworks/edu/x-dw-cosoap-i.html), Principal Consultant, Fourthought, Inc.
External links • W3C SOAP page (http://www.w3.org/TR/soap/) • SOAP Version 1.2 specification (http://www.w3.org/TR/soap12/) • Create SOAP Message in Java (http://shivasoft.in/blog/java/create-soap-message-using-java/)
Project management
Project management is the discipline of planning, organizing, motivating, and controlling resources to achieve specific goals. A project is a temporary endeavor with a defined beginning and end (usually time-constrained, and often constrained by funding or deliverables), undertaken to meet unique goals and objectives,[1] typically to bring about beneficial change or added value. The temporary nature of projects stands in contrast with business as usual (or operations),[2] which are repetitive, permanent, or semi-permanent functional activities to produce products or services. In practice, the management of these two systems is often quite different, and as such requires the development of distinct technical skills and management strategies.
The primary challenge of project management is to achieve all of the project goals[3] and objectives while honoring the preconceived constraints.[4] The primary constraints are scope, time, quality and budget.[5] The secondary, and more ambitious, challenge is to optimize the allocation of necessary inputs and integrate them to meet pre-defined objectives.
History
Until 1900, civil engineering projects were generally managed by creative architects, engineers, and master builders themselves, for example Vitruvius (first century BC), Christopher Wren (1632–1723), Thomas Telford (1757–1834) and Isambard Kingdom Brunel (1806–1859).[6] It was in the 1950s that organizations started to systematically apply project management tools and techniques to complex engineering projects.[7]
Roman soldiers building a fortress, Trajan's Column 113 AD
As a discipline, project management developed from several fields of application including civil construction, engineering, and heavy defense activity.[8] Two forefathers of project management are Henry Gantt, called the father of planning and control techniques,[9] who is famous for his use of the Gantt chart as a project management tool (alternatively the Harmonogram, first proposed by Karol Adamiecki[10]); and Henri Fayol, for his creation of the five management functions that form the foundation of the body of knowledge associated with project and program management.[11] Both Gantt and Fayol were students of Frederick Winslow Taylor's theories of scientific management. His work is the forerunner to modern project management tools including the work breakdown structure (WBS) and resource allocation. The 1950s marked the beginning of the modern project management era, in which core engineering fields came together to work as one.
Henry Gantt (1861–1919), the father of planning and control techniques
Project management became recognized as a distinct discipline arising from the management discipline combined with the engineering model.[12] In the United States, prior to the 1950s, projects were managed on an ad-hoc basis, using mostly Gantt charts and informal techniques and tools. At that time, two mathematical project-scheduling models were developed. The "Critical Path Method" (CPM) was developed as a joint venture between DuPont Corporation and Remington Rand Corporation for managing plant maintenance projects. The "Program Evaluation and Review Technique" (PERT) was developed by Booz Allen Hamilton as part of the United States Navy's (in conjunction with the Lockheed Corporation) Polaris missile submarine program.[13] These mathematical techniques quickly spread into many private enterprises.
At the same time, as project-scheduling models were being developed, technology for project cost estimating, cost management, and engineering economics was evolving, with pioneering work by Hans Lang and others. In 1956, the American Association of Cost Engineers (now AACE International; the Association for the Advancement of Cost Engineering) was formed by early practitioners of project management and the associated specialties of planning and scheduling, cost estimating, and cost/schedule control (project control). AACE continued its pioneering work and in 2006 released the first integrated process for portfolio, program and project management (Total Cost Management Framework).
PERT network chart for a seven-month project with five milestones
The International Project Management Association (IPMA) was founded in Europe in 1967,[14] as a federation of several national project management associations. IPMA maintains its federal structure today and now includes member associations on every continent except Antarctica. IPMA offers a Four Level Certification program based on the IPMA Competence Baseline (ICB).[15] The ICB covers technical, contextual, and behavioral competencies. In 1969, the Project Management Institute (PMI) was formed in the USA.[16] PMI publishes A Guide to the Project Management Body of Knowledge (PMBOK Guide), which describes project management practices that are common to "most projects, most of the time." PMI also offers multiple certifications.
Approaches
There are a number of approaches to managing project activities, including lean, iterative, incremental, and phased approaches. Regardless of the methodology employed, careful consideration must be given to the overall project objectives, timeline, and cost, as well as the roles and responsibilities of all participants and stakeholders.
The traditional approach
A traditional phased approach identifies a sequence of steps to be completed. In the "traditional approach",[17] five developmental components of a project can be distinguished (four stages plus control):
1. initiation
2. planning and design
3. execution and construction
4. monitoring and controlling systems
5. completion

Typical development phases of an engineering project

Not all projects will have every stage, as projects can be terminated before they reach completion. Some projects do not follow a structured planning and/or monitoring process. And some projects will go through steps 2, 3 and 4 multiple times.
Many industries use variations of these project stages. For example, when working on a brick-and-mortar design and construction, projects will typically progress through stages like pre-planning, conceptual design, schematic design, design development, construction drawings (or contract documents), and construction administration. In software development, this approach is often known as the waterfall model,[18] i.e., one series of tasks after another in linear sequence. In software development many organizations have adapted the Rational Unified Process (RUP) to fit this methodology, although RUP does not require or explicitly recommend this practice. Waterfall development works well for small, well-defined projects, but often fails in larger projects of an undefined and ambiguous nature. The Cone of Uncertainty explains some of this, as the planning made in the initial phase of the project suffers from a high degree of uncertainty. This becomes especially true as software development is often the realization of a new or novel product. In projects where requirements have not been finalized and can change, requirements management is used to develop an accurate and complete definition of the behavior of software that can serve as the basis for software development. While the terms may differ from industry to industry, the actual stages typically follow common steps to problem solving: "defining the problem, weighing options, choosing a path, implementation and evaluation."
PRINCE2
PRINCE2 is a structured approach to project management, released in 1996 as a generic project management method.[19] It combined the original PROMPT methodology (which evolved into the PRINCE methodology) with IBM's MITP (managing the implementation of the total project) methodology. PRINCE2 provides a method for managing projects within a clearly defined framework. PRINCE2 describes procedures to coordinate people and activities in a project, how to design and supervise the project, and what to do if the project does not develop as planned and has to be adjusted.
The PRINCE2 process model
In the method, each process is specified with its key inputs and outputs and with specific goals and activities to be carried out. This allows for automatic control of any deviations from the plan. Divided into manageable stages, the method enables an efficient control of resources. On the basis of close monitoring, the project can be carried out in a controlled and organized way.
PRINCE2 provides a common language for all participants in the project. The various management roles and responsibilities involved in a project are fully described and are adaptable to suit the complexity of the project and skills of the organization.
PRiSM (Projects integrating Sustainable Methods)
PRiSM[20] is a process-based, structured project management methodology that introduces areas of sustainability and integrates them into four core project phases in order to maximize opportunities to improve sustainability and the use of finite resources. The methodology encompasses the management, control and organization of a project, with consideration and emphasis beyond the project life-cycle and on the five aspects of sustainability: people, planet, profit, process and product. It derives its framework from ISO 21500 as well as ISO 14001, ISO 26000, and ISO 9001. PRiSM is also used to refer to the training and accreditation of authorized practitioners of the methodology, who must undertake accredited qualifications based on competency to obtain the GPM certification.[21]
Critical chain project management
Critical chain project management (CCPM) is a method of planning and managing project execution designed to deal with uncertainties inherent in managing projects, while taking into consideration the limited availability of resources (physical, human skills, as well as management and support capacity) needed to execute projects. CCPM is an application of the theory of constraints (TOC) to projects. The goal is to increase the flow of projects in an organization (throughput). Applying the first three of the five focusing steps of TOC, the system constraint for all projects is identified, as are the resources. To exploit the constraint, tasks on the critical chain are given priority over all other activities. Finally, projects are planned and managed to ensure that the resources are ready when the critical chain tasks must start, subordinating all other resources to the critical chain.
The project plan should typically undergo resource leveling, and the longest sequence of resource-constrained tasks should be identified as the critical chain. In some cases, such as managing contracted sub-projects, it is advisable to use a simplified approach without resource leveling.
In multi-project environments, resource leveling should be performed across projects. However, it is often enough to identify (or simply select) a single "drum". The drum can be a resource that acts as a constraint across projects, which are staggered based on the availability of that single resource. One can also use a "virtual drum" by selecting a task or group of tasks (typically integration points) and limiting the number of projects in execution at that stage.
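Critical chain planning starts from the same dependency arithmetic as the critical path method before layering on resource leveling and buffers. A minimal sketch of the forward pass on a toy task network follows; the durations and dependencies are invented, and resource leveling is omitted.

# Each task: (duration in days, list of predecessors).
tasks = {
    "spec":    (3, []),
    "build":   (5, ["spec"]),
    "test":    (2, ["build"]),
    "docs":    (2, ["spec"]),
    "release": (1, ["test", "docs"]),
}

earliest_finish: dict[str, int] = {}

def finish(name: str) -> int:
    # Earliest finish = own duration plus the latest predecessor finish.
    if name not in earliest_finish:
        duration, preds = tasks[name]
        earliest_finish[name] = duration + max(
            (finish(p) for p in preds), default=0)
    return earliest_finish[name]

# Project duration is the longest chain: spec -> build -> test -> release.
print(max(finish(t) for t in tasks))  # 11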
Event chain methodology
Event chain methodology is another method that complements the critical path method and critical chain project management methodologies. Event chain methodology is an uncertainty modeling and schedule network analysis technique that is focused on identifying and managing events and event chains that affect project schedules. Event chain methodology helps to mitigate the negative impact of psychological heuristics and biases, and allows for easy modeling of uncertainties in project schedules. Event chain methodology is based on the following principles (a quantitative sketch follows this list):
• Probabilistic moment of risk: An activity (task) in most real-life processes is not a continuous uniform process. Tasks are affected by external events, which can occur at some point in the middle of the task.
• Event chains: Events can cause other events, which will create event chains. These event chains can significantly affect the course of the project. Quantitative analysis is used to determine the cumulative effect of these event chains on the project schedule.
• Critical events or event chains: The single events or event chains with the most potential to affect the project are the "critical events" or "critical chains of events". They can be determined by the analysis.
• Project tracking with events: Even if a project is partially completed and data about the project duration, cost, and events that have occurred are available, it is still possible to refine information about future potential events, which helps to forecast future project performance.
• Event chain visualization: Events and event chains can be visualized using event chain diagrams on a Gantt chart.
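The quantitative analysis called for above is typically a Monte Carlo simulation in which risk events extend task durations. A minimal sketch with invented probabilities and durations, for two sequential tasks:

import random

def simulate_task(base_days: float, event_prob: float, event_delay: float) -> float:
    # A risk event may occur during the task and extend it.
    return base_days + (event_delay if random.random() < event_prob else 0.0)

def mean_project_duration(n_runs: int = 10_000) -> float:
    total = 0.0
    for _ in range(n_runs):
        design = simulate_task(10, 0.30, 5)  # 30% chance of +5 days
        build = simulate_task(20, 0.10, 15)  # rarer but more severe event
        total += design + build              # tasks run sequentially
    return total / n_runs

# The expected duration exceeds the deterministic 30-day plan (about 33 days).
print(round(mean_project_duration(), 1))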
Process-based management Also furthering the concept of project control is the incorporation of process-based management. This area has been driven by the use of maturity models such as CMMI (Capability Maturity Model Integration) and ISO/IEC 15504 (SPICE – Software Process Improvement and Capability Determination).
Agile project management Agile project management approaches based on the principles of human interaction management are founded on a process view of human collaboration. It is "most typically used in software, website, technology, creative and marketing industries."[22] This contrasts sharply with the traditional approach. In the agile software development or flexible product development approach, the project is seen as a series of relatively small tasks conceived and executed as the situation demands in an adaptive manner, rather than as a completely pre-planned process.
The iteration cycle in agile project management
Lean project management Lean project management uses principles from lean manufacturing to focus on delivering value with less waste.
Extreme project management In critical studies of project management it has been noted that several PERT-based models are not well suited for the multi-project company environment of today.[citation needed] Most of them are aimed at very large-scale, one-time, non-routine projects, yet currently all kinds of management activity are expressed in terms of projects. Using complex models for "projects" (or rather "tasks") spanning a few weeks has been shown to cause unnecessary costs and low maneuverability in several cases.[citation needed] Instead, project management experts try to identify different "lightweight" models, such as Extreme Programming and Scrum.
Planning and feedback loops in Extreme programming (XP) with the time frames of the multiple loops.
The generalization of Extreme Programming to other kinds of projects is extreme project management, which may be used in combination with the process modeling and management principles of human interaction management.
Benefits realisation management Benefits realization management (BRM) enhances normal project management techniques through a focus on agreeing which outcomes should change (the benefits) during the project, and then measuring whether that is happening, to help keep the project on track. This can help reduce the risk of a completed project being a failure: instead of attempting merely to deliver agreed requirements, the aim is to deliver the benefit of those requirements. An example of delivering a project to requirements would be agreeing to deliver a computer system to process staff data, with requirements to manage payroll, holiday and personnel records. Under BRM, the agreement would instead be to use the supplier's suggested staff data system to achieve an agreed reduction in staff hours spent processing and maintaining staff data (the benefit: reduced HR headcount).
Processes Traditionally, project management includes a number of elements: four to five process groups, and a control system. Regardless of the methodology or terminology used, the same basic project management processes will be used. Major process groups generally include:[5]
• initiation
• planning or design
• production or execution
• monitoring and controlling
• closing
In project environments with a significant exploratory element (e.g., research and development), these stages may be supplemented with decision points (go/no go decisions) at which the project's continuation is debated and decided. An example is the Phase–gate model.
The project development stages
Initiating The initiating processes determine the nature and scope of the project.[23] If this stage is not performed well, it is unlikely that the project will be successful in meeting the business' needs. The key project controls needed here are an understanding of the business environment and making sure that all necessary controls are incorporated into the project. Any deficiencies should be reported and a recommendation should be made to fix them. The initiating stage should include a plan that encompasses the following areas:
• analyzing the business needs/requirements in measurable goals
• reviewing the current operations
• financial analysis of the costs and benefits, including a budget
• stakeholder analysis, including users and support personnel for the project
• project charter including costs, tasks, deliverables, and schedule
Planning and design After the initiation stage, the project is planned to an appropriate level of detail. The main purpose is to plan time, cost and resources adequately to estimate the work needed and to effectively manage risk during project execution. As with the Initiation process group, a failure to adequately plan greatly reduces the project's chances of successfully accomplishing its goals. Project planning generally consists of:[24]
• determining how to plan (e.g. by level of detail or rolling wave);
• developing the scope statement;
• selecting the planning team;
• identifying deliverables and creating the work breakdown structure;
• identifying the activities needed to complete those deliverables and networking the activities in their logical sequence;
• estimating the resource requirements for the activities;
• estimating time and cost for activities;
• developing the schedule;
• developing the budget;
• risk planning;
• gaining formal approval to begin work.
Additional processes, such as planning for communications and for scope management, identifying roles and responsibilities, determining what to purchase for the project and holding a kick-off meeting are also generally advisable. For new product development projects, conceptual design of the operation of the final product may be performed concurrently with the project planning activities, and may help to inform the planning team when identifying deliverables and planning activities.
Executing Executing consists of the processes used to complete the work defined in the project plan to accomplish the project's requirements. The executing process involves coordinating people and resources, as well as integrating and performing the activities of the project in accordance with the project management plan. The deliverables are produced as outputs from the processes performed as defined in the project management plan and other frameworks that might be applicable to the type of project at hand. The executing process group includes:
• Direct and Manage Project Execution
• Quality Assurance of deliverables
• Acquire, Develop and Manage Project Team
• Distribute Information
• Manage Stakeholder Expectations
• Conduct Procurement
Monitoring and controlling Monitoring and controlling consists of those processes performed to observe project execution so that potential problems can be identified in a timely manner and corrective action can be taken, when necessary, to control the execution of the project. The key benefit is that project performance is observed and measured regularly to identify variances from the project management plan. Monitoring and controlling includes:[25]
• Measuring the ongoing project activities ('where we are');
• Monitoring the project variables (cost, effort, scope, etc.) against the project management plan and the project performance baseline ('where we should be');
• Identifying corrective actions to address issues and risks properly ('how can we get on track again');
• Influencing the factors that could circumvent integrated change control, so that only approved changes are implemented.
In multi-phase projects, the monitoring and control process also provides feedback between project phases, in order to implement corrective or preventive actions to bring the project into compliance with the project management plan. Project maintenance is an ongoing process, and it includes:[5]
• Continuing support of end-users
• Correction of errors
• Updates of the software over time
In this stage, auditors should pay attention to how effectively and quickly user problems are resolved. Over the course of any construction project, the work scope may change. Change is a normal and expected part of the construction process. Changes can be the result of necessary design modifications, differing site conditions, material availability, contractor-requested changes, value engineering and impacts from third parties, to name a few. Beyond executing the change in the field, the change normally needs to be documented to show what was actually constructed. This is referred to as change management. Hence, the owner usually requires a final record to show all changes or, more specifically, any change that modifies the tangible portions of the finished work. The record is made on the contract documents – usually, but not necessarily limited to, the design drawings. The end product of this effort is what the industry terms as-built drawings, or more simply, "as built." The requirement for providing them is a norm in construction contracts.
When changes are introduced to the project, the viability of the project has to be re-assessed. It is important not to lose sight of the initial goals and targets of the project. When the changes accumulate, the forecasted result may no longer justify the original proposed investment in the project.
Closing Closing includes the formal acceptance of the project and the ending thereof. Administrative activities include the archiving of the files and documenting lessons learned. This phase consists of:[5]
• Project close: Finalize all activities across all of the process groups to formally close the project or a project phase.
• Contract closure: Complete and settle each contract (including the resolution of any open items) and close each contract applicable to the project or project phase.
Project controlling and project control systems Project controlling should be established as an independent function in project management. It implements verification and controlling functions during the processing of a project in order to reinforce the defined performance and formal goals.[26] The tasks of project controlling are also:
• the creation of infrastructure for the supply of the right information and its update
• the establishment of a way to communicate disparities of project parameters
• the development of project information technology based on an intranet, or the determination of a project key performance indicator (KPI) system
• divergence analyses and generation of proposals for potential project regulations[27]
• the establishment of methods to accomplish an appropriate project structure, project workflow organization, project control and governance
• creation of transparency among the project parameters[28]
Fulfillment and implementation of these tasks can be achieved by applying specific methods and instruments of project controlling. The following methods of project controlling can be applied:
• investment analysis
• cost–benefit analysis
• value benefit analysis
• expert surveys
• simulation calculations
• risk-profile analysis
• surcharge calculations
• milestone trend analysis
• cost trend analysis
• target/actual comparison[29]
Project control is that element of a project that keeps it on-track, on-time and within budget.[25] Project control begins early in the project with planning and ends late in the project with post-implementation review, with thorough involvement at each step in the process. Each project should be assessed for the appropriate level of control needed: too much control is too time-consuming; too little control is very risky. If project control is not implemented correctly, the cost to the business should be clarified in terms of errors, fixes, and additional audit fees.
Control systems are needed for cost, risk, quality, communication, time, change, procurement, and human resources. In addition, auditors should consider how important the projects are to the financial statements, how reliant the stakeholders are on controls, and how many controls exist. Auditors should review the development process and procedures for how they are implemented. The process of development and the quality of the final product may also be assessed if needed or requested. A business may want the auditing firm to be involved throughout the process to catch problems earlier on so that they can be fixed more easily. An auditor can serve as a controls consultant as part of the development team or as an independent auditor as part of an audit. Businesses sometimes use formal systems development processes. These help assure that systems are developed successfully. A formal process is more effective in creating strong controls, and auditors should review this process to confirm that it is well designed and is followed in practice. A good formal systems development plan outlines:
• A strategy to align development with the organization's broader objectives
• Standards for new systems
• Project management policies for timing and budgeting
• Procedures describing the process
• Evaluation of quality of change
Topics Project managers A project manager is a professional in the field of project management. Project managers can have the responsibility of the planning, execution, and closing of any project, typically relating to the construction industry, engineering, architecture, computing, and telecommunications. Many other fields, such as production engineering, design engineering, and heavy industry, have project managers. A project manager is the person accountable for accomplishing the stated project objectives. Key project management responsibilities include creating clear and attainable project objectives, building the project requirements, and managing the triple constraint for projects: cost, time, and scope. A project manager is often a client representative and has to determine and implement the exact needs of the client, based on knowledge of the firm they are representing. The ability to adapt to the various internal procedures of the contracting party, and to form close links with the nominated representatives, is essential in ensuring that the key issues of cost, time, quality and, above all, client satisfaction can be realized.
Project management triangle Like any human undertaking, projects need to be performed and delivered under certain constraints. Traditionally, these constraints have been listed as "scope," "time," and "cost". These are also referred to as the "project management triangle", where each side represents a constraint. One side of the triangle cannot be changed without affecting the others. A further refinement of the constraints separates product "quality" or "performance" from scope, and turns quality into a fourth constraint.
The time constraint refers to the amount of time available to complete a project. The cost constraint refers to the budgeted amount available for the project. The scope constraint refers to what must be done to produce the project's end result. These three constraints are often competing: increased scope typically means increased time and increased cost; a tight time constraint could mean increased costs and reduced scope; and a tight budget could mean increased time and reduced scope. The discipline of project management is about providing the tools and techniques that enable the project team (not just the project manager) to organize their work to meet these constraints.
Work breakdown structure The work breakdown structure (WBS) is a tree structure that shows a subdivision of the effort required to achieve an objective—for example a program, project, or contract. The WBS may be hardware-, product-, service-, or process-oriented (see an example in a NASA reporting structure (2001)).[30] A WBS can be developed by starting with the end objective and successively subdividing it into manageable components in terms of size, duration, and responsibility (e.g., systems, subsystems, components, tasks, sub-tasks, and work packages), which include all steps necessary to achieve the objective. The work breakdown structure provides a common framework for the natural development of the overall planning and control of a contract and is the basis for dividing work into definable increments from which the statement of work can be developed and technical, schedule, cost, and labor hour reporting can be established.[30]
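Because a WBS is a tree whose leaf work packages carry the estimates, rolling those estimates up to any level is a simple recursive sum. The sketch below is a minimal illustration of that rollup; the element names and hour figures are hypothetical.

from dataclasses import dataclass, field

@dataclass
class WBSElement:
    """One node of a work breakdown structure (names are illustrative)."""
    name: str
    hours: float = 0.0                     # estimate carried by leaf work packages
    children: list = field(default_factory=list)

    def total_hours(self) -> float:
        """Roll leaf estimates up the tree, level by level."""
        if not self.children:
            return self.hours
        return sum(child.total_hours() for child in self.children)

project = WBSElement("Project", children=[
    WBSElement("System A", children=[
        WBSElement("Work package A1", hours=120),
        WBSElement("Work package A2", hours=80),
    ]),
    WBSElement("System B", hours=200),
])
print(project.total_hours())  # 400 labor hours for the whole project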
Project management framework The program (investment) life cycle integrates the project management and system development life cycles with the activities directly associated with system deployment and operation. By design, system operation management and related activities occur after the project is complete and are not documented within this guide (see an example of an IT project management framework). For example, in the United States Department of Veterans Affairs (VA) the program management life cycle is depicted and described in the overall VA IT Project Management Framework to address the integration of OMB Exhibit 300 project (investment) management activities and the overall project budgeting process. The VA IT Project Management Framework diagram illustrates Milestone 4, which occurs following the deployment of a system and the closing of the project. The project closing phase activities at the VA continue through system deployment and into system operation, for the purpose of illustrating and describing the system activities the VA considers part of the project. The figure illustrates the actions and associated artifacts of the VA IT Project and Program Management process.
International standards There have been several attempts to develop project management standards, such as:
• Capability Maturity Model from the Software Engineering Institute.
• GAPPS, Global Alliance for Project Performance Standards – an open-source standard describing competencies for project and program managers.
• A Guide to the Project Management Body of Knowledge from the Project Management Institute (PMI).
• HERMES method, Swiss general project management method, selected for use in Luxembourg and international organizations.
• The ISO standards ISO 9000, a family of standards for quality management systems, and ISO 10006:2003, for quality management systems and guidelines for quality management in projects.
• PRINCE2, PRojects IN Controlled Environments.
• Association for Project Management Body of Knowledge.[31]
• Team Software Process (TSP) from the Software Engineering Institute.
• Total Cost Management Framework, AACE International's methodology for integrated portfolio, program and project management.
• V-Model, an original systems development method.
• The logical framework approach, which is popular in international development organizations.
• IAPPM, The International Association of Project & Program Management, guide to project auditing and rescuing troubled projects.
Project portfolio management An increasing number of organizations are using what is referred to as project portfolio management (PPM) as a means of selecting the right projects and then using project management techniques[32] as the means for delivering the outcomes in the form of benefits to the performing private or not-for-profit organization.
References
[1] Nokes, Sebastian (2007). The Definitive Guide to Project Management (2nd ed.). London: Financial Times / Prentice Hall. ISBN 978-0-273-71097-4.
[2] Paul C. Dinsmore et al. (2005). The Right Projects Done Right! John Wiley and Sons. ISBN 0-7879-7113-8. p. 35 and further.
[3] Lewis R. Ireland (2006). Project Management. McGraw-Hill Professional. ISBN 0-07-147160-X. p. 110.
[4] Joseph Phillips (2003). PMP Project Management Professional Study Guide. McGraw-Hill Professional. ISBN 0-07-223062-2. p. 354.
[5] PMI (2010). A Guide to the Project Management Body of Knowledge. p. 27–35.
[6] Dennis Lock (2007). Project Management (9th ed.). Gower Publishing, Ltd. ISBN 0-566-08772-3.
[7] Young-Hoon Kwak (2005). "A Brief History of Project Management". In: The Story of Managing Projects. Elias G. Carayannis et al. (eds). Greenwood Publishing Group. ISBN 1-56720-506-2.
[8] David I. Cleland, Roland Gareis (2006). Global Project Management Handbook. "Chapter 1: The evolution of project management". McGraw-Hill Professional. ISBN 0-07-146045-4.
[9] Martin Stevens (2002). Project Management Pathways. Association for Project Management. APM Publishing Limited. ISBN 1-903494-01-X. p. xxii.
[10] Edward R. Marsh (1975). "The Harmonogram of Karol Adamiecki". In: The Academy of Management Journal. Vol. 18, No. 2 (Jun. 1975), p. 358. (online: http://www.jstor.org/pss/255537)
[11] Morgen Witzel (2003). Fifty Key Figures in Management. Routledge. ISBN 0-415-36977-0. p. 96–101.
[12] David I. Cleland, Roland Gareis (2006). Global Project Management Handbook. McGraw-Hill Professional. ISBN 0-07-146045-4. p. 1–4 states: "It was in the 1950s when project management was formally recognized as a distinct contribution arising from the management discipline."
[13] Booz Allen Hamilton – History of Booz Allen 1950s (http://www.boozallen.com/about/history/history_5)
[14] Bjarne Kousholt (2007). Project Management – Theory and Practice. Nyt Teknisk Forlag. ISBN 87-571-2603-8. p. 59.
[15] ipma.ch (http://www.ipma.ch/publication/Pages/ICB-IPMACompetenceBaseline.aspx)
[16] F. L. Harrison, Dennis Lock (2004). Advanced Project Management: A Structured Approach. Gower Publishing, Ltd. ISBN 0-566-07822-8. p. 34.
[18] Winston W. Royce (1970). "Managing the Development of Large Software Systems" (http://www.cs.umd.edu/class/spring2003/cmsc838p/Process/waterfall.pdf). In: Technical Papers of Western Electronic Show and Convention (WesCon), August 25–28, 1970, Los Angeles, USA.
[19] OGC – PRINCE2 – Background (http://webarchive.nationalarchives.gov.uk/20110822131357/http://www.ogc.gov.uk/methods_prince_2__background.asp)
[20] http://greenprojectmanagement.org
[23] Peter Nathan, Gerald Everett Jones (2003). PMP Certification for Dummies. p. 63.
[25] James P. Lewis (2000). The Project Manager's Desk Reference: A Comprehensive Guide to Project Planning, Scheduling, Evaluation, and Systems. p. 185.
[26] Jörg Becker, Martin Kugeler, Michael Rosemann (2003). Process Management: A Guide for the Design of Business Processes. ISBN 978-3-540-43499-3. p. 27.
[27] Bernhard Schlagheck (2000). Objektorientierte Referenzmodelle für das Prozess- und Projektcontrolling. Grundlagen – Konstruktionen – Anwendungsmöglichkeiten. ISBN 978-3-8244-7162-1. p. 131.
[28] Josef E. Riedl (1990). Projekt-Controlling in Forschung und Entwicklung. ISBN 978-3-540-51963-8. p. 99.
[29] Steinle, Bruch, Lawa (1995). Projektmanagement. FAZ Verlagsbereich Wirtschaftsbücher. p. 136–143.
[30] NASA NPR 9501.2D (http://nodis3.gsfc.nasa.gov/displayDir.cfm?Internal_ID=N_PR_9501_002D_&page_name=Chp2&format=PDF). May 23, 2001.
[31] Body of Knowledge (5th ed.). Association for Project Management, 2006. ISBN 1-903494-13-3.
[32] Albert Hamilton (2004). Handbook of Project Management Procedures. TTL Publishing, Ltd. ISBN 0-7277-3258-7.
External links
• Guidelines for Managing Projects (http://www.berr.gov.uk/files/file40647.pdf) from the UK Department for Business, Enterprise and Regulatory Reform (BERR)
• Max Wideman's "Open Source" Comparative Glossary of Project Management Terms (http://www.maxwideman.com/)
• Open Source Project Management manual (http://www.projectmanagement-training.net/book/)
• What is Project Management? (http://project-management.com/what-is-project-management/) from project-management.com
Project planning Project planning is part of project management, which relates to the use of schedules such as Gantt charts to plan and subsequently report progress within the project environment.[1] Initially, the project scope is defined and the appropriate methods for completing the project are determined. Following this step, the durations for the various tasks necessary to complete the work are listed and grouped into a work breakdown structure. The logical dependencies between tasks are defined using an activity network diagram that enables identification of the critical path. Float or slack time in the schedule can be calculated using project management software.[2] Then the necessary resources can be estimated and costs for each activity can be allocated to each resource, giving the total project cost. At this stage, the project schedule may be optimized to achieve the appropriate balance between resource usage and project duration to comply with the project objectives. Once established and agreed, the project schedule becomes what is known as the baseline schedule. Progress will be measured against the baseline schedule throughout the life of the project. Analyzing progress compared to the baseline schedule is known as earned value management.[3] The inputs of the project planning phase include the project charter and the concept proposal. The outputs of the project planning phase include the project requirements, the project schedule, and the project management plan.[4]
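The earned value comparison mentioned above reduces to a handful of standard formulas. The sketch below computes them for one illustrative reporting period (the input figures are invented); negative variances signal a project behind schedule or over budget.

def earned_value_snapshot(pv: float, ev: float, ac: float) -> dict:
    """Standard earned value metrics for one reporting period.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value  (budgeted cost of work performed)
    ac: actual cost   (actual cost of work performed)
    """
    return {
        "schedule variance": ev - pv,  # negative -> behind the baseline schedule
        "cost variance": ev - ac,      # negative -> over budget
        "SPI": ev / pv,                # schedule performance index (< 1 is behind)
        "CPI": ev / ac,                # cost performance index (< 1 is over budget)
    }

print(earned_value_snapshot(pv=100.0, ev=90.0, ac=120.0))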
References
[4] Filicetti, John, Project Planning Overview (http://www.pmhut.com/project-management-process-phase-2-planning-overview), PM Hut (Last accessed 8 November 2009).
External links
• International Project Management Association (http://www.ipma.ch/)
• Association for Project Managers (UK) (http://www.apm.org.uk/)
• Prince2 site from OGC (UK Office of Government Commerce) (http://www.ogc.gov.uk/methods_prince_2.asp)
• Critical path web calculator (http://sporkforge.com/sched/critical_path.php)
Aggregate project plan
An aggregate project plan (APP) is the process of creating development goals and objectives and using these goals and objectives to improve productivity as well as development capabilities. The purpose of this process is generally to ensure that each project will accomplish its development goals and objectives. Projects can be differentiated into five types: breakthrough, platform, derivative, R&D, or partnered projects (such as projects performed with partners or allianced firms). This differentiation determines a project's development goals and objectives as well as the resources allocated to that project.[1] An aggregate project plan provides management with a categorized list of projects, which balances short- and long-term goals. This list assists management in making difficult decisions such as when to start projects and which projects should be cannibalized. Starting projects in a sequential manner according to the firm's strategy as well as the resources available will allow fewer projects to run simultaneously and improve productivity. Another benefit is the creation of an organizational form for each project type. This creates a focus on the generation of competence and builds the speed and productivity of individuals as well as the organization itself.[2]
Aggregate project planning process This is the process a firm undergoes to create an aggregate project plan:
1. Create a well-defined and easily understood strategy.
2. Relay the strategy with the aim of developing new products, processes, or services and improving the efficiency of current projects.
3. Establish clear definitions of each type of project: breakthrough, platform, derivative, R&D, or partnered projects.
4. List current projects and classify each by project type.
5. Eliminate projects that don't fit within a project type.
6. Estimate the average time and resources needed for each project type based on past experience.
7. Determine the desired mix of projects.
8. Identify existing resources and estimate the number of projects those resources can support (see the sketch after this list).
9. Decide which projects to pursue and eliminate the rest.
10. Allocate resources to remaining projects and work to improve development capabilities.[3][4]
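Steps 6–8 amount to a simple capacity check: estimate the average demand per project type, then test whether a candidate mix fits the available resources. The sketch below illustrates this with entirely hypothetical demand figures and units (e.g., full-time engineers).

# Hypothetical average resource demand per project type (step 6) and a
# capacity test for a candidate project mix (steps 7-8).
DEMAND = {"derivative": 2, "platform": 8, "breakthrough": 15, "r&d": 5, "partnered": 4}

def projects_supported(capacity: float, mix: dict) -> bool:
    """Check whether a desired mix of projects fits the available capacity."""
    needed = sum(DEMAND[kind] * count for kind, count in mix.items())
    return needed <= capacity

print(projects_supported(capacity=40, mix={"derivative": 5, "platform": 2, "r&d": 2}))
# 5*2 + 2*8 + 2*5 = 36 <= 40 -> True, so this mix fits the capacity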
Five categories of development projects Derivative projects Derivative projects can vary from additions or augmentations to existing products to simple price reductions over time. Improvement work on derivative projects falls into three categories: incremental product changes, incremental process changes, and incremental changes in both. Because these changes are minor, derivative projects require minimal effort and resources for both development and management. Examples: special-edition car paint and iPod hard drive size updates. Many organizations make derivative products, which can range from special editions of existing cars, adding nothing more than a special paint scheme and interior, to a new iPod with a larger hard drive.
Platform projects These projects are the next generation of products for the company. These are major changes from existing products/services or from the way the product/service is made or delivered, and they create a new 'platform' for future growth. These projects offer significant improvements in cost, quality, and performance. Platforms are created to meet the needs of a core group of customers by changing more than one aspect of a product or service, while derivatives normally change only one aspect. Examples: new car models, microprocessors. New car models involve major changes in a number of areas, including manufacturing and product changes, and so create a platform for the car company. A new microprocessor with changes in speed, size, and capabilities, built with a new process, likewise adds a new platform for the company.
Breakthrough projects These projects are the highest-risk and highest-reward category. They involve newer technology than a platform project. The use of this technology may be 'disruptive' to the rest of the industry and may create an entirely new product category that defines the industry. This may be a brand new technology or a significant change to existing projects. These projects often incorporate new and innovative manufacturing or servicing processes, and they should be given leeway to work outside of normal and existing operating techniques. Examples: fiber-optic data transfer, hybrid cars. Fiber-optic data transfer cables revolutionized the data transfer industry; this new technology was a breakthrough, with lines of dark fiber laid through many major cities. Hybrid cars were the first types of cars not to rely solely on fossil fuels.
Research & Development R&D projects involve the creation of knowledge about new materials and technologies that will later be used in commercial development. They are high-risk endeavors with a possibility of high returns. Research & development is important because it always occurs before product and process development. R&D projects also use the same resources as commercial development and will always compete for them. Every organization has different expectations for R&D projects due to their high possibility of failure. Example: 3D television. An example of an R&D project would be a television company attempting to develop a new 3D viewing system for consumers. This would require extensive research & development with a high up-front cost, a possible large return, and a large risk of failure.
Partnered projects These projects can fall under any of the other four categories. However, they are often overlooked when mapping the aggregate project plan. These projects are important, and their resource usage should be included in planning. Example: Pepsi and Starbucks. Starbucks has an agreement under which Pepsi bottles the beverages Starbucks sells through retail channels. This is a partnership that neither company can ignore, and its resources should be accounted for.[5][6]
References
Activity diagram
UML 1.x Activity diagram for a guided brainstorming process
UML diagrams
Structural UML diagrams: Class diagram • Component diagram • Composite structure diagram • Deployment diagram • Object diagram • Package diagram • Profile diagram
Behavioral UML diagrams: Activity diagram • Communication diagram • Interaction overview diagram • Sequence diagram • State diagram • Timing diagram • Use case diagram
Activity diagrams are graphical representations of workflows of stepwise activities and actions[1] with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control. Activity diagrams are constructed from a limited number of shapes, connected with arrows. The most important shape types are:
• rounded rectangles represent actions;
• diamonds represent decisions;
• bars represent the start (split) or end (join) of concurrent activities;
• a black circle represents the start (initial state) of the workflow;
• an encircled black circle represents the end (final state).
Arrows run from the start towards the end and represent the order in which activities happen. Hence they can be regarded as a form of flowchart. Typical flowchart techniques lack constructs for expressing concurrency. However, the join and split symbols in activity diagrams only resolve this for simple cases; the meaning of the model is not clear when they are arbitrarily combined with decisions or loops. While in UML 1.x activity diagrams were a specialized form of state diagrams, in UML 2.x they were reformalized to be based on Petri net-like semantics, increasing the scope of situations that can be modeled using activity diagrams. These changes cause many UML 1.x activity diagrams to be interpreted differently in UML 2.x.
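Because an activity diagram is just nodes and directed edges, a small one can be described in a few lines of Graphviz DOT text. The sketch below is one possible rendering of the shape conventions listed above; Graphviz shape names only approximate UML notation, and the activity names are invented.

def activity_diagram_dot() -> str:
    """Emit Graphviz DOT for a tiny activity diagram using the shapes above."""
    return """digraph activity {
  start  [shape=circle, style=filled, fillcolor=black, label=""];
  act1   [shape=box, style=rounded, label="Review draft"];
  choice [shape=diamond, label="Approved?"];
  act2   [shape=box, style=rounded, label="Publish"];
  end    [shape=doublecircle, style=filled, fillcolor=black, label=""];
  start -> act1 -> choice;
  choice -> act2 [label="yes"];
  choice -> act1 [label="no"];
  act2 -> end;
}"""

print(activity_diagram_dot())  # render with: dot -Tpng -o activity.png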
References
[1] Glossary of Key Terms (http://highered.mcgraw-hill.com/sites/0077110005/student_view0/glossary.html) at McGraw-hill.com. Retrieved 20 July 2008.
External links • Articles on UML 2 Activities and Actions (http://conradbock.org/bockonline.html#UML2.0)
Critical path method
The critical path method (CPM) is an algorithm for scheduling a set of project activities.[1] It is an important tool for effective project management.
History The critical path method (CPM) is a project modeling technique developed in the late 1950s by Morgan R. Walker of DuPont and James E. Kelley, Jr. of Remington Rand.[2] Kelley and Walker related their memories of the development of CPM in 1989.[3] Kelley attributed the term "critical path" to the developers of the Program Evaluation and Review Technique, which was developed at about the same time by Booz Allen Hamilton and the U.S. Navy.[4] The precursors of what came to be known as critical path were developed and put into practice by DuPont between 1940 and 1943 and contributed to the success of the Manhattan Project.[5] CPM is commonly used with all forms of projects, including construction, aerospace and defense, software development, research projects, product development, engineering, and plant maintenance, among others. Any project with interdependent activities can apply this method of mathematical analysis. Although the original CPM program and approach is no longer used, the term is generally applied to any approach used to analyze a project network logic diagram.
PERT chart for a project with five milestones (10 through 50) and six activities (A through F). The project has two critical paths: activities B and C, or A, D, and F – giving a minimum project time of 7 months with fast tracking. Activity E is sub-critical, and has a float of 1 month.
Basic technique The essential technique for using CPM[6][7] is to construct a model of the project that includes the following:
1. A list of all activities required to complete the project (typically categorized within a work breakdown structure),
2. The time (duration) that each activity will take to complete, and
3. The dependencies between the activities.
Using these values, CPM calculates the longest path of planned activities to the end of the project, and the earliest and latest that each activity can start and finish without making the project longer. This process determines which activities are "critical" (i.e., on the longest path) and which have "total float" (i.e., can be delayed without making the project longer). In project management, a critical path is the sequence of project network activities which add up to the longest overall duration. This determines the shortest time possible to complete the project. Any delay of an activity on the critical path directly impacts the planned project completion date (i.e. there is no float on the critical path). A project can have several parallel, near-critical paths. An additional parallel path through the network with a total duration shorter than the critical path is called a sub-critical or non-critical path. Although the activity-on-arrow diagram ("PERT chart") is still used in a few places, it has generally been superseded by the activity-on-node diagram, where each activity is shown as a box or node and the arrows represent the logical relationships going from predecessor to successor, as shown here in the "Activity-on-node diagram".
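The forward pass at the heart of these calculations can be sketched in a few lines. The network below is invented for illustration (it is not the one in the accompanying figure): each activity's earliest start is the largest earliest finish among its predecessors, and the project's minimum duration is the largest earliest finish overall.

# Minimal sketch of the CPM forward pass over the three inputs listed above:
# an activity list, durations, and dependencies (all figures illustrative).
durations = {"a": 4, "b": 5, "c": 5, "d": 6, "e": 5}
preds = {"a": [], "b": [], "c": ["a"], "d": ["a"], "e": ["b", "c"]}

def forward_pass(durations, preds):
    es, ef = {}, {}
    remaining = dict(preds)
    while remaining:
        for act, ps in list(remaining.items()):
            if all(p in ef for p in ps):               # all predecessors scheduled
                es[act] = max((ef[p] for p in ps), default=0)
                ef[act] = es[act] + durations[act]
                del remaining[act]
    return es, ef

es, ef = forward_pass(durations, preds)
print(max(ef.values()))   # 14: the minimum possible project duration

A matching backward pass (latest finish = smallest latest start among successors) then yields the float of each activity; activities with zero float form the critical path.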
In this diagram, Activities A, B, C, D, and E comprise the critical or longest path, while Activities F, G, and H are off the critical path with floats of 10 days, 5 days, and 20 days respectively. Whereas activities that are off the critical path have float and are therefore not delaying completion of the project, those on the critical path will usually have critical path drag, i.e., they delay project completion. The drag of a critical path activity can be computed using the following formula:
1. If a critical path activity has nothing in parallel, its drag is equal to its duration. Thus A and E have drags of 10 days and 20 days respectively.
2. If a critical path activity has another activity in parallel, its drag is equal to whichever is less: its duration or the total float of the parallel activity with the least total float. Thus, since B and C are both parallel to F (float of 15) and H (float of 20), B has a duration of 20 and drag of 15 (equal to F's float), while C has a duration of only 5 days and thus drag of only 5. Activity D, with a duration of 10 days, is parallel to G (float of 5) and H (float of 20) and therefore its drag is equal to 5, the float of G.
These results, including the drag computations, allow managers to prioritize activities for the effective management of project completion, and to shorten the planned critical path of a project by pruning critical path activities, by "fast tracking" (i.e., performing more activities in parallel), and/or by "crashing the critical path" (i.e., shortening the durations of critical path activities by adding resources).
Activity-on-node diagram showing critical path schedule, along with total float and critical path drag computations
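The two drag rules above translate directly into a small helper. The figures in the usage lines below are the ones from the worked example in the text.

def drag(duration: float, parallel_floats: list) -> float:
    """Critical path drag per the two rules above: the activity's duration,
    capped by the smallest total float among activities parallel to it."""
    if not parallel_floats:            # rule 1: nothing in parallel
        return duration
    return min(duration, min(parallel_floats))   # rule 2

print(drag(10, []))        # A: 10 (nothing in parallel)
print(drag(20, [15, 20]))  # B: 15, capped by F's float
print(drag(5,  [15, 20]))  # C: 5, its own duration is the smaller value
print(drag(10, [5, 20]))   # D: 5, capped by G's float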
Crash duration "Crash duration" is a term referring to the shortest possible time for which an activity can be scheduled.[8] It is achieved by shifting more resources towards the completion of that activity, resulting in decreased time spent and often a reduced quality of work, as the premium is set on speed.[9] Crash duration is typically modeled as a linear relationship between cost and activity duration; however, in many cases a convex function or a step function is more applicable.[10]
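Under the linear model, the cost of compressing an activity is its normal cost plus a cost slope times the time saved. A minimal sketch, with invented figures:

def crash_cost(normal_time, normal_cost, crash_time, crash_cost_total, target_time):
    """Linear time-cost trade-off sketch: cost of compressing an activity
    from its normal duration toward its crash duration (figures illustrative)."""
    if not crash_time <= target_time <= normal_time:
        raise ValueError("target must lie between crash and normal duration")
    slope = (crash_cost_total - normal_cost) / (normal_time - crash_time)
    return normal_cost + slope * (normal_time - target_time)

# Compressing a 10-day, $1000 activity (crashable to 6 days at $1800) to 8 days:
print(crash_cost(10, 1000.0, 6, 1800.0, 8))  # 1400.0

A convex or step cost function would replace the constant slope with one that rises as the activity approaches its crash duration.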
Expansion Originally, the critical path method considered only logical dependencies between terminal elements. Since then, it has been expanded to allow for the inclusion of resources related to each activity, through processes called activity-based resource assignments and resource leveling. A resource-leveled schedule may include delays due to resource bottlenecks (i.e., unavailability of a resource at the required time), and may cause a previously shorter path to become the longest or most "resource critical" path. A related concept is called the critical chain, which attempts to protect activity and project durations from unforeseen delays due to resource constraints. Since project schedules change on a regular basis, CPM allows continuous monitoring of the schedule, allows the project manager to track the critical activities, and alerts the project manager to the possibility that non-critical activities may be delayed beyond their total float, thus creating a new critical path and delaying project completion. In addition, the method can easily incorporate the concepts of stochastic predictions, using the Program Evaluation and Review Technique (PERT) and event chain methodology. Currently, there are several software solutions available in industry that use the CPM method of scheduling, see list of project management software. The method currently used by most project management software is based on a manual calculation approach developed by Fondahl of Stanford University.
Flexibility A schedule generated using critical path techniques often is not realised precisely, as estimations are used to calculate times: if one mistake is made, the results of the analysis may change. This could cause an upset in the implementation of a project if the estimates are blindly believed, and if changes are not addressed promptly. However, the structure of critical path analysis is such that the variance from the original schedule caused by any change can be measured, and its impact either ameliorated or adjusted for. Indeed, an important element of project postmortem analysis is the As Built Critical Path (ABCP), which analyzes the specific causes and impacts of changes between the planned schedule and eventual schedule as actually implemented.
References
[6] Samuel L. Baker, Ph.D. "Critical Path Method (CPM)" (http://hspm.sph.sc.edu/COURSES/J716/CPM/CPM.html). University of South Carolina, Health Services Policy and Management Courses.
Further reading
• Project Management Institute (2008). A Guide To The Project Management Body Of Knowledge (4th ed.). Project Management Institute. ISBN 1-933890-51-7.
• Klastorin, Ted (2003). Project Management: Tools and Trade-offs (3rd ed.). Wiley. ISBN 978-0-471-41384-4.
• Heerkens, Gary (2001). Project Management (The Briefcase Book Series). McGraw-Hill. ISBN 0-07-137952-5.
• Harold Kerzner (2003). Project Management: A Systems Approach to Planning, Scheduling, and Controlling (8th ed.). ISBN 0-471-22577-0.
• Lewis, James (2002). Fundamentals of Project Management (2nd ed.). American Management Association. ISBN 0-8144-7132-3.
• Milosevic, Dragan Z. (2003). Project Management ToolBox: Tools and Techniques for the Practicing Project Manager. Wiley. ISBN 978-0-471-20822-8.
• O'Brien, James J.; Plotnick, Fredric L. (2010). CPM in Construction Management (7th ed.). McGraw Hill. ISBN 978-0-07-163664-3.
• Woolf, Murray B. (2012). CPM Mechanics: The Critical Path Method of Modeling Project Execution Strategy. ICS-Publications. ISBN 978-0-98-540910-6.
• Woolf, Murray B. (2007). Faster Construction Projects with CPM Scheduling. McGraw Hill. ISBN 978-0-07-148660-6.
• Trauner, Manginelli, Lowe, Nagata, Furniss (2009). Construction Delays (2nd ed.): Understanding Them Clearly, Analyzing Them Correctly (http://www.amazon.com/Construction-Delays-Second-Edition-Understanding/dp/1856176770). Burlington, MA: Elsevier. p. 266. ISBN 978-1-85617-677-4.
Program Evaluation and Review Technique
The Program (or Project) Evaluation and Review Technique, commonly abbreviated PERT, is a statistical tool, used in project management, that is designed to analyze and represent the tasks involved in completing a given project. First developed by the United States Navy in the 1950s, it is commonly used in conjunction with the critical path method (CPM).
History
PERT network chart for a seven-month project with five milestones (10 through 50) and six activities (A through F).
The Navy's Special Projects Office, charged with developing the Polaris-Submarine weapon system and the Fleet Ballistic Missile capability, has developed a statistical technique for measuring and forecasting progress in research and development programs. This Program Evaluation and Review Technique (code-named PERT) is applied as a decision-making tool designed to save time in achieving end-objectives, and is of particular interest to those engaged in research and development programs for which time is a critical factor. The new technique takes recognition of three factors that influence successful achievement of research and development program objectives: time, resources, and technical performance specifications. PERT employs time as the variable that reflects planned resource-applications and performance specifications. With units of time as a common denominator, PERT quantifies knowledge about the uncertainties involved in developmental programs requiring effort at the edge of, or beyond, current knowledge of the subject - effort for which little or no previous experience exists. Through an electronic computer, the PERT technique processes data representing the major, finite accomplishments (events) essential to achieve end-objectives; the inter-dependence of those events; and estimates of time and range of time necessary to complete each activity between two successive events. Such time expectations include estimates of "most likely time", "optimistic time", and "pessimistic time" for each activity. The technique is a management control tool that sizes up the outlook for meeting objectives on time; highlights danger signals requiring management decisions; reveals and defines both criticalness and slack in the flow plan or the network of sequential activities that must be performed to meet objectives; compares current expectations with scheduled completion dates and computes the probability for meeting scheduled dates; and simulates the effects of options for decision - before decision. The concept of PERT was developed by an operations research team staffed with representatives from the Operations Research Department of Booz, Allen and Hamilton; the Evaluation Office of the Lockheed Missile Systems Division; and the Program Evaluation Branch, Special Projects Office, of the Department of the Navy. — Willard Fazar (Head, Program Evaluation Branch, Special Projects Office, U. S. Navy), The American Statistician, April 1959.[1]
Overview PERT is a method to analyze the tasks involved in completing a given project, especially the time needed to complete each task, and to identify the minimum time needed to complete the total project. PERT was developed primarily to simplify the planning and scheduling of large and complex projects. It was developed for the U.S. Navy Special Projects Office in 1957 to support the U.S. Navy's Polaris nuclear submarine project.[2] It was able to incorporate uncertainty by making it possible to schedule a project while not knowing precisely the details and durations of all the activities. It is more of an event-oriented technique than a start- and completion-oriented one, and is used more in projects where time, rather than cost, is the major factor. It is applied to very large-scale, one-time, complex, non-routine infrastructure and research and development projects. An example of its use was the 1968 Winter Olympics in Grenoble, which applied PERT from 1965 until the opening of the 1968 Games.[3] This project model was the first of its kind, a revival for scientific management, founded by Frederick Taylor (Taylorism) and later refined by Henry Ford (Fordism). DuPont's critical path method was invented at roughly the same time as PERT.
Conventions
• A PERT chart is a tool that facilitates decision making. The first draft of a PERT chart will number its events sequentially in 10s (10, 20, 30, etc.) to allow the later insertion of additional events.
• Two consecutive events in a PERT chart are linked by activities, which are conventionally represented as arrows (see the diagram above).
• The events are presented in a logical sequence and no activity can commence until its immediately preceding event is completed.
• The planner decides which milestones should be PERT events and also decides their "proper" sequence.
• A PERT chart may have multiple pages with many sub-tasks.
PERT is valuable for managing projects in which multiple tasks occur simultaneously, reducing redundancy.
Terminology
• PERT event: a point that marks the start or completion of one or more activities. It consumes no time and uses no resources. When it marks the completion of one or more tasks, it is not "reached" (does not occur) until all of the activities leading to that event have been completed.
• predecessor event: an event that immediately precedes some other event without any other events intervening. An event can have multiple predecessor events and can be the predecessor of multiple events.
• successor event: an event that immediately follows some other event without any other intervening events. An event can have multiple successor events and can be the successor of multiple events.
• PERT activity: the actual performance of a task which consumes time and requires resources (such as labor, materials, space, machinery). It can be understood as representing the time, effort, and resources required to move from one event to another. A PERT activity cannot be performed until the predecessor event has occurred.
• optimistic time (O): the minimum possible time required to accomplish a task, assuming everything proceeds better than is normally expected.
• pessimistic time (P): the maximum possible time required to accomplish a task, assuming everything goes wrong (but excluding major catastrophes).
• most likely time (M): the best estimate of the time required to accomplish a task, assuming everything proceeds as normal.
• expected time (TE): the best estimate of the time required to accomplish a task, accounting for the fact that things don't always proceed as normal (the implication being that the expected time is the average time the task would require if the task were repeated on a number of occasions over an extended period of time). TE = (O + 4M + P) ÷ 6
• float or slack: a measure of the excess time and resources available to complete a task. It is the amount of time that a project task can be delayed without causing a delay in any subsequent tasks (free float) or the whole project (total float). Positive slack indicates the task is ahead of schedule; negative slack indicates it is behind schedule; and zero slack indicates it is on schedule.
• critical path: the longest possible continuous pathway from the initial event to the terminal event. It determines the total calendar time required for the project; any time delays along the critical path will therefore delay the reaching of the terminal event by at least the same amount.
• critical activity: an activity that has total float equal to zero. An activity with zero float is not necessarily on the critical path, since its path may not be the longest.
• lead time:[4] the time by which a predecessor event must be completed in order to allow sufficient time for the activities that must elapse before a specific PERT event reaches completion.
• lag time: the earliest time by which a successor event can follow a specific PERT event.
• fast tracking: performing more critical activities in parallel.
• crashing critical path: shortening the duration of critical activities.
Implementation The first step to scheduling the project is to determine the tasks that the project requires and the order in which they must be completed. The order may be easy to record for some tasks (e.g. When building a house, the land must be graded before the foundation can be laid) while difficult for others (There are two areas that need to be graded, but there are only enough bulldozers to do one). Additionally, the time estimates usually reflect the normal, non-rushed time. Many times, the time required to execute the task can be reduced for an additional cost or a reduction in the quality. In the following example there are seven tasks, labeled A through G. Some tasks can be done concurrently (A and B) while others cannot be done until their predecessor task is complete (C cannot begin until A is complete). Additionally, each task has three time estimates: the optimistic time estimate (O), the most likely or normal time estimate (M), and the pessimistic time estimate (P). The expected time (TE) is computed using the formula (O + 4M + P) ÷ 6. Activity Predecessor
Time estimates
Expected time
Opt. (O) Normal (M) Pess. (P) A
—
2
4
6
4.00
B
—
3
5
9
5.33
C
A
4
5
7
5.17
D
A
4
6
10
6.33
E
B, C
4
5
7
5.17
F
D
3
4
8
4.50
G
E
3
5
8
5.17
Once this step is complete, one can draw a Gantt chart or a network diagram.
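Before drawing, the expected times can be checked mechanically. The sketch below simply recomputes the TE column of the table above from its O, M, and P values.

# Recomputing the expected times in the table above with TE = (O + 4M + P) / 6.
tasks = {
    "A": (2, 4, 6), "B": (3, 5, 9), "C": (4, 5, 7), "D": (4, 6, 10),
    "E": (4, 5, 7), "F": (3, 4, 8), "G": (3, 5, 8),
}
for name, (o, m, p) in tasks.items():
    te = (o + 4 * m + p) / 6
    print(name, round(te, 2))   # matches the table: 4.00, 5.33, 5.17, ...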
A Gantt chart created using Microsoft Project (MSP). Note (1) the critical path is in red, (2) the slack is the black lines connected to non-critical activities, (3) since Saturday and Sunday are not work days and are thus excluded from the schedule, some bars on the Gantt chart are longer if they cut through a weekend.
A Gantt chart created using OmniPlan. Note (1) the critical path is highlighted, (2) the slack is not specifically indicated on task 5 (d), though it can be observed on tasks 3 and 7 (b and f), (3) since weekends are indicated by a thin vertical line, and take up no additional space on the work calendar, bars on the Gantt chart are not longer or shorter when they do or don't carry over a weekend.
A network diagram can be created by hand or by using diagram software. There are two types of network diagrams, activity on arrow (AOA) and activity on node (AON). Activity on node diagrams are generally easier to create and interpret. To create an AON diagram, it is recommended (but not required) to start with a node named start. This "activity" has a duration of zero (0). Then you draw each activity that does not have a predecessor activity (a and b in this example) and connect them with an arrow from start to each node. Next, since both c and d list a as a predecessor activity, their nodes are drawn with arrows coming from a. Activity e is listed with b and c as predecessor activities, so node e is drawn with arrows coming from both b and c, signifying that e cannot begin until both b and c have been completed. Activity f has d as a predecessor activity, so an arrow is drawn connecting the activities. Likewise, an arrow is drawn from e to g. Since there are no activities that come after f or g, it is recommended (but again not required) to connect them to a node labeled finish.
A network diagram created using Microsoft Project (MSP). Note the critical path is in red.
By itself, the network diagram pictured above does not give much more information than a Gantt chart; however, it can be expanded to display more information. The most common information shown is:
1. The activity name
2. The normal duration time
3. The early start time (ES)
4. The early finish time (EF)
5. The late start time (LS)
6. The late finish time (LF)
7. The slack
A node like this one (from Microsoft Visio) can be used to display the activity name, duration, ES, EF, LS, LF, and slack.
In order to determine this information it is assumed that the activities and normal duration times are given. The first step is to determine the ES and EF. The ES is defined as the maximum EF of all predecessor activities, unless the activity in question is the first activity, for which the ES is zero (0). The EF is the ES plus the task duration (EF = ES + duration).
• The ES for start is zero since it is the first activity. Since the duration is zero, the EF is also zero. This EF is used as the ES for a and b.
• The ES for a is zero. The duration (4 work days) is added to the ES to get an EF of four. This EF is used as the ES for c and d.
• The ES for b is zero. The duration (5.33 work days) is added to the ES to get an EF of 5.33.
• The ES for c is four. The duration (5.17 work days) is added to the ES to get an EF of 9.17.
• The ES for d is four. The duration (6.33 work days) is added to the ES to get an EF of 10.33. This EF is used as the ES for f.
• The ES for e is the greatest EF of its predecessor activities (b and c). Since b has an EF of 5.33 and c has an EF of 9.17, the ES of e is 9.17. The duration (5.17 work days) is added to the ES to get an EF of 14.34. This EF is used as the ES for g.
• The ES for f is 10.33. The duration (4.5 work days) is added to the ES to get an EF of 14.83.
• The ES for g is 14.34. The duration (5.17 work days) is added to the ES to get an EF of 19.51.
• The ES for finish is the greatest EF of its predecessor activities (f and g). Since f has an EF of 14.83 and g has an EF of 19.51, the ES of finish is 19.51. Finish is a milestone (and therefore has a duration of zero), so the EF is also 19.51.
Barring any unforeseen events, the project should take 19.51 work days to complete. The next step is to determine the late start (LS) and late finish (LF) of each activity. This will eventually show if there are activities that have slack. The LF is defined as the minimum LS of all successor activities, unless the activity is the last activity, for which the LF equals the EF. The LS is the LF minus the task duration (LS = LF − duration).
• The LF for finish is equal to the EF (19.51 work days) since it is the last activity in the project. Since the duration is zero, the LS is also 19.51 work days. This will be used as the LF for f and g.
• The LF for g is 19.51 work days. The duration (5.17 work days) is subtracted from the LF to get an LS of 14.34 work days. This will be used as the LF for e.
• The LF for f is 19.51 work days. The duration (4.5 work days) is subtracted from the LF to get an LS of 15.01 work days. This will be used as the LF for d.
• The LF for e is 14.34 work days. The duration (5.17 work days) is subtracted from the LF to get an LS of 9.17 work days. This will be used as the LF for b and c.
• The LF for d is 15.01 work days. The duration (6.33 work days) is subtracted from the LF to get an LS of 8.68 work days.
• The LF for c is 9.17 work days. The duration (5.17 work days) is subtracted from the LF to get an LS of 4 work days.
• The LF for b is 9.17 work days. The duration (5.33 work days) is subtracted from the LF to get an LS of 3.84 work days.
• The LF for a is the minimum LS of its successor activities. Since c has an LS of 4 work days and d has an LS of 8.68 work days, the LF for a is 4 work days. The duration (4 work days) is subtracted from the LF to get an LS of 0 work days.
• The LF for start is the minimum LS of its successor activities. Since a has an LS of 0 work days and b has an LS of 3.84 work days, the LS is 0 work days.
The next step is to determine the critical path and whether any activities have slack. The critical path is the path that takes the longest to complete. To determine the path times, add the task durations for all available paths. Activities that have slack can be delayed without changing the overall time of the project. Slack is computed in one of two ways: slack = LF − EF or slack = LS − ES. Activities that are on the critical path have a slack of zero (0).
• The duration of path adf is 14.83 work days.
• The duration of path aceg is 19.51 work days.
• The duration of path beg is 15.67 work days.
The critical path is aceg and the critical time is 19.51 work days. It is important to note that there can be more than one critical path (in a project more complex than this example) or that the critical path can change. For example, let's say that activities d and f take their pessimistic (b) times to complete instead of their expected (TE) times. The critical path is now adf and the critical time is 22 work days. On the other hand, if activity c can be reduced to one work day, the path time for aceg is reduced to 15.34 work days, which is slightly less than the time of the new critical path, beg (15.67 work days). Assuming these scenarios do not happen, the slack for each activity can now be determined.
• Start and finish are milestones and by definition have no duration, therefore they can have no slack (0 work days).
• The activities on the critical path by definition have a slack of zero; however, it is always a good idea to check the math anyway when drawing by hand.
  • LFa − EFa = 4 − 4 = 0
  • LFc − EFc = 9.17 − 9.17 = 0
  • LFe − EFe = 14.34 − 14.34 = 0
  • LFg − EFg = 19.51 − 19.51 = 0
• Activity b has an LF of 9.17 and an EF of 5.33, so the slack is 3.84 work days.
• Activity d has an LF of 15.01 and an EF of 10.33, so the slack is 4.68 work days.
• Activity f has an LF of 19.51 and an EF of 14.83, so the slack is 4.68 work days.
Therefore, activity b can be delayed almost 4 work days without delaying the project. Likewise, activity d or activity f can be delayed 4.68 work days without delaying the project (alternatively, d and f can be delayed 2.34 work days each).
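The forward pass, backward pass, and slack computation described above are mechanical enough to script directly. The following minimal Python sketch reproduces the walkthrough; the durations come from the text, while the dictionary-based encoding of the network is simply one illustrative way to represent the diagram:

# Forward/backward pass over the example activity network from the text.
durations = {"start": 0.0, "a": 4.0, "b": 5.33, "c": 5.17, "d": 6.33,
             "e": 5.17, "f": 4.5, "g": 5.17, "finish": 0.0}
preds = {"start": [], "a": ["start"], "b": ["start"], "c": ["a"],
         "d": ["a"], "e": ["b", "c"], "f": ["d"], "g": ["e"],
         "finish": ["f", "g"]}
order = ["start", "a", "b", "c", "d", "e", "f", "g", "finish"]  # topological order

succs = {t: [] for t in order}
for t, ps in preds.items():
    for p in ps:
        succs[p].append(t)

ES, EF, LS, LF = {}, {}, {}, {}
for t in order:                       # forward pass: ES = max EF of predecessors
    ES[t] = max((EF[p] for p in preds[t]), default=0.0)
    EF[t] = ES[t] + durations[t]
for t in reversed(order):             # backward pass: LF = min LS of successors
    LF[t] = min((LS[s] for s in succs[t]), default=EF[t])
    LS[t] = LF[t] - durations[t]

for t in order:
    slack = LF[t] - EF[t]             # zero slack => activity is on the critical path
    print(f"{t}: ES={ES[t]:.2f} EF={EF[t]:.2f} LS={LS[t]:.2f} "
          f"LF={LF[t]:.2f} slack={slack:.2f}")

Running this reproduces the numbers above: finish at 19.51 work days, slack 3.84 for b and 4.68 for d and f, and zero slack along aceg.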
Advantages
• A PERT chart explicitly defines and makes visible the dependencies (precedence relationships) between the work breakdown structure (commonly abbreviated WBS) elements
A completed network diagram created using Microsoft Visio. Note the critical path is in red.
• PERT facilitates identification of the critical path and makes this visible
• PERT facilitates identification of early start, late start, and slack for each activity
• PERT provides for potentially reduced project duration due to better understanding of dependencies, leading to improved overlapping of activities and tasks where feasible
• The large amount of project data can be organized and presented in diagram form for use in decision making
Disadvantages
• There can be potentially hundreds or thousands of activities and individual dependency relationships
• PERT is not easily scalable for smaller projects
• The network charts tend to be large and unwieldy, requiring several pages to print and requiring special size paper
• The lack of a timeframe on most PERT/CPM charts makes it harder to show status, although colours can help (e.g., a specific colour for completed nodes)
• When the PERT/CPM charts become unwieldy, they are no longer used to manage the project
Uncertainty in project scheduling
During project execution, however, a real-life project will never execute exactly as planned, due to uncertainty. This can be ambiguity resulting from subjective estimates that are prone to human error, or variability arising from unexpected events or risks. The main reason that PERT may provide inaccurate information about the project completion time is this schedule uncertainty, and the inaccuracy may be large enough to render such estimates unhelpful. One possibility to maximize solution robustness is to include safety in the baseline schedule in order to absorb the anticipated disruptions; this is called proactive scheduling. Pure proactive scheduling is a utopia: incorporating enough safety in a baseline schedule to cope with every possible disruption would lead to a baseline schedule with a very large make-span. A second approach, reactive scheduling, consists of defining a procedure to react to disruptions that cannot be absorbed by the baseline schedule.
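To see how duration uncertainty propagates to the completion date, one can resample activity durations and inspect the spread of outcomes. Below is a minimal Monte Carlo sketch along the critical path a-c-e-g; the optimistic/most-likely/pessimistic triples are illustrative assumptions, not values from the text, and sampling a single path understates risk because it ignores that other paths can become critical:

import numpy as np

rng = np.random.default_rng(0)
N = 10_000
# (optimistic, most likely, pessimistic) durations -- illustrative values only
omp = {"a": (2, 4, 6), "c": (3, 5, 9), "e": (3, 5, 9), "g": (3, 5, 9)}

# Triangular sampling is used here as a rough stand-in for the beta
# durations that PERT assumes.
total = sum(rng.triangular(o, m, p, N) for o, m, p in omp.values())
print("mean completion time:", total.mean())
print("95th percentile:", np.percentile(total, 95))

The gap between the mean and the upper percentiles is one simple way to quantify how much safety a proactive baseline schedule would need to absorb.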
Further reading
• Project Management Institute (2003). A Guide To The Project Management Body Of Knowledge (3rd ed.). Project Management Institute. ISBN 1-930699-45-X.
• Klastorin, Ted (2003). Project Management: Tools and Trade-offs (3rd ed.). Wiley. ISBN 978-0-471-41384-4.
• Kerzner, Harold (2003). Project Management: A Systems Approach to Planning, Scheduling, and Controlling (8th ed.). Wiley. ISBN 0-471-22577-0.
• Milosevic, Dragan Z. (2003). Project Management ToolBox: Tools and Techniques for the Practicing Project Manager. Wiley. ISBN 978-0-471-20822-8.
External links • More explanation of PERT (http://www.netmba.com/operations/project/pert)
Beta distribution
Beta
Probability density function
Cumulative distribution function
Notation: Beta(α, β)
Parameters: α > 0 shape (real), β > 0 shape (real)
Support: x ∈ [0, 1]
PDF: f(x; α, β) = x^(α−1) (1 − x)^(β−1) / B(α, β)
CDF: I_x(α, β) (the regularized incomplete beta function)
Mean: E[X] = α/(α + β) and E[ln X] = ψ(α) − ψ(α + β) (see digamma function and see section: Geometric mean)
Median: I^(−1)_(1/2)(α, β) (no general closed form); ≈ (α − 1/3)/(α + β − 2/3) for α, β > 1
Mode: (α − 1)/(α + β − 2) for α, β > 1
Variance: var[X] = αβ/((α + β)²(α + β + 1)) and var[ln X] = ψ₁(α) − ψ₁(α + β) (see trigamma function and see section: Geometric variance)
Skewness: 2(β − α)√(α + β + 1) / ((α + β + 2)√(αβ))
Ex. kurtosis: 6[(α − β)²(α + β + 1) − αβ(α + β + 2)] / (αβ(α + β + 2)(α + β + 3))
Entropy: ln B(α, β) − (α − 1)ψ(α) − (β − 1)ψ(β) + (α + β − 2)ψ(α + β)
MGF: ₁F₁(α; α + β; t) = 1 + Σ_{k=1}^∞ (∏_{r=0}^{k−1} (α + r)/(α + β + r)) t^k/k!
CF: ₁F₁(α; α + β; it) (see confluent hypergeometric function)
Fisher information: see section: Fisher information matrix
In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1], parametrized by two positive shape parameters, denoted by α and β, that appear as exponents of the random variable and control the shape of the distribution.
The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines. For example, it has been used as a statistical description of allele frequencies in population genetics;[] time allocation in project management / control systems;[] sunshine data;[] variability of soil properties;[] proportions of the minerals in rocks in stratigraphy;[1] and heterogeneity in the probability of HIV transmission.[2]
In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial and geometric distributions. For example, the beta distribution can be used in Bayesian analysis to describe initial knowledge concerning the probability of success, such as the probability that a space vehicle will successfully complete a specified mission. The beta distribution is a suitable model for the random behavior of percentages and proportions.
The usual formulation of the beta distribution is also known as the beta distribution of the first kind, whereas "beta distribution of the second kind" is an alternative name for the beta prime distribution.
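As a concrete illustration of this conjugacy: with a Beta(α, β) prior on a success probability and s successes observed in n Bernoulli trials, the posterior is again beta, Beta(α + s, β + n − s). A minimal sketch (the prior and data values are purely illustrative):

from scipy.stats import beta

alpha0, beta0 = 2, 2                 # illustrative prior pseudo-counts
s, n = 7, 10                         # observed successes and trials
posterior = beta(alpha0 + s, beta0 + (n - s))   # Beta(9, 5)
print(posterior.mean())              # posterior mean (alpha0+s)/(alpha0+beta0+n)
print(posterior.interval(0.95))      # central 95% credible interval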
Characterization
Probability density function
The probability density function of the beta distribution, for 0 ≤ x ≤ 1 and shape parameters α > 0 and β > 0, is a power function of the variable x and of its reflection (1 − x) as follows:
f(x; α, β) = Γ(α + β)/(Γ(α) Γ(β)) · x^(α−1) (1 − x)^(β−1) = x^(α−1) (1 − x)^(β−1) / B(α, β)
where Γ(z) is the gamma function. The beta function, B(α, β) = Γ(α)Γ(β)/Γ(α + β), appears as a normalization constant to ensure that the total probability integrates to unity. In the above equations x is a realization (an observed value that actually occurred) of a random process X. This definition includes both ends x = 0 and x = 1, which is consistent with definitions for other continuous distributions supported on a bounded interval which are special cases of the beta distribution, for example the arcsine distribution, and consistent with several authors, such as N. L. Johnson and S. Kotz.[][][][] However, several other authors, including W. Feller,[][][] choose to exclude the ends x = 0 and x = 1 (such that the two ends are not actually
part of the density function) and consider instead 0 < x < 1. Several authors, including N. L. Johnson and S. Kotz,[] use the symbols p and q (instead of α and β) for the shape parameters of the beta distribution, reminiscent of the symbols traditionally used for the parameters of the Bernoulli distribution, because the beta distribution approaches the Bernoulli distribution in the limit as both shape parameters α and β approach the value of zero. In the following, that a random variable X is beta-distributed with parameters α and β will be denoted by:[][]
X ~ Beta(α, β)
Other notations for beta-distributed random variables used in the statistical literature are X ~ Be(α, β)[] and X ~ β_(α,β).[]
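A direct transcription of this density into code, checked against scipy's implementation (a minimal sketch):

from math import gamma
from scipy.stats import beta

def beta_pdf(x, a, b):
    """Beta density: x**(a-1) * (1-x)**(b-1) / B(a, b)."""
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

print(beta_pdf(0.3, 2.0, 5.0))       # manual evaluation of the density
print(beta.pdf(0.3, 2.0, 5.0))       # same value from scipy.stats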
Cumulative distribution function
The cumulative distribution function is
F(x; α, β) = B(x; α, β) / B(α, β) = I_x(α, β)
where B(x; α, β) is the incomplete beta function and I_x(α, β) is the regularized incomplete beta function.
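In code the regularized incomplete beta function is available directly; note that scipy.special.betainc already includes the 1/B(α, β) normalization (a minimal sketch):

from scipy.special import betainc
from scipy.stats import beta

a, b, x = 2.0, 5.0, 0.3
print(betainc(a, b, x))      # regularized incomplete beta I_x(a, b)
print(beta.cdf(x, a, b))     # identical value via the distribution object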
Properties
Measures of central tendency
CDF for symmetric beta distribution vs. x and alpha=beta
Mode
The mode of a beta distributed random variable X with α, β > 1 is given by the following expression:[]
mode = (α − 1) / (α + β − 2)
When both parameters are less than one (α, β < 1), this is the anti-mode: the lowest point of the probability density curve.[]
CDF for skewed beta distribution vs. x and α, with β = 5
Letting α = β, the expression for the mode simplifies to 1/2, showing that for α = β > 1 the mode (resp. anti-mode when α, β < 1) is at the center of the distribution: it is symmetric in those cases. See the "Shapes" section in this article for a full list of mode cases, for arbitrary values of α and β. For several of these cases, the maximum value of the density function occurs at one or both ends. In some cases the (maximum) value of the density function occurring at the end is finite, for example in the case of α = 2, β = 1 (or α = 1, β = 2), the right-triangle distribution, while in several other cases there is a singularity at the end, where the value of the density function approaches infinity, for example in the case α = β = 1/2, the arcsine distribution. The choice whether to include or exclude the ends x = 0 and x = 1 as part of the density function, whether a singularity can be considered a mode, and whether cases with two maxima are to be considered bimodal, is responsible for some authors considering these maximum values at the ends of the density to be modes or not.[]
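A small numerical illustration of the mode expression (a sketch):

def beta_mode(a, b):
    """Mode of Beta(a, b) for a, b > 1; the anti-mode when a, b < 1."""
    return (a - 1) / (a + b - 2)

print(beta_mode(2, 3))    # 1/3: positively skewed case
print(beta_mode(6, 6))    # 0.5: symmetric case
print(beta_mode(0.5, 0.5))  # 0.5 is the anti-mode of the arcsine distribution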
Mode for Beta distribution for 1 ≤ α ≤ 5 and 1 ≤ β ≤ 5
Median
The median of the beta distribution is the unique real number m for which the regularized incomplete beta function satisfies I_m(α, β) = 1/2. There is no general closed-form expression for the median of the beta distribution for arbitrary values of α and β. Closed-form expressions for particular values of the parameters α and β follow:[citation needed]
• For symmetric cases α = β, median = 1/2.
• For α = 1 and β > 0, median = 1 − 2^(−1/β) (this case is the mirror-image of the power function [0,1] distribution)
• For α > 0 and β = 1, median = 2^(−1/α) (this case is the power function [0,1] distribution[])
• For α = 3 and β = 2, median = 0.6142724318676105..., the real [0,1] solution to the quartic equation 1 − 8x³ + 6x⁴ = 0.
• For α = 2 and β = 3, median = 0.38572756813238945... = 1 − median(Beta(3, 2))
Median for Beta distribution for 0 ≤ α ≤ 5 and 0 ≤ β ≤ 5
(Mean − Median) for Beta distribution versus alpha and beta from 0 to 2
The following are the limits with one parameter finite (non-zero) and the other approaching these limits:[citation needed]
lim_{α→0} median = 0, lim_{α→∞} median = 1 (with β finite)
lim_{β→0} median = 1, lim_{β→∞} median = 0 (with α finite)
A reasonable approximation of the value of the median of the beta distribution, for both α and β greater than or equal to one, is given by the formula[3]
median ≈ (α − 1/3) / (α + β − 2/3)  for α, β ≥ 1
For α ≥ 1 and β ≥ 1, the relative error (the absolute error divided by the median) in this approximation is less than 4%, and for both α ≥ 2 and β ≥ 2 it is less than 1%. The absolute error divided by the difference between the mean and the mode is similarly small.
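The quality of this approximation is easy to check numerically; scipy obtains the exact median by inverting the CDF (a minimal sketch):

from scipy.stats import beta

def median_approx(a, b):
    return (a - 1/3) / (a + b - 2/3)   # intended for a, b >= 1

for a, b in [(2, 2), (3, 2), (5, 1.5)]:
    exact = beta.median(a, b)          # exact median via the inverse CDF
    approx = median_approx(a, b)
    print(a, b, exact, approx, abs(approx - exact) / exact)

For α = 3, β = 2 this gives 8/13 ≈ 0.6154 against the exact 0.61427, a relative error of about 0.2%.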
Mean
The expected value (mean) μ of a beta distribution random variable X with two parameters α and β is a function of only the ratio β/α of these parameters:[]
μ = E[X] = α/(α + β) = 1/(1 + β/α)
Mean for Beta distribution for 0 ≤ α ≤ 5 and 0 ≤ β ≤ 5
Letting α = β in the above expression one obtains μ = 1/2, showing that for α = β the mean is at the center of the distribution: it is symmetric. Also, the following limits can be obtained from the above expression:
lim_{β/α → 0} μ = 1, lim_{β/α → ∞} μ = 0
Therefore, for β/α → 0, or for α/β → ∞, the mean is located at the right end, x = 1. For these limit ratios, the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the right end, x = 1, with probability 1, and zero probability everywhere else: there is 100% probability (absolute certainty) concentrated at the right end, x = 1. Similarly, for β/α → ∞, or for α/β → 0, the mean is located at the left end, x = 0, and the beta distribution becomes a one-point degenerate distribution with a Dirac delta function spike at the left end, x = 0, with probability 1, and zero probability everywhere else: there is 100% probability (absolute certainty) concentrated at the left end, x = 0. Following are the limits with one parameter finite (non-zero) and the other approaching these limits:
lim_{α→0} μ = 0, lim_{α→∞} μ = 1 (with β finite)
lim_{β→0} μ = 1, lim_{β→∞} μ = 0 (with α finite)
While for typical unimodal distributions (with centrally located modes, inflexion points at both sides of the mode, and longer tails), with Beta(α, β) such that α, β > 2, it is known that the sample mean (as an estimate of location) is not as robust as the sample median, the opposite is the case for uniform or "U-shaped" bimodal distributions, with Beta(α, β) such that α, β ≤ 1, with the modes located at the ends of the distribution. As Mosteller and Tukey remark ([] p. 207), "the average of the two extreme observations uses all the sample information. This illustrates how, for short-tailed distributions, the extreme observations should get more weight." By contrast, it follows that the median of "U-shaped" bimodal distributions with modes at the edges of the distribution (with Beta(α, β) such that α, β ≤ 1) is not robust, as the sample median drops the extreme sample observations from consideration. A practical application of this occurs for example for random walks, since the probability for the time of the last visit to the origin in a random walk is distributed as the arcsine distribution Beta(1/2, 1/2):[][] the mean of a number of realizations of a random walk is a much more robust estimator than the median (which is an inappropriate sample measure estimate in this case).
Geometric mean
The logarithm of the geometric mean G_X of a distribution with random variable X is the arithmetic mean of ln(X), or, equivalently, its expected value:
ln G_X = E[ln X]
For a beta distribution, the expected value integral gives:
ln G_X = E[ln X] = ψ(α) − ψ(α + β)
(Mean - GeometricMean) for Beta distribution versus α and β from 0 to 2, showing the asymmetry between α and β for the geometric mean
Geometric means for Beta distribution Purple = G(x), Yellow = G(1−x), smaller values alpha and beta in front
Geometric Means for Beta distribution Purple = G(x), Yellow = G(1−x), larger values alpha and beta in front
where ψ is the digamma function. Therefore the geometric mean of a beta distribution with shape parameters α and β is the exponential of the digamma functions of α and β as follows:
G_X = e^(E[ln X]) = e^(ψ(α) − ψ(α + β))
While for a beta distribution with equal shape parameters α = β it follows that skewness = 0 and mode = mean = median = 1/2, the geometric mean is less than 1/2: 0 < G_X < 1/2. The reason for this is that the logarithmic transformation strongly weights the values of X close to zero, as ln(X) strongly tends towards negative infinity as X approaches zero, while ln(X) flattens towards zero as X → 1. Along a line α = β, the following limits apply:
lim_{α=β→0} G_X = 0, lim_{α=β→∞} G_X = 1/2
Following are the limits with one parameter finite (non-zero) and the other approaching these limits:
lim_{α→0} G_X = 0, lim_{α→∞} G_X = 1 (with β finite)
lim_{β→0} G_X = 1, lim_{β→∞} G_X = 0 (with α finite)
The accompanying plot shows the difference between the mean and the geometric mean for shape parameters α and β from zero to 2. Besides the fact that the difference between them approaches zero as α and β approach infinity and that the difference becomes large for values of α and β approaching zero, one can observe an evident asymmetry of the geometric mean with respect to the shape parameters α and β. The difference between the geometric mean and
the mean is larger for small values of α in relation to β than when exchanging the magnitudes of β and α.
N. L. Johnson and S. Kotz[] suggest the logarithmic approximation to the digamma function ψ(α) ≈ ln(α − 1/2), which results in the following approximation to the geometric mean:
G_X ≈ (α − 1/2) / (α + β − 1/2)  for α, β > 1
Numerical values for the relative error in this approximation follow: [(α = β = 1): 9.39%]; [(α = β = 2): 1.29%]; [(α = 2, β = 3): 1.51%]; [(α = 3, β = 2): 0.44%]; [(α = β = 3): 0.51%]; [(α = β = 4): 0.26%]; [(α = 3, β = 4): 0.55%]; [(α = 4, β = 3): 0.24%].
Similarly, one can calculate the value of the shape parameters required for the geometric mean to equal 1/2. Say we know one of the parameters, β; what would be the value of the other parameter, α, required for the geometric mean to equal 1/2? The answer is that (for β > 1) the value of α required tends towards β + 1/2 as β → ∞. For example, all these couples have the same geometric mean of 1/2: [β = 1, α = 1.4427], [β = 2, α = 2.46958], [β = 3, α = 3.47943], [β = 4, α = 4.48449], [β = 5, α = 5.48756], [β = 10, α = 10.4938], [β = 100, α = 100.499].
The fundamental property of the geometric mean, which can be proven to be false for any other mean, is
G(X_i/Y_i) = G(X_i)/G(Y_i)
This makes the geometric mean the only correct mean when averaging normalized results, that is, results that are presented as ratios to reference values.[4] This is relevant because the beta distribution is a suitable model for the random behavior of percentages, and it is particularly suitable to the statistical modelling of proportions. The geometric mean plays a central role in maximum likelihood estimation, see section "Parameter estimation, maximum likelihood." Actually, when performing maximum likelihood estimation, besides the geometric mean G_X based on the random variable X, another geometric mean appears naturally: the geometric mean based on the linear transformation (1 − X), the mirror-image of X, denoted by G_(1−X):
ln G_(1−X) = E[ln(1 − X)] = ψ(β) − ψ(α + β)
Along a line α = β, the following limits apply:
lim_{α=β→0} G_(1−X) = 0, lim_{α=β→∞} G_(1−X) = 1/2
Following are the limits with one parameter finite (non-zero) and the other approaching these limits:
lim_{α→0} G_(1−X) = 1, lim_{α→∞} G_(1−X) = 0 (with β finite)
lim_{β→0} G_(1−X) = 0, lim_{β→∞} G_(1−X) = 1 (with α finite)
It has the following approximate value:
G_(1−X) ≈ (β − 1/2) / (α + β − 1/2)  for α, β > 1
Although both G_X and G_(1−X) are asymmetric, in the case that both shape parameters are equal, α = β, the geometric means are equal: G_X = G_(1−X). This equality follows from the following symmetry displayed between both geometric means:
G_X(Beta(α, β)) = G_(1−X)(Beta(β, α))
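Both geometric means, and the Johnson-Kotz approximation above, are one-liners with the digamma function (a minimal sketch):

import numpy as np
from scipy.special import digamma

def geometric_means(a, b):
    gx = np.exp(digamma(a) - digamma(a + b))      # G_X
    g1mx = np.exp(digamma(b) - digamma(a + b))    # G_(1-X)
    return gx, g1mx

a, b = 3.0, 2.0
gx, g1mx = geometric_means(a, b)
approx = (a - 0.5) / (a + b - 0.5)                # Johnson-Kotz approximation
print(gx, approx)                                 # exact vs approximate G_X
print(geometric_means(b, a)[1])                   # equals gx: the reflection symmetry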
Harmonic mean
The inverse of the harmonic mean (H_X) of a distribution with random variable X is the arithmetic mean of 1/X, or, equivalently, its expected value. Therefore, the harmonic mean (H_X) of a beta distribution with shape parameters α and β is:
H_X = (α − 1) / (α + β − 1)  for α > 1
The harmonic mean (H_X) of a beta distribution with α < 1 is undefined, because its defining expression is not bounded in [0, 1] for shape parameter α less than unity. Letting α = β in the above expression one obtains H_X = (α − 1)/(2α − 1), showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.
Harmonic mean for Beta distribution for 0 < α < 5 and 0 < β < 5
Mean, median and mode relationship
If 1 < α < β then mode ≤ median ≤ mean. Expressing the mode (only for α > 1 and β > 1) and the mean in terms of α and β:
(α − 1)/(α + β − 2) ≤ median ≤ α/(α + β)
If 1 < β < α then the order of the inequalities is reversed. For α > 1 and β > 1 the absolute distance between the mean and the median is less than 5% of the distance between the maximum and minimum values of x. On the other hand, the absolute distance between the mean and the mode can reach 50% of the distance between the maximum and minimum values of x, for the (pathological) case of α = 1 and β = 1 (for which values the beta distribution approaches the uniform distribution and the differential entropy approaches its maximum value, and hence maximum "disorder").
For example, for α = 1.0001 and β = 1.00000001:
• mode = 0.9999; PDF(mode) = 1.00010
• mean = 0.500025; PDF(mean) = 1.00003
• median = 0.500035; PDF(median) = 1.00003
• mean − mode = −0.499875
• mean − median = −9.65538 × 10⁻⁶
(where PDF stands for the value of the probability density function)
Mean, geometric mean and harmonic mean relationship
It is known from the inequality of arithmetic and geometric means that the geometric mean is lower than the mean. Similarly, the harmonic mean is lower than the geometric mean. The accompanying plot shows that for α = β, both the mean and the median are exactly equal to 1/2, regardless of the value of α = β, and the mode is also equal to 1/2 for α = β > 1; however, the geometric and harmonic means are lower than 1/2, and they only approach this value asymptotically as α = β → ∞.
Mean, Median, Geometric Mean and Harmonic Mean for Beta distribution with 0 < α = β < 5
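The ordering harmonic mean ≤ geometric mean ≤ mean can be verified numerically, both from the closed forms and from a simulated sample (a minimal sketch):

import numpy as np
from scipy.special import digamma

a, b = 4.0, 3.0
rng = np.random.default_rng(1)
x = rng.beta(a, b, 200_000)

mean = a / (a + b)
gmean = np.exp(digamma(a) - digamma(a + b))
hmean = (a - 1) / (a + b - 1)                 # defined for a > 1
print(hmean, gmean, mean)                     # prints in increasing order
# Sample-based versions of the same three means:
print(1 / np.mean(1 / x), np.exp(np.mean(np.log(x))), x.mean())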
Kurtosis bounded by the square of the skewness
Beta distribution α and β parameters vs. excess Kurtosis and squared Skewness
As remarked by Feller,[] in the Pearson system the beta probability density appears as type I (any difference between the beta distribution and Pearson's type I distribution is only superficial and makes no difference for the following discussion regarding the relationship between kurtosis and skewness). Karl Pearson showed, in Plate 1 of his paper[] published in 1916, a graph with the kurtosis as the vertical axis (ordinate) and the square of the skewness as the horizontal axis (abscissa), in which a number of distributions were displayed.[] The region occupied by the beta distribution is bounded by the following two lines in the (skewness², kurtosis) plane, or the (skewness², excess kurtosis) plane:
skewness² + 1 ≤ kurtosis ≤ (3/2) skewness² + 3
or, equivalently,
skewness² − 2 ≤ excess kurtosis ≤ (3/2) skewness²
At a time when there were no powerful digital computers, Karl Pearson accurately computed further boundaries,[][] for example, separating the "U-shaped" from the "J-shaped" distributions. The lower boundary line (excess kurtosis + 2 − skewness² = 0) is produced by skewed "U-shaped" beta distributions with both values of the shape parameters α and β close to zero. The upper boundary line (excess kurtosis − (3/2) skewness² = 0) is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. Karl Pearson showed[] that this upper boundary line (excess kurtosis − (3/2) skewness² = 0) is also the intersection with Pearson's distribution III, which has unlimited support in one direction (towards positive infinity), and can be bell-shaped or J-shaped. His son, Egon Pearson, showed[] that the region (in the kurtosis/squared-skewness plane) occupied by the beta distribution (equivalently, Pearson's distribution I) as it approaches this boundary (excess kurtosis − (3/2) skewness² = 0) is shared with the noncentral chi-squared distribution. Karl Pearson[] (Pearson 1895, pp. 357, 360, 373–376) also showed that the gamma distribution is a Pearson type III distribution. Hence this boundary line for Pearson's type III distribution is known as the gamma line. (This can be shown from the fact that the excess kurtosis of the gamma distribution is 6/k and the square of the skewness is 4/k, hence (excess kurtosis − (3/2) skewness² = 0) is identically satisfied by the gamma distribution regardless of the value of the parameter "k".) Pearson later noted that the chi-squared distribution is a special case of Pearson's type III and also shares this boundary line (as is apparent from the fact that for the chi-squared distribution the excess kurtosis is 12/k and the square of the skewness is 8/k, hence (excess kurtosis − (3/2) skewness² = 0) is identically satisfied regardless of the value of the parameter "k"). This is to be expected, since the chi-squared distribution X ~ χ²(k) is a special case of the gamma distribution, with parametrization X ~ Γ(k/2, 1/2), where k is a positive integer that specifies the "number of degrees of freedom" of the chi-squared distribution.
An example of a beta distribution near the upper boundary (excess kurtosis − (3/2) skewness² = 0) is given by α = 0.1, β = 1000, for which the ratio (excess kurtosis)/(skewness²) = 1.49835 approaches the upper limit of 1.5 from below. An example of a beta distribution near the lower boundary (excess kurtosis + 2 − skewness² = 0) is given by α = 0.0001, β = 0.1, for which values the expression (excess kurtosis + 2)/(skewness²) = 1.01621 approaches the lower limit of 1 from above. In the infinitesimal limit for both α and β approaching zero symmetrically, the excess kurtosis reaches its minimum value of −2. This minimum value occurs at the point at which the lower boundary line intersects the vertical axis (ordinate). (However, in Pearson's original chart, the ordinate is kurtosis, instead of excess kurtosis, and it increases downwards rather than upwards.)
Values for the skewness and excess kurtosis below the lower boundary (excess kurtosis + 2 − skewness² = 0) cannot occur for any distribution, and hence Karl Pearson appropriately called the region below this boundary the "impossible region." The boundary for this "impossible region" is determined by (symmetric or skewed) bimodal "U"-shaped distributions for which the parameters α and β approach zero and hence all the probability density is concentrated at the two ends, x = 0 and x = 1, with practically nothing in between them. Since for α ≈ β ≈ 0 the probability density is concentrated at the two ends x = 0 and x = 1, this "impossible boundary" is determined by a 2-point distribution: the probability can only take 2 values (Bernoulli distribution), one value with probability p and the other with probability q = 1 − p. For cases approaching this limit boundary with symmetry α = β, skewness ≈ 0, excess kurtosis ≈ −2 (this is the lowest excess kurtosis possible for any distribution), and the probabilities are p ≈ q ≈ 1/2. For cases approaching this limit boundary with skewness, excess kurtosis ≈ −2 + skewness², and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities q = 1 − p at the left end x = 0 and p at the right end x = 1.
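The two boundary examples quoted above can be reproduced directly from scipy's skewness and excess kurtosis (a minimal sketch):

from scipy.stats import beta

for a, b in [(0.1, 1000), (0.0001, 0.1)]:
    skew, ekurt = beta.stats(a, b, moments="sk")   # skewness, excess kurtosis
    print(ekurt / skew**2, (ekurt + 2) / skew**2)
# For (0.1, 1000) the first ratio approaches the upper bound 3/2 from below;
# for (0.0001, 0.1) the second ratio approaches the lower bound 1 from above.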
Symmetry
All statements are conditional on α, β > 0.
• Probability density function reflection symmetry: f(x; α, β) = f(1 − x; β, α)
• Cumulative distribution function reflection symmetry plus unitary translation: F(x; α, β) = 1 − F(1 − x; β, α)
• Mode reflection symmetry plus unitary translation: mode(Beta(α, β)) = 1 − mode(Beta(β, α)), for Beta(β, α) ≠ Beta(1, 1)
• Median reflection symmetry plus unitary translation: median(Beta(α, β)) = 1 − median(Beta(β, α))
• Mean reflection symmetry plus unitary translation: μ(Beta(α, β)) = 1 − μ(Beta(β, α))
• Geometric means: each is individually asymmetric; the following symmetry applies between the geometric mean based on X and the geometric mean based on its reflection (1 − X): G_X(Beta(α, β)) = G_(1−X)(Beta(β, α))
• Harmonic means: each is individually asymmetric; the following symmetry applies between the harmonic mean based on X and the harmonic mean based on its reflection (1 − X): H_X(Beta(α, β)) = H_(1−X)(Beta(β, α))
• Variance symmetry: var(Beta(α, β)) = var(Beta(β, α))
• Geometric variances: each is individually asymmetric; the following symmetry applies between the log geometric variance based on X and the log geometric variance based on its reflection (1 − X): ln var_GX(Beta(α, β)) = ln var_G(1−X)(Beta(β, α))
• Geometric covariance symmetry: ln cov_GX,G(1−X)(Beta(α, β)) = ln cov_GX,G(1−X)(Beta(β, α))
• Mean absolute deviation around the mean symmetry: E[|X − E[X]|](Beta(α, β)) = E[|X − E[X]|](Beta(β, α))
• Skewness skew-symmetry: skewness(Beta(α, β)) = −skewness(Beta(β, α))
• Excess kurtosis symmetry: excess kurtosis(Beta(α, β)) = excess kurtosis(Beta(β, α))
• Characteristic function symmetry of the real part (with respect to the origin of the variable t): Re[₁F₁(α; α + β; it)] = Re[₁F₁(α; α + β; −it)]
• Characteristic function skew-symmetry of the imaginary part (with respect to the origin of the variable t): Im[₁F₁(α; α + β; it)] = −Im[₁F₁(α; α + β; −it)]
• Characteristic function symmetry of the absolute value (with respect to the origin of the variable t): |₁F₁(α; α + β; it)| = |₁F₁(α; α + β; −it)|
• Differential entropy symmetry: h(Beta(α, β)) = h(Beta(β, α))
• Relative entropy (also called Kullback–Leibler divergence) symmetry: D_KL(Beta(α, β) || Beta(β, α)) = D_KL(Beta(β, α) || Beta(α, β))
• Fisher information matrix symmetry: I_{i,j} = I_{j,i}
Geometry of the probability density function
Inflection points
For certain values of the shape parameters α and β, the probability density function has inflection points, at which the curvature changes sign. The position of these inflection points can be useful as a measure of the dispersion or spread of the distribution. Defining the following quantity:
κ = √((α − 1)(β − 1)/(α + β − 3)) / (α + β − 2)
points of inflection occur,[][][][] depending on the value of the shape parameters α and β, as follows:
Inflection point location versus α and β, showing regions with one inflection point
• (α > 2, β > 2) The distribution is bell-shaped (symmetric for α = β and skewed otherwise), with two inflection points, equidistant from the mode: x = mode ± κ
• (α = 2, β > 2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode: x = mode + κ
Inflection point location versus α and β showing region with two inflection points
• (α > 2, β = 2) The distribution is unimodal, negatively skewed, left-tailed, with one inflection point, located to the left of the mode: x = mode − κ
• (1 < α < 2, β > 2) The distribution is unimodal, positively skewed, right-tailed, with one inflection point, located to the right of the mode: x = mode + κ
• (0 < α < 1, 1 < β < 2) The distribution has a mode at the left end x = 0 and is positively skewed, right-tailed. There is one inflection point, located to the right of the mode.
• (α > 2, 1 < β < 2) The distribution is unimodal, negatively skewed, left-tailed, with one inflection point, located to the left of the mode: x = mode − κ
• (1 < α < 2, 0 < β < 1) The distribution has a mode at the right end x = 1 and is negatively skewed, left-tailed. There is one inflection point, located to the left of the mode.
There are no inflection points in the remaining (symmetric and skewed) regions: U-shaped (α < 1, β < 1), upside-down-U-shaped (1 < α < 2, 1 < β < 2), reverse-J-shaped (α < 1, β > 2) or J-shaped (α > 2, β < 1).
The accompanying plots show the inflection point locations (shown vertically, ranging from 0 to 1) versus α and β (the horizontal axes ranging from 0 to 5). There are large cuts at surfaces intersecting the lines α = 1, β = 1, α = 2, and β = 2 because at these values the beta distribution changes from 2 modes, to 1 mode, to no mode.
Shapes
The beta density function can take a wide variety of different shapes depending on the values of the two parameters α and β. The ability of the beta distribution to take this great diversity of shapes (using only two parameters) is partly responsible for finding wide application for modeling actual measurements:
Symmetric (α = β)
• the density function is symmetric about 1/2 (blue & teal plots).
• median = mean = 1/2.
• skewness = 0.
• α = β > 1
  • symmetric unimodal
  • mode = 1/2.
  • 0 < var(X) < 1/12[]
  • −6/5 < excess kurtosis(X) < 0
PDF for skewed beta distribution vs. x and beta= 2.5 alpha from 0 to 9
• α = β = 3/2 is a semi-elliptic [0, 1] distribution, see: Wigner semicircle distribution
  • var(X) = 1/16.
  • excess kurtosis(X) = −1
• α = β = 2 is the parabolic [0, 1] distribution
  • var(X) = 1/20
  • excess kurtosis(X) = −6/7
• α = β > 2 is bell-shaped, with inflection points located to either side of the mode
  • 0 < var(X) < 1/20
  • −6/7 < excess kurtosis(X) < 0
PDF for skewed beta distribution vs. x, for β = 2.5, 5.5 and 8, with α ranging from 0 to 10
• α = β → ∞ is a one-point degenerate distribution with a Dirac delta function spike at the midpoint x = 1/2 with probability 1, and zero probability everywhere else. There is 100% probability (absolute certainty) concentrated at the single point x = 1/2.
  • The differential entropy approaches a minimum value of −∞
Skewed (α ≠ β)
The density function is skewed. An interchange of parameter values yields the mirror image (the reverse) of the initial curve. Some more specific cases:
• α < 1, β < 1
  • U-shaped
  • Positive skew for α < β, negative skew for α > β.
  • bimodal: left mode = 0, right mode = 1, anti-mode = (α − 1)/(α + β − 2)
  • 0 < median < 1.
  • 0 < var(X) < 1/4
• α > 1, β > 1
  • unimodal (magenta & cyan plots),
  • Positive skew for α < β, negative skew for α > β.
  • mode = (α − 1)/(α + β − 2)
  • 0 < median < 1
  • 0 < var(X) < 1/12
• α < 1, β ≥ 1
  • reverse J-shaped with a right tail,
  • positively skewed,
  • strictly decreasing, convex
  • mode = 0
  • 0 < median < 1/2.
  • (maximum variance occurs for α = (√5 − 1)/2, β = 1, or α = Φ, the golden ratio conjugate)
• α ≥ 1, β < 1
  • J-shaped with a left tail,
  • negatively skewed,
  • strictly increasing, convex
  • mode = 1
  • 1/2 < median < 1
  • (maximum variance occurs for β = (√5 − 1)/2, α = 1, or β = Φ, the golden ratio conjugate)
• α = 1, β > 1
  • positively skewed,
  • strictly decreasing (red plot),
  • a reversed (mirror-image) power function [0,1] distribution
  • mode = 0
• α = 1, 1 < β < 2
  • concave
  • 1 − 1/√2 < median < 1/2
  • 1/18 < var(X) < 1/12.
• α = 1, β = 2
  • a straight line with slope −2, the right-triangular distribution with right angle at the left end, at x = 0
  • median = 1 − 1/√2
  • var(X) = 1/18
• α = 1, β > 2
  • reverse J-shaped with a right tail,
  • convex
  • 0 < median < 1 − 1/√2
  • 0 < var(X) < 1/18
• α > 1, β = 1
  • negatively skewed,
  • strictly increasing (green plot),
  • the power function [0, 1] distribution[]
  • mode = 1
• 2 > α > 1, β = 1
  • concave
  • 1/2 < median < 1/√2
  • 1/18 < var(X) < 1/12
• α = 2, β = 1
  • a straight line with slope +2, the right-triangular distribution with right angle at the right end, at x = 1
  • median = 1/√2
  • var(X) = 1/18
• α > 2, β = 1
  • J-shaped with a left tail, convex
  • 1/√2 < median < 1
  • 0 < var(X) < 1/18
Parameter estimation
Method of moments
Two unknown parameters
Two unknown parameters (α̂, β̂, of a beta distribution supported in the [0, 1] interval) can be estimated, using the method of moments, with the first two moments (sample mean and sample variance) as follows. Let:
x̄ = (1/N) Σ_{i=1}^N X_i
be the sample mean estimate and
v̄ = (1/(N − 1)) Σ_{i=1}^N (X_i − x̄)²
be the sample variance estimate. The method-of-moments estimates of the parameters are
α̂ = x̄ (x̄(1 − x̄)/v̄ − 1), conditional on v̄ < x̄(1 − x̄)
β̂ = (1 − x̄)(x̄(1 − x̄)/v̄ − 1), conditional on v̄ < x̄(1 − x̄)
When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace x̄ with (ȳ − a)/(c − a) and v̄ with v̄_Y/(c − a)² in the above couple of equations for the shape parameters (see "Alternative parametrizations, four parameters" section below),[7] where:
ȳ = (1/N) Σ_{i=1}^N Y_i
v̄_Y = (1/(N − 1)) Σ_{i=1}^N (Y_i − ȳ)²
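A minimal method-of-moments sketch for the [0, 1] case; the data here are simulated purely for illustration:

import numpy as np

rng = np.random.default_rng(2)
x = rng.beta(2.0, 5.0, 10_000)           # pretend this is observed data

xbar = x.mean()
vbar = x.var(ddof=1)                      # sample variance
assert vbar < xbar * (1 - xbar)           # condition for valid estimates
common = xbar * (1 - xbar) / vbar - 1
alpha_hat = xbar * common
beta_hat = (1 - xbar) * common
print(alpha_hat, beta_hat)                # close to (2, 5)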
Four unknown parameters
All four parameters (α̂, β̂, â, ĉ, of a beta distribution supported in the [a, c] interval, see section "Alternative parametrizations, Four parameters") can be estimated, using the method of moments developed by Karl Pearson, by equating sample and population values of the first four central moments (mean, variance, skewness and excess kurtosis).[][][] The excess kurtosis was expressed in terms of the square of the skewness and the sample size ν = α + β (see previous section "Kurtosis") as follows:
excess kurtosis = (6/(3 + ν)) ((2 + ν)/4 · skewness² − 1), conditional on skewness² − 2 < excess kurtosis < (3/2) skewness²
Solutions for parameter estimates vs. (sample) excess Kurtosis and (sample) squared Skewness
One can use this equation to solve for the sample size ν = α + β in terms of the square of the skewness and the excess kurtosis as follows:[]
ν̂ = α̂ + β̂ = 3 (excess kurtosis − skewness² + 2) / ((3/2) skewness² − excess kurtosis)
This is the ratio (multiplied by a factor of 3) between the previously derived limit boundaries for the beta distribution in a space (as originally done by Karl Pearson[]) defined with coordinates of the square of the skewness in one axis and the excess kurtosis in the other axis (see previous section titled "Kurtosis bounded by the square of the skewness").
The case of zero skewness can be immediately solved, because for zero skewness α = β and hence ν = 2α = 2β, therefore α̂ = β̂ = ν̂/2:
α̂ = β̂ = ν̂/2 = (3/2)(excess kurtosis + 2)/(−excess kurtosis), conditional on −2 < excess kurtosis < 0
(Excess kurtosis is negative for the beta distribution with zero skewness, ranging from −2 to 0, so that ν̂ (and therefore the sample shape parameter estimates) is positive, ranging from zero when the shape parameters approach zero and the excess kurtosis approaches −2, to infinity when the shape parameters approach infinity and the excess kurtosis approaches zero.)
For non-zero sample skewness one needs to solve a system of two coupled equations. Since the skewness and the excess kurtosis are independent of the parameters â, ĉ, the parameters α̂, β̂ can be uniquely determined from the
sample skewness and the sample excess kurtosis, by solving the coupled equations with two known variables (sample skewness and sample excess kurtosis) and two unknowns (the shape parameters), resulting in the following solution:[]
α̂, β̂ = (ν̂/2) (1 ± 1/√(1 + 16(ν̂ + 1)/((ν̂ + 2)² skewness²))), conditional on skewness² − 2 < excess kurtosis < (3/2) skewness²
where one should take the solution with the minus sign for α̂ (and the plus sign for β̂) for (positive) sample skewness > 0, and the solution with the plus sign for α̂ (and the minus sign for β̂) for (negative) sample skewness < 0.
The accompanying plot shows these two solutions as surfaces in a space with horizontal axes of (sample excess kurtosis) and (sample squared skewness), and the shape parameters as the vertical axis. The surfaces are constrained by the condition that the sample excess kurtosis must be bounded by the sample squared skewness as stipulated in the above equation. The two surfaces meet at the right edge defined by zero skewness. Along this right edge, both parameters are equal and the distribution is symmetric: U-shaped for α = β < 1, uniform for α = β = 1, upside-down-U-shaped for 1 < α = β < 2 and bell-shaped for α = β > 2. The surfaces also meet at the front (lower) edge defined by the "impossible boundary" line (excess kurtosis + 2 − skewness² = 0). Along this front (lower) boundary both shape parameters approach zero, and the probability density is concentrated more at one end than the other end (with practically nothing in between), with probabilities p and q = 1 − p at the two ends x = 0 and x = 1. The two surfaces become further apart towards the rear edge. At this rear edge the surface parameters are quite different from each other. As remarked, for example, by Bowman and Shenton,[] sampling in the neighborhood of the line (sample excess kurtosis − (3/2)(sample skewness)² = 0) (the just-J-shaped portion of the rear edge where blue meets beige) "is dangerously near to chaos", because at that line the denominator of the expression above for the estimate ν̂ = α̂ + β̂ becomes zero, and hence ν̂ approaches infinity as that line is approached. Bowman and Shenton[] write that "the higher moment parameters (kurtosis and skewness) are extremely fragile (near that line). However the mean and standard deviation are fairly reliable." Therefore the problem arises for the case of four-parameter estimation for very skewed distributions such that the excess kurtosis approaches (3/2) times the square of the skewness. This boundary line is produced by extremely skewed distributions with very large values of one of the parameters and very small values of the other parameter. See the section titled "Kurtosis bounded by the square of the skewness" for a numerical example and further comments about this rear-edge boundary line (sample excess kurtosis − (3/2)(sample skewness)² = 0). As remarked by Karl Pearson himself,[] this issue may not be of much practical importance, as this trouble arises only for very skewed J-shaped (or mirror-image J-shaped) distributions with very different values of the shape parameters that are unlikely to occur much in practice. The usual skewed bell-shaped distributions that occur in practice do not have this parameter estimation problem.
The remaining two parameters (â, ĉ) can be determined using the sample mean and the sample variance using a variety of equations.[][] One alternative is to calculate the support interval range (ĉ − â) based on the sample variance and the sample kurtosis, by solving, in terms of the range (ĉ − â), the equation expressing the excess kurtosis in terms of the sample variance and the sample size ν (see sections titled "Kurtosis" and "Alternative parametrizations, four parameters"). Another alternative is to calculate the support interval range (ĉ − â) based on the sample variance and the sample skewness, by solving, in terms of the range (ĉ − â), the equation expressing the squared skewness in terms of the sample variance and the sample size ν (see sections titled "Skewness" and "Alternative parametrizations, four parameters").[]
The remaining parameter can be determined from the sample mean and the previously obtained parameters (ĉ − â), α̂, ν̂ = α̂ + β̂:
â = ȳ − (α̂/ν̂)(ĉ − â)
and finally, of course, ĉ = â + (ĉ − â).
In the above formulas one may take, for example, as estimates of the sample moments:
mean → ȳ = (1/N) Σ_{i=1}^N Y_i
variance → v̄_Y = (1/(N − 1)) Σ_{i=1}^N (Y_i − ȳ)²
skewness → G₁ = (√(N(N − 1))/(N − 2)) · m₃/m₂^(3/2)
excess kurtosis → G₂ = ((N − 1)/((N − 2)(N − 3))) ((N + 1)(m₄/m₂² − 3) + 6)
where m_k = (1/N) Σ_{i=1}^N (Y_i − ȳ)^k is the k-th sample central moment.
The estimators G1 for sample skewness and G2 for sample kurtosis are used by DAP/SAS, PSPP/SPSS, and Excel. However, they are not used by BMDP and (according to []) they were not used by MINITAB in 1998. Actually, Joanes and Gill in their 1998 study[] concluded that the skewness and kurtosis estimators used in BMDP and in MINITAB (at that time) had smaller variance and mean-squared error in normal samples, but the skewness and kurtosis estimators used in DAP/SAS, PSPP/SPSS, namely G1 and G2, had smaller mean-squared error in samples from a very skewed distribution. It is for this reason that we have spelled out "sample skewness", etc., in the above formulas, to make it explicit that the user should choose the best estimator according to the problem at hand, as the best estimator for skewness and kurtosis depends on the amount of skewness (as shown by Joanes and Gill[]).
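For reference, the adjusted estimators G1 and G2 discussed above correspond to what scipy returns when bias correction is requested (a minimal sketch; the data are simulated for illustration):

import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(3)
y = rng.beta(0.5, 2.0, 5_000)
g1 = skew(y, bias=False)                    # adjusted Fisher-Pearson G1
g2 = kurtosis(y, fisher=True, bias=False)   # adjusted excess kurtosis G2
print(g1, g2)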
Maximum likelihood
Two unknown parameters
As is also the case for maximum likelihood estimates for the gamma distribution, the maximum likelihood estimates for the beta distribution do not have a general closed-form solution for arbitrary values of the shape parameters. If X1, ..., XN are independent random variables each having a beta distribution, the joint log likelihood function for N iid observations is:
ln L(α, β | X) = (α − 1) Σ_{i=1}^N ln X_i + (β − 1) Σ_{i=1}^N ln(1 − X_i) − N ln B(α, β)
Max (Joint Log Likelihood/N) for Beta distribution: maxima at α = β = 2
Max (Joint Log Likelihood/N) for Beta distribution Maxima at alpha=beta= {0.25,0.5,1,2,4,6,8}
Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero, yielding the maximum likelihood estimator of the shape parameters:
∂ ln L(α, β | X)/∂α = Σ_{i=1}^N ln X_i − N(ψ(α) − ψ(α + β)) = 0
∂ ln L(α, β | X)/∂β = Σ_{i=1}^N ln(1 − X_i) − N(ψ(β) − ψ(α + β)) = 0
where:
∂ ln B(α, β)/∂α = ψ(α) − ψ(α + β)
∂ ln B(α, β)/∂β = ψ(β) − ψ(α + β)
since the digamma function, denoted ψ(α), is defined as the logarithmic derivative of the gamma function:[]
ψ(α) = d ln Γ(α)/dα
To ensure that the values with zero tangent slope are indeed a maximum (instead of a saddle-point or a minimum) one also has to satisfy the condition that the curvature is negative. This amounts to satisfying that the second partial derivative with respect to each shape parameter is negative:
∂² ln L/∂α² = −N(ψ₁(α) − ψ₁(α + β)) < 0
∂² ln L/∂β² = −N(ψ₁(β) − ψ₁(α + β)) < 0
where the trigamma function, denoted ψ₁(α), is the second of the polygamma functions, defined as the derivative of the digamma function: ψ₁(α) = dψ(α)/dα = d² ln Γ(α)/dα². These conditions are equivalent to stating that the variances of the logarithmically transformed variables are positive, since:
var[ln X] = E[(ln X)²] − (E[ln X])² = ψ₁(α) − ψ₁(α + β)
var[ln(1 − X)] = E[(ln(1 − X))²] − (E[ln(1 − X)])² = ψ₁(β) − ψ₁(α + β)
Therefore the condition of negative curvature at a maximum is equivalent to the statements:
var[ln X] > 0
var[ln(1 − X)] > 0
Alternatively, the condition of negative curvature at a maximum is also equivalent to stating that the following logarithmic derivatives of the geometric means G_X and G_(1−X) are positive, since:
∂ ln G_X/∂α = ψ₁(α) − ψ₁(α + β) > 0
∂ ln G_(1−X)/∂β = ψ₁(β) − ψ₁(α + β) > 0
While these slopes are indeed positive, the other slopes are negative:
∂ ln G_X/∂β = −ψ₁(α + β) < 0
∂ ln G_(1−X)/∂α = −ψ₁(α + β) < 0
The slopes of the mean and the median with respect to α and β display similar sign behavior. From the condition that at a maximum, the partial derivative with respect to the shape parameter equals zero, we obtain the following system of coupled maximum likelihood estimate equations (for the average log-likelihoods) that needs to be inverted to obtain the (unknown) shape parameter estimates α̂, β̂ in terms of the (known) average of logarithms of the samples X1, ..., XN:[]
ψ(α̂) − ψ(α̂ + β̂) = (1/N) Σ_{i=1}^N ln X_i = ln Ĝ_X
ψ(β̂) − ψ(α̂ + β̂) = (1/N) Σ_{i=1}^N ln(1 − X_i) = ln Ĝ_(1−X)
where we recognize ln Ĝ_X as the logarithm of the sample geometric mean and ln Ĝ_(1−X) as the logarithm of the sample geometric mean based on (1 − X), the mirror-image of X. For α̂ = β̂, it follows that Ĝ_X = Ĝ_(1−X).
These coupled equations containing digamma functions of the shape parameter estimates α̂, β̂ must be solved by numerical methods, as done, for example, by Beckman et al.[8] Gnanadesikan et al. give numerical solutions for a few cases.[9] N. L. Johnson and S. Kotz[] suggest that for "not too small" shape parameter estimates α̂, β̂, the logarithmic approximation to the digamma function ψ(α̂) ≈ ln(α̂ − 1/2) may be used to obtain initial values for an iterative solution, since the equations resulting from this approximation can be solved exactly:
ln((α̂ − 1/2)/(α̂ + β̂ − 1/2)) = ln Ĝ_X
ln((β̂ − 1/2)/(α̂ + β̂ − 1/2)) = ln Ĝ_(1−X)
which leads to the following solution for the initial values (of the estimate shape parameters in terms of the sample geometric means) for an iterative solution:
α̂ ≈ 1/2 + Ĝ_X / (2(1 − Ĝ_X − Ĝ_(1−X)))
β̂ ≈ 1/2 + Ĝ_(1−X) / (2(1 − Ĝ_X − Ĝ_(1−X)))
Alternatively, the estimates provided by the method of moments can instead be used as initial values for an iterative solution of the maximum likelihood coupled equations in terms of the digamma functions. When the distribution is required over a known interval other than [0, 1] with random variable X, say [a, c] with random variable Y, then replace ln(X_i) in the first equation with ln((Y_i − a)/(c − a)), and replace ln(1 − X_i) in the second equation with ln((c − Y_i)/(c − a)) (see "Alternative parametrizations, four parameters" section below).
If one of the shape parameters is known, the problem is considerably simplified. The following logit transformation can be used to solve for the unknown shape parameter (for skewed cases such that α̂ ≠ β̂; otherwise, if symmetric, both (equal) parameters are known when one is known):
E[ln(X/(1 − X))] = ψ(α) − ψ(β)
This logit transformation is the logarithm of the transformation that divides the variable X by its mirror-image, X/(1 − X), resulting in the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI) with support [0, +∞). As previously discussed in the section "Moments of logarithmically transformed random variables," the logit transformation ln(X/(1 − X)), studied by Johnson,[] extends the finite support [0, 1] based on the original variable X to infinite support in both directions of the real line (−∞, +∞). If, for example, β̂ is known, the unknown parameter α̂ can be obtained in terms of the inverse[] digamma function of the right-hand side of this equation:
ψ(α̂) = (1/N) Σ_{i=1}^N ln(X_i/(1 − X_i)) + ψ(β̂)
In particular, if one of the shape parameters has a value of unity, for example for β = 1 (the power function distribution with bounded support [0,1]), using the identity ψ(x + 1) = ψ(x) + 1/x in the equation ψ(α̂) − ψ(α̂ + β̂) = ln Ĝ_X, the maximum likelihood estimator for the unknown parameter α̂ is,[] exactly:
α̂ = −N / Σ_{i=1}^N ln X_i
The beta has support [0, 1], therefore 0 < X_i < 1, and hence ln X_i < 0 and Σ_{i=1}^N ln X_i < 0, and therefore α̂ > 0.
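The closed-form β = 1 case is a one-liner (a minimal sketch with simulated data):

import numpy as np

rng = np.random.default_rng(5)
x = rng.beta(3.0, 1.0, 5_000)           # power function distribution (beta = 1)
alpha_hat = -len(x) / np.log(x).sum()   # exact MLE for the remaining parameter
print(alpha_hat)                        # close to 3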
In conclusion, the maximum likelihood estimates of the shape parameters of a beta distribution are (in general) a complicated function of the sample geometric mean and of the sample geometric mean based on (1 − X), the mirror-image of X. One may ask: if the variance (in addition to the mean) is necessary to estimate two shape parameters with the method of moments, why is the (logarithmic or geometric) variance not necessary to estimate two shape parameters with the maximum likelihood method, for which only the geometric means suffice? The answer is that the mean does not provide as much information as the geometric mean. For a beta distribution with equal shape parameters α = β, the mean is exactly 1/2, regardless of the value of the shape parameters, and therefore regardless of the value of the statistical dispersion (the variance). On the other hand, the geometric mean of a beta distribution with equal shape parameters α = β depends on the value of the shape parameters, and therefore it contains more information. Also, the geometric mean of a beta distribution does not satisfy the symmetry conditions satisfied by the mean; therefore, by employing both the geometric mean based on X and the geometric mean based on (1 − X), the maximum likelihood method is able to provide best estimates for both parameters α = β, without the need of employing the variance. One can express the joint log likelihood per N iid observations in terms of the sufficient statistics (the sample geometric means) as follows:
(1/N) ln L(α, β | X) = (α − 1) ln Ĝ_X + (β − 1) ln Ĝ_(1−X) − ln B(α, β)
We can plot the joint log likelihood per N observations for fixed values of the sample geometric means to see the behavior of the likelihood function as a function of the shape parameters α and β. In such a plot, the shape parameter estimators correspond to the maxima of the likelihood function. See the accompanying graph that shows that all the likelihood functions intersect at α = β = 1, which corresponds to the values of the shape parameters that give the maximum entropy (the maximum entropy occurs for shape parameters equal to unity: the uniform distribution). It is evident from the plot that the likelihood function gives sharp peaks for values of the shape parameter estimators close to zero, but that for values of the shape parameter estimators greater than one, the likelihood function becomes quite flat, with less defined peaks. Obviously, the maximum likelihood parameter estimation method for the beta distribution becomes less acceptable for larger values of the shape parameter estimators, as the uncertainty in the peak definition increases with the value of the shape parameter estimators. One can arrive at the same conclusion by noticing that the expression for the curvature of the likelihood function is in terms of the geometric variances:
∂²((1/N) ln L)/∂α² = −var[ln X]
∂²((1/N) ln L)/∂β² = −var[ln(1 − X)]
These variances (and therefore the curvatures) are much larger for small values of the shape parameters α and β. However, for shape parameter values α > 1, β > 1, the variances (and therefore the curvatures) flatten out. Equivalently, this result follows from the Cramér–Rao bound, since the Fisher information matrix components for the beta distribution are these logarithmic variances. The Cramér–Rao bound states that the variance of any unbiased estimator α̂ of α is bounded by the reciprocal of the Fisher information:
var(α̂) ≥ 1/(N(ψ₁(α) − ψ₁(α + β)))
so the variance of the estimators increases with increasing α and β, as the logarithmic variances decrease. Also, one can express the joint log likelihood per N iid observations in terms of the digamma function expressions for the logarithms of the sample geometric means as follows:
(1/N) ln L(α, β | X) = (α − 1)(ψ(α̂) − ψ(α̂ + β̂)) + (β − 1)(ψ(β̂) − ψ(α̂ + β̂)) − ln B(α, β)
This expression is identical to the negative of the cross-entropy (see section on "Quantities of information (entropy)"). Therefore, finding the maximum of the joint log likelihood of the shape parameters, per N iid observations, is identical to finding the minimum of the cross-entropy for the beta distribution, as a function of the shape parameters, with the cross-entropy defined as follows:
H = −∫₀¹ f(X; α̂, β̂) ln f(X; α, β) dX
Four unknown parameters
The procedure is similar to the one followed in the two unknown parameter case. If Y1, ..., YN are independent random variables each having a beta distribution with four parameters, the joint log likelihood function for N iid observations is:
(1/N) ln L(α, β, a, c | Y) = (α − 1)(1/N) Σ_{i=1}^N ln((Y_i − a)/(c − a)) + (β − 1)(1/N) Σ_{i=1}^N ln((c − Y_i)/(c − a)) − ln B(α, β) − ln(c − a)
Finding the maximum with respect to a shape parameter involves taking the partial derivative with respect to the shape parameter and setting the expression equal to zero, yielding the maximum likelihood estimator of the shape parameters:
∂((1/N) ln L)/∂α = (1/N) Σ_{i=1}^N ln((Y_i − a)/(c − a)) − (ψ(α) − ψ(α + β)) = 0
∂((1/N) ln L)/∂β = (1/N) Σ_{i=1}^N ln((c − Y_i)/(c − a)) − (ψ(β) − ψ(α + β)) = 0
∂((1/N) ln L)/∂a = −(α − 1)(1/N) Σ_{i=1}^N 1/(Y_i − a) + (α + β − 1)/(c − a) = 0
∂((1/N) ln L)/∂c = (β − 1)(1/N) Σ_{i=1}^N 1/(c − Y_i) − (α + β − 1)/(c − a) = 0
These equations can be re-arranged as the following system of four coupled equations (the first two equations are geometric means and the second two equations are harmonic means) in terms of the maximum likelihood estimates for the four parameters α̂, β̂, â, ĉ:
ψ(α̂) − ψ(α̂ + β̂) = (1/N) Σ_{i=1}^N ln((Y_i − â)/(ĉ − â)) = ln Ĝ_X
ψ(β̂) − ψ(α̂ + β̂) = (1/N) Σ_{i=1}^N ln((ĉ − Y_i)/(ĉ − â)) = ln Ĝ_(1−X)
(α̂ − 1)/(α̂ + β̂ − 1) = 1/((1/N) Σ_{i=1}^N (ĉ − â)/(Y_i − â)) = Ĥ_X
(β̂ − 1)/(α̂ + β̂ − 1) = 1/((1/N) Σ_{i=1}^N (ĉ − â)/(ĉ − Y_i)) = Ĥ_(1−X)
with sample geometric means:
Ĝ_X = ∏_{i=1}^N ((Y_i − â)/(ĉ − â))^(1/N)
Ĝ_(1−X) = ∏_{i=1}^N ((ĉ − Y_i)/(ĉ − â))^(1/N)
The parameters â and ĉ are embedded inside the geometric mean expressions in a nonlinear way (to the power 1/N).
This precludes, in general, a closed-form solution, even for an initial value approximation for iteration purposes. One alternative is to use as initial values for iteration the values obtained from the method of moments solution for the four-parameter case. Furthermore, the expressions for the harmonic means are well defined only for α̂, β̂ > 1, which precludes a maximum likelihood solution for shape parameters less than unity in the four-parameter case. Fisher's information matrix for the four-parameter case is positive-definite only for α, β > 2 (for further discussion, see section on Fisher information matrix, four-parameter case), that is, for bell-shaped (symmetric or unsymmetric) beta distributions with inflection points located to either side of the mode. Some of the Fisher information components (which represent the expectations of the curvature of the log likelihood function) have singularities at the values α = 2 and β = 2 (for further discussion see section on Fisher information matrix). Thus, it is not possible to strictly carry on the maximum likelihood estimation for some well-known distributions belonging to the four-parameter beta distribution family, like the uniform distribution (Beta(1, 1, a, c)) and the arcsine distribution (Beta(1/2, 1/2, a, c)).
(for further discussion see section on Fisher information matrix). Thus, it is not possible to strictly carry on the maximum likelihood estimation for some well known distributions belonging to the four-parameter beta distribution family, like the uniform distribution (Beta(1, 1, a, c)), and the arcsine distribution (Beta(1/2, 1/2, a, c)). N.L.Johnson and S.Kotz[] ignore the equations for the harmonic means and instead suggest "If a and c are unknown, and maximum likelihood estimators of a, c, α and β are required, the above procedure (for the two unknown parameter case, with X transformed as X = (Y−a)/(c−a)) can be repeated using a succession of trial values of a and c, until the pair (a, c) for which maximum likelihood (given a and c) is as great as possible, is attained" (where, for the purpose of clarity, their notation for the parameters has been translated into the present notation).
Fisher information matrix
Let a random variable X have a probability density f(x; α). The partial derivative with respect to the (unknown, and to be estimated) parameter α of the log likelihood function is called the score. The second moment of the score is called the Fisher information:
I(α) = E[(∂/∂α ln f(X; α))²]
The expectation of the score is zero, therefore the Fisher information is also the second moment centered on the mean of the score: the variance of the score. If the log likelihood function is twice differentiable with respect to the parameter α, and under certain regularity conditions,[] then the Fisher information may also be written as follows (which is often a more convenient form for calculation purposes):
I(α) = −E[∂²/∂α² ln f(X; α)]
Thus, the Fisher information is the negative of the expectation of the second derivative with respect to the parameter α of the log likelihood function. Therefore Fisher information is a measure of the curvature of the log likelihood function of α. A log likelihood function with low curvature (and therefore high radius of curvature), i.e. a flatter curve, has low Fisher information; while a log likelihood function curve with large curvature (and therefore low radius of curvature) has high Fisher information. When the Fisher information matrix is evaluated at the estimates of the parameters ("the observed Fisher information matrix") it is equivalent to the replacement of the true log likelihood surface by a Taylor's series approximation, taken as far as the quadratic terms.[] The word information, in the context of Fisher information, refers to information about the parameters: estimation, sufficiency and properties of variances of estimators. The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of a parameter α:
var(α̂) ≥ 1/I(α)
The precision to which one can estimate the estimator of a parameter α is limited by the Fisher information of the log likelihood function. The Fisher information is a measure of the minimum error involved in estimating a parameter of a distribution, and it can be viewed as a measure of the resolving power of an experiment needed to discriminate between two alternative hypotheses of a parameter.[]
, then the Fisher information takes the form of an NxN positive
semidefinite symmetric matrix, the Fisher Information Matrix, with typical element:
Under certain regularity conditions,[] the Fisher Information Matrix may also be written in the following form, which is often more convenient for computation:
With X1, ..., XN iid random variables, an N-dimensional "box" can be constructed with sides X1, ..., XN. Costa and Cover[] show that the (Shannon) differential entropy h(X) is related to the volume of the typical set (having the sample entropy close to the true entropy), while the Fisher information is related to the surface of this typical set.
Two parameters
For X1, ..., XN independent random variables each having a beta distribution parametrized with shape parameters α and β, the joint log likelihood function for N iid observations is:
ln L(α, β | X) = (α − 1) Σ_{i=1}^N ln X_i + (β − 1) Σ_{i=1}^N ln(1 − X_i) − N ln B(α, β)
therefore the joint log likelihood function per N iid observations is:
(1/N) ln L(α, β | X) = (α − 1)(1/N) Σ_{i=1}^N ln X_i + (β − 1)(1/N) Σ_{i=1}^N ln(1 − X_i) − ln B(α, β)
For the two parameter case, the Fisher information has 4 components: 2 diagonal and 2 off-diagonal. Since the Fisher information matrix is symmetric, the two off-diagonal components are equal, so only one of them is independent. Therefore the Fisher information matrix has 3 independent components (2 diagonal and 1 off-diagonal). Aryal and Nadarajah[] calculated Fisher's information matrix for the four parameter case, from which the two parameter case can be obtained as follows:

$$\mathcal{I}_{\alpha,\alpha}=\operatorname{var}[\ln X]=\psi_{1}(\alpha)-\psi_{1}(\alpha+\beta)$$
$$\mathcal{I}_{\beta,\beta}=\operatorname{var}[\ln(1-X)]=\psi_{1}(\beta)-\psi_{1}(\alpha+\beta)$$
$$\mathcal{I}_{\alpha,\beta}=\mathcal{I}_{\beta,\alpha}=\operatorname{cov}[\ln X,\ln(1-X)]=-\psi_{1}(\alpha+\beta)$$

since the Fisher information matrix is symmetric. The Fisher information components are equal to the log geometric variances and log geometric covariance. Therefore they can be expressed as trigamma functions, denoted ψ1(α), the second of the polygamma functions, defined as the derivative of the digamma function:

$$\psi_{1}(\alpha)=\frac{d^{2}\ln\Gamma(\alpha)}{d\alpha^{2}}=\frac{d\,\psi(\alpha)}{d\alpha}.$$

These derivatives are also derived in the section titled "Parameter estimation", "Maximum likelihood", "Two unknown parameters", and plots of the log likelihood function are also shown there. The section titled "Geometric variance and covariance" contains plots and further discussion of the Fisher information matrix components: the log geometric variances and log geometric covariance as functions of the shape parameters α and β. The section titled "Other moments", "Moments of transformed random variables", "Moments of logarithmically-transformed random variables" contains formulas for moments of logarithmically-transformed random variables. Images for the Fisher information components $\mathcal{I}_{\alpha,\alpha}$ and $\mathcal{I}_{\beta,\beta}$ are shown in the section titled "Geometric variance". The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components of the Fisher information matrix, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution is:

$$\det(\mathcal{I}(\alpha,\beta))=\psi_{1}(\alpha)\,\psi_{1}(\beta)-\big(\psi_{1}(\alpha)+\psi_{1}(\beta)\big)\,\psi_{1}(\alpha+\beta).$$
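As a quick numerical check, the three components and the determinant above can be evaluated with the trigamma function. A minimal sketch, assuming SciPy (whose polygamma(1, x) is ψ1); the parameter values are illustrative:

```python
import numpy as np
from scipy.special import polygamma

def trigamma(x):
    return polygamma(1, x)  # psi_1(x), the derivative of the digamma function

def beta_fisher_info(alpha, beta):
    """Two-parameter Fisher information matrix of Beta(alpha, beta)."""
    i_aa = trigamma(alpha) - trigamma(alpha + beta)   # var[ln X]
    i_bb = trigamma(beta) - trigamma(alpha + beta)    # var[ln(1 - X)]
    i_ab = -trigamma(alpha + beta)                    # cov[ln X, ln(1 - X)]
    return np.array([[i_aa, i_ab], [i_ab, i_bb]])

I = beta_fisher_info(2.0, 3.0)
det_I = np.linalg.det(I)    # matches psi1(a)psi1(b) - (psi1(a)+psi1(b))psi1(a+b)
jeffreys_unnormalized = np.sqrt(det_I)   # used later for the Jeffreys prior of the beta
```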
From Sylvester's criterion (all leading principal minors positive), it follows that the Fisher information matrix for the two parameter case is positive-definite (under the standard condition that the shape parameters are positive: α > 0 and β > 0).

Four parameters

If Y1, ..., YN are independent random variables each having a beta distribution with four parameters: the exponents α and β, as well as "a" (the minimum of the distribution range) and "c" (the maximum of the distribution range) (see the section titled "Alternative parametrizations", "Four parameters"), with probability density function:

$$f(y;\alpha,\beta,a,c)=\frac{(y-a)^{\alpha-1}(c-y)^{\beta-1}}{(c-a)^{\alpha+\beta-1}\,\mathrm{B}(\alpha,\beta)},$$
Fisher information I(a,a) for α=β, vs. range (c−a) and exponent α=β

Fisher information I(α,a) for α=β, vs. range (c−a) and exponent α=β
the joint log likelihood function per N iid observations is:

$$\frac{1}{N}\ln\mathcal{L}(\alpha,\beta,a,c\mid Y)=\frac{\alpha-1}{N}\sum_{i=1}^{N}\ln(Y_{i}-a)+\frac{\beta-1}{N}\sum_{i=1}^{N}\ln(c-Y_{i})-\ln\mathrm{B}(\alpha,\beta)-(\alpha+\beta-1)\ln(c-a).$$
For the four parameter case, the Fisher information matrix has 4 × 4 = 16 components, of which 12 are off-diagonal (16 total minus 4 diagonal). Since the Fisher information matrix is symmetric, half of the off-diagonal components (12/2 = 6) are independent. Therefore the Fisher information matrix has 6 independent off-diagonal + 4 diagonal = 10 independent components. Aryal and Nadarajah[] calculated Fisher's information matrix for the four parameter case as follows:
In the above expressions, the use of X instead of Y in the expressions var[ln(X)] = ln(varGX) is not an error. The expressions in terms of the log geometric variances and log geometric covariance occur as functions of the two parameter X ~ Beta(α, β) parametrization because, when taking the partial derivatives with respect to the exponents (α, β) in the four parameter case, one obtains expressions identical to those for the two parameter case: these terms of
the four parameter Fisher information matrix are independent of the minimum "a" and maximum "c" of the distribution's range. The only non-zero term upon double differentiation of the log likelihood function with respect to the exponents α and β is the second derivative of the log of the beta function: ln(B(α, β)). This term is independent of the minimum "a" and maximum "c" of the distribution's range, and double differentiation of it results in trigamma functions. The sections titled "Maximum likelihood", "Two unknown parameters" and "Four unknown parameters" also show this fact. The Fisher information for N i.i.d. samples is N times the individual Fisher information (eq. 11.279, page 394 of Cover and Thomas[]). (Aryal and Nadarajah[] take a single observation, N = 1, to calculate the following components of the Fisher information, which leads to the same result as considering the derivatives of the log likelihood per N observations. Moreover, below, the erroneous expression for $\mathcal{I}_{a,a}$ in Aryal and Nadarajah has been corrected.)
The lower two diagonal entries of the Fisher information matrix, with respect to the parameter "a" (the minimum of the distribution's range), $\mathcal{I}_{a,a}$, and with respect to the parameter "c" (the maximum of the distribution's range), $\mathcal{I}_{c,c}$, are only defined for exponents α > 2 and β > 2 respectively. The Fisher information matrix component $\mathcal{I}_{a,a}$ for the minimum "a" approaches infinity for exponent α approaching 2 from above, and the Fisher information matrix component $\mathcal{I}_{c,c}$ for the maximum "c" approaches infinity for exponent β approaching 2 from above. The Fisher information matrix for the four parameter case does not depend on the individual values of the minimum "a" and the maximum "c", but only on the total range (c−a). Moreover, the components of the Fisher information matrix that depend on the range (c−a) depend only through its inverse (or the square of the inverse), such that the Fisher information decreases with increasing range (c−a). The accompanying images show the Fisher information components $\mathcal{I}_{a,a}$ and $\mathcal{I}_{\alpha,a}$. Images for the Fisher information components $\mathcal{I}_{\alpha,\alpha}$ and $\mathcal{I}_{\beta,\beta}$ are shown in the section titled "Geometric variance". All these Fisher information components look like a basin, with the "walls" of the basin located at low values of the parameters. The following four-parameter beta distribution Fisher information components can be expressed in terms of the two-parameter X ~ Beta(α, β) expectations of the transformed ratio ((1−X)/X) and of its mirror image (X/(1−X)), scaled by the range (c−a), which may be helpful for interpretation:
These are also the expected values of the "inverted beta distribution" or beta prime distribution (also known as beta distribution of the second kind or Pearson's Type VI) [] and its mirror image, scaled by the range (c−a). Also, the following Fisher information components can be expressed in terms of the harmonic (1/X) variances or of variances based on the ratio transformed variables ((1-X)/X) as follows:
See section "Moments of linearly-transformed, product and inverted random variables" for these expectations. The determinant of Fisher's information matrix is of interest (for example for the calculation of Jeffreys prior probability). From the expressions for the individual components, it follows that the determinant of Fisher's (symmetric) information matrix for the beta distribution with four parameters is:
Using Sylvester's criterion (all leading principal minors positive), and since the diagonal components $\mathcal{I}_{a,a}$ and $\mathcal{I}_{c,c}$ have singularities at α=2 and β=2, it follows that the Fisher information matrix for the four parameter case is positive-definite for α>2 and β>2. Since for α > 2 and β > 2 the beta distribution is (symmetric or unsymmetric) bell shaped, it follows that the Fisher information matrix is positive-definite only for bell-shaped (symmetric or unsymmetric) beta distributions, with inflection points located to either side of the mode. Thus, important well-known distributions belonging to the four-parameter beta distribution family, like the parabolic distribution (Beta(2,2,a,c)) and the uniform distribution (Beta(1,1,a,c)), have Fisher information components ($\mathcal{I}_{a,a}$, $\mathcal{I}_{c,c}$) that blow up (approach infinity) in the four-parameter case (although their Fisher information components are all defined for the two parameter case). The four-parameter Wigner semicircle distribution (Beta(3/2,3/2,a,c)) and arcsine distribution (Beta(1/2,1/2,a,c)) have negative Fisher information determinants for the four-parameter case.
Generating beta-distributed random variates

If X and Y are independent, with $X\sim\Gamma(\alpha,\theta)$ and $Y\sim\Gamma(\beta,\theta)$, then

$$\frac{X}{X+Y}\sim\mathrm{Beta}(\alpha,\beta),$$

so one algorithm for generating beta variates is to generate X/(X + Y), where X is a gamma variate with parameters (α, 1) and Y is an independent gamma variate with parameters (β, 1).[10] Also, the kth order statistic of n uniformly distributed variates is distributed $\mathrm{Beta}(k,\,n+1-k)$, so an alternative if α and β are small integers is to generate α + β − 1 uniform variates and choose the α-th smallest.[11]
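Both routes can be written in a few lines. A minimal sketch, assuming NumPy; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_via_gammas(alpha, beta, size, rng):
    x = rng.gamma(alpha, 1.0, size)   # X ~ Gamma(alpha, 1)
    y = rng.gamma(beta, 1.0, size)    # Y ~ Gamma(beta, 1), independent of X
    return x / (x + y)                # X/(X + Y) ~ Beta(alpha, beta)

def beta_via_order_statistics(alpha, beta, size, rng):
    """Valid when alpha and beta are small positive integers."""
    n = alpha + beta - 1
    u = rng.uniform(size=(size, n))
    return np.sort(u, axis=1)[:, alpha - 1]   # alpha-th smallest of n uniforms

samples_gamma = beta_via_gammas(2.0, 5.0, 10_000, rng)
samples_order = beta_via_order_statistics(2, 5, 10_000, rng)
```

The gamma route works for any positive shape parameters; the order-statistic route is only practical when α + β − 1 is a small integer count of uniforms.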
Related distributions

Transformations

• If X ~ Beta(α, β) then 1 − X ~ Beta(β, α): mirror-image symmetry.
• If X ~ Beta(α, β) then X/(1−X) ~ β′(α, β): the beta prime distribution, also called "beta distribution of the second kind".
• If X ~ Beta(n/2, m/2) then mX/(n(1−X)) ~ F(n, m) (assuming n > 0 and m > 0): the Fisher-Snedecor F distribution.
• If X ~ Beta(1 + λ(m−min)/(max−min), 1 + λ(max−m)/(max−min)) then min + X(max−min) ~ PERT(min, max, m, λ), where PERT denotes a distribution used in PERT analysis and m = most likely value.[12] Traditionally[] λ = 4 in PERT analysis.
• If X ~ Beta(1, β) then X ~ Kumaraswamy distribution with parameters (1, β).
• If X ~ Beta(α, 1) then X ~ Kumaraswamy distribution with parameters (α, 1).
• If X ~ Beta(α, 1) then −ln(X) ~ Exponential(α).
Special and limiting cases

• Beta(1, 1) ~ U(0, 1): the continuous uniform distribution.
• If X ~ Beta(3/2, 3/2) and r > 0 then 2rX − r ~ Wigner semicircle distribution.
• Beta(1/2, 1/2) is equivalent to the arcsine distribution. This distribution is also the Jeffreys prior probability for the Bernoulli and binomial distributions. The arcsine probability density appears in several random walk fundamental theorems. In a fair coin toss random walk, the probability for the time of the last visit to the origin is distributed as a (U-shaped) arcsine distribution.[][] In a two-player fair-coin-toss game, a player is said to be in the lead if the random walk (that started at the origin) is above the origin. The most probable number of times that a given player will be in the lead, in a game of length 2N, is not N. On the contrary, N is the least likely number of times that the player will be in the lead; the most likely number of times in the lead is 0 or 2N (following the arcsine distribution).
• $\lim_{n\to\infty} n\,\mathrm{Beta}(1,n) = \mathrm{Exponential}(1)$: the exponential distribution.
• $\lim_{n\to\infty} n\,\mathrm{Beta}(k,n) = \mathrm{Gamma}(k,1)$: the gamma distribution.

Example of eight realizations of a random walk in one dimension starting at 0: the probability for the time of the last visit to the origin is distributed as Beta(1/2, 1/2)
Derived from other distributions

• The kth order statistic of a sample of size n from the uniform distribution is a beta random variable, U(k) ~ Beta(k, n+1−k).[11]
• If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independent, then X/(X+Y) ~ Beta(α, β).
• If $X\sim\chi^{2}(\alpha)$ and $Y\sim\chi^{2}(\beta)$ are independent, then X/(X+Y) ~ Beta(α/2, β/2).
• If X ~ U(0, 1) and α > 0 then $X^{1/\alpha}$ ~ Beta(α, 1): the power function distribution.
Beta(1/2, 1/2): The arcsine distribution probability density was proposed by Harold Jeffreys to represent uncertainty for a Bernoulli or a binomial distribution in Bayesian inference, and is now commonly referred to as Jeffreys prior: $p^{-1/2}(1-p)^{-1/2}$. This distribution also appears in several random walk fundamental theorems
Combination with other distributions

• If X ~ Beta(α, β) and Y ~ F(2β, 2α) then $\Pr\!\left(X\leq\frac{\alpha}{\alpha+\beta x}\right)=\Pr(Y\geq x)$ for all x > 0.
Compounding with other distributions

• If p ~ Beta(α, β) and X ~ Bin(k, p) then X ~ beta-binomial distribution.
• If p ~ Beta(α, β) and X ~ NB(r, p) then X ~ beta negative binomial distribution.

This two-stage structure translates directly into a sampling scheme, as in the sketch below.
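A minimal sampling sketch of the beta-binomial compound, assuming NumPy; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta_, k, size = 2.0, 3.0, 20, 100_000

p = rng.beta(alpha, beta_, size)   # first stage: p ~ Beta(alpha, beta)
x = rng.binomial(k, p)             # second stage: X | p ~ Bin(k, p)

# sanity check against the known beta-binomial mean, k * alpha / (alpha + beta)
print(x.mean(), k * alpha / (alpha + beta_))
```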
Generalisations

• The Dirichlet distribution is a multivariate generalization of the beta distribution. Univariate marginals of the Dirichlet distribution have a beta distribution. The beta distribution is conjugate to the binomial and Bernoulli distributions in exactly the same way as the Dirichlet distribution is conjugate to the multinomial and categorical distributions.
• The Pearson type I distribution is identical to the beta distribution (except for arbitrary shifting and re-scaling, which can also be accomplished with the four parameter parametrization of the beta distribution).
• The noncentral beta distribution.
Applications

Order statistics

The beta distribution has an important application in the theory of order statistics. A basic result is that the distribution of the kth smallest of a sample of size n from a continuous uniform distribution has a beta distribution.[11] This result is summarized as:

$$U_{(k)}\sim\mathrm{Beta}(k,\,n+1-k).$$
From this, and application of the theory related to the probability integral transform, the distribution of any individual order statistic from any continuous distribution can be derived.[11]
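In sampling terms, this means the kth order statistic of any continuous distribution can be drawn directly as F⁻¹(B) with B ~ Beta(k, n+1−k), avoiding the sort. A minimal sketch, assuming NumPy and SciPy, with the exponential distribution as an illustrative choice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, k = 10, 3
dist = stats.expon()                  # any continuous distribution works here

# probability integral transform: F(X_(k)) ~ Beta(k, n+1-k), so X_(k) = F^{-1}(B)
b = rng.beta(k, n + 1 - k, size=100_000)
xk_via_beta = dist.ppf(b)

# brute-force check: sort n draws and take the kth smallest
xk_direct = np.sort(dist.rvs(size=(100_000, n), random_state=rng), axis=1)[:, k - 1]
print(xk_via_beta.mean(), xk_direct.mean())  # the two means should agree closely
```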
Rule of succession

A classic application of the beta distribution is the rule of succession, introduced in the 18th century by Pierre-Simon Laplace[] in the course of treating the sunrise problem. It states that, given s successes in n conditionally independent Bernoulli trials with probability p, the estimate of the expected value in the next trial is $\frac{s+1}{n+2}$ (for example, s = 7 successes in n = 10 trials gives 8/12 ≈ 0.667). This estimate is
the expected value of the posterior distribution over p, namely Beta(s+1, n−s+1), which is given by Bayes' rule if one assumes a uniform prior probability over p (i.e., Beta(1, 1)) and then observes that p generated s successes in n trials. Laplace's rule of succession has been criticized by prominent scientists. R. T. Cox described Laplace's application of the rule of succession to the sunrise problem ([] p. 89) as "a travesty of the proper use of the principle." Keynes remarks ([] Ch.XXX, p. 382) "indeed this is so foolish a theorem that to entertain it is discreditable." Karl Pearson[] showed that the probability that the next (n + 1) trials will be successes, after n successes in n trials, is only 50%, which has been considered too low by scientists like Jeffreys and unacceptable as a representation of the scientific process of experimentation to test a proposed scientific law. As pointed out by Jeffreys ([] p. 128) (crediting C. D. Broad[] ) Laplace's rule of succession establishes a high probability of success ((n+1)/(n+2)) in the next trial, but only a moderate probability (50%) that a further sample (n+1) comparable in size will be equally successful. As pointed out by Perks,[] "The rule of succession itself is hard to accept. It assigns a probability to the next trial which implies the assumption that the actual run observed is an average run and that we are always at the end of an average run. It would, one would think, be more reasonable to assume that we were in the middle of an average run. Clearly a higher value for both probabilities is necessary if they are to accord with reasonable belief." These problems with
Laplace's rule of succession motivated Haldane, Perks, Jeffreys and others to search for other forms of prior probability (see the next section titled "Bayesian inference"). According to Jaynes,[] the main problem with the rule of succession is that it is not valid when s=0 or s=n (see rule of succession, for an analysis of its validity).
Bayesian inference

The use of beta distributions in Bayesian inference is due to the fact that they provide a family of conjugate prior probability distributions for binomial (including Bernoulli) and geometric distributions. The domain of the beta distribution can be viewed as a probability, and in fact the beta distribution is often used to describe the distribution of a probability value p:[]

$$P(p;\alpha,\beta)=\frac{p^{\alpha-1}(1-p)^{\beta-1}}{\mathrm{B}(\alpha,\beta)}.$$

Examples of beta distributions used as prior probabilities to represent ignorance of prior parameter values in Bayesian inference are Beta(1,1), Beta(0,0) and Beta(1/2,1/2).

Bayes' prior probability (Beta(1,1))

Beta(1, 1): The uniform distribution probability density was proposed by Thomas Bayes to represent ignorance of prior probabilities in Bayesian inference. It describes not a state of complete ignorance, but the state of knowledge in which we have observed at least one success and one failure, and therefore we have prior knowledge that both states are physically possible.
The beta distribution achieves maximum differential entropy for Beta(1,1): the uniform probability density, for which all values in the domain of the distribution have equal density. This uniform distribution Beta(1,1) was suggested ("with a great deal of doubt") by Thomas Bayes[] as the prior probability distribution to express ignorance about the correct prior distribution. This prior distribution was adopted (apparently, from his writings, with little sign of doubt[]) by Pierre-Simon Laplace, and hence it was also known as the "Bayes-Laplace rule" or the "Laplace rule" of "inverse probability" in publications of the first half of the 20th century. In the later part of the 19th century and early part of the 20th century, scientists realized that the assumption of uniform "equal" probability density depended on the actual functions (for example whether a linear or a logarithmic scale was most appropriate) and parametrizations used. In particular, the behavior near the ends of distributions with finite support (for example near x=0, for a distribution with initial support at x=0) required particular attention. Keynes ([] Ch.XXX, p. 381) criticized the use of Bayes's uniform prior probability (Beta(1,1)), under which all values between zero and one are equiprobable, as follows: "Thus experience, if it shows anything, shows that there is a very marked clustering of statistical ratios in the neighborhoods of zero and unity, of those for positive theories and for correlations between positive qualities in the neighborhood of zero, and of those for negative theories and for correlations between negative qualities in the neighborhood of unity."
Haldane's prior probability (Beta(0,0))

The Beta(0,0) distribution was proposed by J.B.S. Haldane,[13] who suggested that the prior probability representing complete uncertainty should be proportional to $p^{-1}(1-p)^{-1}$. The function $p^{-1}(1-p)^{-1}$ can be viewed as the limit of the numerator of the beta distribution as both shape parameters approach zero: α, β → 0. The Beta function (in the denominator of the beta distribution) approaches infinity as both parameters approach zero, α, β → 0. Therefore $p^{-1}(1-p)^{-1}$ divided by the Beta function approaches, as α, β → 0, a 2-point Bernoulli distribution with equal probability 1/2 at each Dirac delta function end, at 0 and 1, and nothing in between: a coin-toss, with one face of the coin at 0 and the other face at 1. The Haldane prior probability distribution Beta(0, 0) is an "improper prior" because its integration (from 0 to 1) fails to strictly converge to 1 due to the Dirac delta function singularities at each end. However, this is not an issue for computing posterior probabilities unless the sample size is very small. Furthermore, Zellner[] points out that on the log-odds scale (the logit transformation ln(p/(1−p))), the Haldane prior is the uniformly flat prior.

Beta(0, 0): The Haldane prior probability expressing total ignorance about prior information, where we are not even sure whether it is physically possible for an experiment to yield either a success or a failure. As α, β → 0, the beta distribution approaches a two-point Bernoulli distribution with all probability density concentrated at each Dirac delta function end, at 0 and 1, and nothing in between. A coin-toss: one face of the coin at 0 and the other face at 1.

The fact that a uniform prior probability on the logit transformed variable ln(p/(1−p)) (with domain (−∞, ∞)) is equivalent to the Haldane prior on the domain [0, 1] was pointed out by Harold Jeffreys in the first edition (1939) of his book Theory of Probability ([] p. 123). Jeffreys writes "Certainly if we take the Bayes-Laplace rule right up to the extremes we are led to results that do not correspond to anybody's way of thinking. The (Haldane) rule dx/(x(1−x)) goes too far the other way. It would lead to the conclusion that if a sample is of one type with respect to some property there is a probability 1 that the whole population is of that type." The fact that "uniform" depends on the parametrization led Jeffreys to seek a form of prior that would be invariant under different parametrizations.
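The change of variables behind Zellner's observation can be made explicit. With the logit transformation

$$u=\ln\frac{p}{1-p},\qquad p=\frac{1}{1+e^{-u}},\qquad \frac{du}{dp}=\frac{1}{p(1-p)},$$

a prior that is uniformly flat in u pulls back, under the change-of-variables rule, to a density in p proportional to $\left|\frac{du}{dp}\right|=\frac{1}{p(1-p)}=p^{-1}(1-p)^{-1}$, which is exactly the Haldane prior.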
Jeffreys' prior probability (Beta(1/2, 1/2) for a Bernoulli or for a binomial distribution)

Harold Jeffreys[][] proposed to use an uninformative prior probability measure that should be invariant under reparameterization: proportional to the square root of the determinant of Fisher's information matrix. For the Bernoulli distribution, this can be shown as follows: for a coin that is "heads" with probability p ∈ [0, 1] and is "tails" with probability 1−p, for a given (H,T) ∈ {(0,1), (1,0)} the probability is $p^{H}(1-p)^{T}$. Since T = 1−H, the Bernoulli distribution is $p^{H}(1-p)^{1-H}$. Considering p as the only parameter, it follows that the log likelihood for the Bernoulli distribution is

$$\ln\mathcal{L}(p\mid H)=H\ln p+(1-H)\ln(1-p).$$
Jeffreys prior probability for the beta distribution: the square root of the determinant of Fisher's information matrix, $\sqrt{\det(\mathcal{I}(\alpha,\beta))}$, is a function of the trigamma function ψ1 of the shape parameters α, β
The Fisher information matrix has only one component (it is a scalar, because there is only one parameter: p), therefore:

$$\mathcal{I}(p)=-\operatorname{E}\!\left[\frac{\partial^{2}}{\partial p^{2}}\ln\mathcal{L}(p\mid H)\right]=\operatorname{E}\!\left[\frac{H}{p^{2}}+\frac{1-H}{(1-p)^{2}}\right]=\frac{1}{p}+\frac{1}{1-p}=\frac{1}{p(1-p)}.$$
Posterior Beta densities with samples having success="s", failure="f" of s/(s+f)=1/2, and s+f={3,10,50}, based on 3 different prior probability functions: Haldane (Beta(0, 0)), Jeffreys (Beta(1/2, 1/2)) and Bayes (Beta(1, 1)). The image shows that there is little difference between the priors for the posterior with sample size of 50 (with a more pronounced peak near p=1/2). Significant differences appear for very small sample sizes (the flatter distribution for sample size of 3)
Posterior Beta densities with samples having success="s", failure="f" of s/(s+f)=1/4, and s+f={3,10,50}, based on 3 different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 50 (with a more pronounced peak near p=1/4). Significant differences appear for very small sample sizes (the very skewed distribution for the degenerate case of sample size=3; in this degenerate and unlikely case the Haldane prior results in a reverse "J" shape with mode at p=0 instead of p=1/4). If there is sufficient sampling data, the three priors of Bayes (Beta(1,1)), Jeffreys (Beta(1/2,1/2)) and Haldane (Beta(0,0)) should yield similar posterior probability densities.
Posterior Beta densities with samples having success="s", failure="f" of s/(s+f)=1/4, and s+f={4,12,40}, based on 3 different prior probability functions: Haldane (Beta(0,0)), Jeffreys (Beta(1/2,1/2)) and Bayes (Beta(1,1)). The image shows that there is little difference between the priors for the posterior with sample size of 40 (with a more pronounced peak near p=1/4). Significant differences appear for very small sample sizes
Similarly, for the binomial distribution with n Bernoulli trials, it can be shown that

$$\sqrt{\mathcal{I}(p)}=\sqrt{\frac{n}{p(1-p)}}.$$

Thus, for the Bernoulli and binomial distributions, Jeffreys prior is proportional to $\frac{1}{\sqrt{p(1-p)}}$, which happens to be proportional to a beta distribution with domain variable x = p and shape parameters α = β = 1/2, the arcsine distribution:

$$\mathrm{Beta}(\tfrac{1}{2},\tfrac{1}{2})=\frac{1}{\pi\sqrt{p(1-p)}}.$$
It will be shown in the next section that the normalizing constant for Jeffreys prior is immaterial to the final result because the normalizing constant cancels out in Bayes theorem for the posterior probability. Hence Beta(1/2,1/2) is used as the Jeffreys prior for both Bernoulli and binomial distributions. As shown in the next section, when using this expression as a prior probability times the likelihood in Bayes theorem, the posterior probability turns out to be a beta distribution. It is important to realize, however, that Jeffreys prior is proportional to $\frac{1}{\sqrt{p(1-p)}}$ for the Bernoulli and binomial distributions, but not for the beta distribution. Jeffreys prior for the beta distribution is given by the determinant of Fisher's information for the beta distribution, which, as shown in the section titled "Fisher information", is a function of the trigamma function ψ1 of shape parameters α and β as follows:

$$\sqrt{\det(\mathcal{I}(\alpha,\beta))}=\sqrt{\psi_{1}(\alpha)\,\psi_{1}(\beta)-\big(\psi_{1}(\alpha)+\psi_{1}(\beta)\big)\,\psi_{1}(\alpha+\beta)}.$$
As previously discussed, Jeffreys prior for the Bernoulli and binomial distributions is proportional to the arcsine distribution Beta(1/2,1/2), a one-dimensional curve that looks like a basin as a function of the parameter p of the Bernoulli and binomial distributions. The walls of the basin are formed by p approaching the singularities at the ends p → 0 and p → 1, where Beta(1/2,1/2) approaches infinity. Jeffreys prior for the beta distribution is a two-dimensional surface (embedded in a three-dimensional space) that looks like a basin with only two of its walls meeting at the corner α = β = 0 (and missing the other two walls), as a function of the shape parameters α and β of the beta distribution. The two adjoining walls of this two-dimensional surface are formed by the shape parameters α and β approaching the singularities (of the trigamma function) at α, β → 0. It has no walls for α, β → ∞ because in this case the determinant of Fisher's information matrix for the beta distribution approaches zero.

It will be shown in the next section that Jeffreys prior probability results in posterior probabilities (when multiplied by the binomial likelihood function) that are intermediate between the posterior probability results of the Haldane and Bayes prior probabilities. Jeffreys prior may be difficult to obtain analytically, and for some cases it simply does not exist (even for simple distribution functions like the asymmetric triangular distribution). Berger, Bernardo and Sun, in a 2009 paper,[] defined a reference prior probability distribution that (unlike Jeffreys prior) exists for the asymmetric triangular distribution. They could not obtain a closed-form expression for their reference prior, but numerical calculations show it to be "nearly perfectly fitted by the (proper) prior Beta(1/2, 1/2) ∝ $\theta^{-1/2}(1-\theta)^{-1/2}$", where θ is the vertex variable for the asymmetric triangular distribution with support [0, 1] (corresponding to the following parameter values in Wikipedia's article on the triangular distribution: vertex c = θ, left end a = 0, and right end b = 1). Berger et al. also give a heuristic argument that Beta(1/2, 1/2) could indeed be the exact Berger-Bernardo-Sun reference prior for the asymmetric triangular distribution. Therefore, Beta(1/2, 1/2) not only is Jeffreys prior for the Bernoulli and binomial distributions, but also seems to be the Berger-Bernardo-Sun reference prior for the asymmetric triangular distribution (for which the Jeffreys prior does not exist), a distribution used in project management and PERT analysis to describe the cost and duration of project tasks. Clarke and Barron[14] prove that, among continuous positive priors, Jeffreys prior (when it exists) asymptotically maximizes Shannon's mutual information between a sample of size n and the parameter, and therefore Jeffreys prior is the most uninformative prior (measuring information as Shannon information). The proof rests on an examination of the Kullback-Leibler distance between probability density functions for iid random variables.

Effect of different prior probability choices on the posterior beta distribution

If samples are drawn from the population of a random variable X that result in s successes and f failures in n Bernoulli trials (n = s + f), then the likelihood function for parameters s and f given x = p (the notation x = p in the expressions below emphasizes that the domain x stands for the value of the parameter p in the binomial distribution) is the following binomial distribution:

$$\mathcal{L}(s,f\mid x=p)=\binom{n}{s}x^{s}(1-x)^{f}.$$
If beliefs about prior probability information are reasonably well approximated by a beta distribution with parameters αPrior and βPrior, then:

$$\mathrm{PriorProbability}(x=p;\alpha_{\mathrm{Prior}},\beta_{\mathrm{Prior}})=\frac{x^{\alpha_{\mathrm{Prior}}-1}(1-x)^{\beta_{\mathrm{Prior}}-1}}{\mathrm{B}(\alpha_{\mathrm{Prior}},\beta_{\mathrm{Prior}})}.$$
According to Bayes' theorem for a continuous event space, the posterior probability is given by the product of the prior probability and the likelihood function (given the evidence s and f = n−s), normalized so that the area under the curve equals one, as follows:

$$\mathrm{posterior}(x=p\mid s,f)=\frac{x^{s+\alpha_{\mathrm{Prior}}-1}(1-x)^{n-s+\beta_{\mathrm{Prior}}-1}}{\mathrm{B}(s+\alpha_{\mathrm{Prior}},\,n-s+\beta_{\mathrm{Prior}})}.$$

The binomial coefficient $\binom{n}{s}$ appears both in the numerator and the
denominator of the posterior probability, and it does not depend on the integration variable x, hence it cancels out and is irrelevant to the final result. Similarly the normalizing factor for the prior probability, the beta function B(αPrior, βPrior), cancels out and is immaterial to the final result. The same posterior probability result can be obtained if one uses an un-normalized prior, because the normalizing factors all cancel out. Several authors (including Jeffreys himself) thus use an un-normalized prior formula, since the normalization constant cancels out. The numerator of the posterior probability ends up being just the (un-normalized) product of the prior probability and the likelihood function, and the denominator is its integral from zero to one. The beta function in the denominator, B(s + αPrior, n − s + βPrior), appears as a normalization constant to ensure that the total posterior probability integrates to unity. The ratio s/n of the number of successes to the total number of trials is a sufficient statistic in the binomial case, which is relevant for the following results. For the Bayes prior probability (Beta(1,1)), the posterior probability is:

$$\mathrm{posterior}(x=p\mid s,f)=\frac{x^{s}(1-x)^{n-s}}{\mathrm{B}(s+1,\,n-s+1)},\quad\text{with mean }\frac{s+1}{n+2}.$$
For the Jeffreys prior probability (Beta(1/2,1/2)), the posterior probability is:

$$\mathrm{posterior}(x=p\mid s,f)=\frac{x^{s-1/2}(1-x)^{n-s-1/2}}{\mathrm{B}(s+\tfrac{1}{2},\,n-s+\tfrac{1}{2})},\quad\text{with mean }\frac{s+\tfrac{1}{2}}{n+1},$$
and for the Haldane prior probability (Beta(0,0)), the posterior probability is:

$$\mathrm{posterior}(x=p\mid s,f)=\frac{x^{s-1}(1-x)^{n-s-1}}{\mathrm{B}(s,\,n-s)},\quad\text{with mean }\frac{s}{n}.$$
From the above expressions it follows that for s/n = 1/2 all three prior probabilities result in the identical location for the posterior probability mean = mode = 1/2. For s/n < 1/2, the posterior mean based on the Bayes prior > mean for the Jeffreys prior > mean for the Haldane prior. For s/n > 1/2 the order of these inequalities is reversed, such that the Haldane prior probability results in the largest posterior mean. The Haldane prior probability Beta(0,0) results in a posterior probability density with mean (the expected value for the probability of success in the "next" trial) identical to the ratio s/n of the number of successes to the total number of trials. Therefore the Haldane prior results in a posterior probability with expected value in the next trial equal to the maximum likelihood estimate. The Bayes prior probability Beta(1,1) results in a posterior probability density with mode identical to the ratio s/n (the maximum likelihood estimate). In the case that 100% of the trials have been successful (s = n), the Bayes prior probability Beta(1,1) results in a posterior expected value equal to the rule of succession (n+1)/(n+2), while the Haldane prior Beta(0,0) results in a posterior expected value of 1 (absolute certainty of success in the next trial). The Jeffreys prior probability results in a posterior expected value equal to (n + 1/2)/(n+1). Perks[] (p. 303) points out: "This provides a new rule of succession
and expresses a 'reasonable' position to take up, namely, that after an unbroken run of n successes we assume a probability for the next trial equivalent to the assumption that we are about half-way through an average run, i.e. that we expect a failure once in (2n + 2) trials. The Bayes-Laplace rule implies that we are about at the end of an average run or that we expect a failure once in (n + 2) trials. The comparison clearly favours the new result (what is now called Jeffreys prior) from the point of view of 'reasonableness'." Conversely, in the case that 100% of the trials have resulted in failure (s = 0), the Bayes prior probability Beta(1,1) results in a posterior expected value for success in the next trial equal to 1/(n+2), while the Haldane prior Beta(0,0) results in a posterior expected value of success in the next trial of 0 (absolute certainty of failure in the next trial). The Jeffreys prior probability results in a posterior expected value for success in the next trial equal to (1/2)/(n+1), which Perks[] (p. 303) points out "is a much more reasonably remote result than the Bayes-Laplace result 1/(n + 2)". Jaynes[] questions (for the uniform prior Beta(1,1)) the use of these formulas for the cases s = 0 or s = n, because the integrals do not converge (Beta(1,1) is an improper prior for s = 0 or s = n). In practice, the conditions 0 < s < n usually hold, in which case this issue does not arise.
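Since all three priors are conjugate, the posterior comparison in this section reduces to a one-line update. A minimal sketch (pure Python, no dependencies), with s and n as illustrative values:

```python
def posterior_mean(s, n, alpha_prior, beta_prior):
    # conjugate update: the posterior is Beta(s + alpha_prior, n - s + beta_prior)
    return (s + alpha_prior) / (n + alpha_prior + beta_prior)

s, n = 3, 10   # s/n < 1/2
for name, (a0, b0) in [("Haldane", (0.0, 0.0)),
                       ("Jeffreys", (0.5, 0.5)),
                       ("Bayes", (1.0, 1.0))]:
    print(name, posterior_mean(s, n, a0, b0))
# Haldane 0.3, Jeffreys ~0.318, Bayes ~0.333: Bayes > Jeffreys > Haldane for
# s/n < 1/2, matching the ordering of posterior means stated above
```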