Assignment Brief – BTEC Level 4-5 HNC/HND Diploma (QCF)

To be filled by the Learner
Name of the Learner:
Edexcel No:
Centre No:
Batch:
Date of Submission:

Unit Assessment Information
Qualification: Higher National Diploma in Computing and Systems Development
Unit Code & Title: H/601/1991 – Unit 31 – Knowledge Based Systems
Assessment Title & No's:

Learning outcomes and grading opportunities:
LO 01: Understand knowledge-based systems (Learning Outcomes: LO1.1)
LO 02: Be able to design knowledge-based applications (Learning Outcomes: LO2.1, LO2.2, LO2.3)
LO 03: Be able to develop knowledge-based applications (Learning Outcomes: LO3.1, LO3.2, LO3.3, LO3.4, LO3.5)
Merit and Distinction Descriptors: M1, M2, M3, D1, D2, D3

Assessor:
Internal Examiner (IE):
Date Reviewed:
Date of IE:
Date Issued:
Date Due:
Statement of Originality and Student Declaration
I hereby declare that I know what plagiarism entails, namely to use another's work and to present it as my own without attributing the sources in the correct way. I further understand what it means to copy another's work.
1. I know that plagiarism is a punishable offence because it constitutes theft.
2. I understand the plagiarism and copying policy of Edexcel UK.
3. I know what the consequences will be if I plagiarise or copy another's work in any of the assignments for this program.
4. I declare therefore that all work presented by me for every aspect of my program will be my own, and where I have made use of another's work, I will attribute the source in the correct way.
5. I acknowledge that the attachment of this document, signed or not, constitutes a binding agreement between myself and Edexcel UK.
6. I understand that my assignment will not be considered as submitted if this document is not attached to the assignment.
Student’s Signature: ……………………………
Date:.………………
Task 1
Internet games have become very popular. Designing a good computer game needs to use 3D graphics and artificial intelligence technologies. Computers are programmed to play chess, scrabble, and even crossword puzzles (American Scientist, September/October 1999). They are getting better and better; in fact, a computer beat the world's number-one chess grandmaster, Garry Kasparov.
1.1. Do you agree that computer games with artificial intelligence technologies (expert systems in gaming) exhibit intelligence? Why or why not? (LO.1.1 part ii)
1.2. Describe how such computer systems perform inference. (LO.1.1 part iii)
1.3. Do you agree that using speech communication as the user interface could increase willingness to use expert systems in gaming? Why or why not? (LO.1.1 part ii)
1.4. Opponents of AI claim that we will never have machines that truly think because they cannot, by definition, have a soul. Supporters claim a soul is unnecessary. They argue that humanity originally set out to create an artificial bird for flight; instead it eventually created an airplane, which is not a bird but functionally acts as one. Debate the issue. (LO.1.1 part iii)

Task 2
Consider the decision-making situation defined by the following rules:
If it is a nice day and it is summer, then I go to the beach.
If it is a nice day and it is winter, then I go to the canal boating resort.
If it is not a nice day and it is summer, then I go to work.
If it is not a nice day and it is winter, then I go to class.
If I go to the beach, then I swim.
If I go to the canal boating resort, then I go boat riding.
If I go boat riding or I swim, then I have fun.
If I go to work, then I make money.
If I go to class, then I learn something.
Follow the rules for the following situations (what do you conclude for each one?):
It is a nice day and it is summer.
It is not a nice day and it is winter.
It is a nice day and it is winter.
It is not a nice day and it is summer.
2.1. Are there any other combinations that are valid? Explain. (LO.1.1 part i, LO.1.1 part iv)
2.2. What needs to happen for you to "learn something" in this knowledge universe? Start with the conclusion "learn something" and identify the rules used (backward) to get to the needed facts. (LO.1.1)
2.3. Encode the knowledge into a graphical diagram (semantic network). (LO.2.1)
2.4. Write a Prolog program or other third-generation language program. Use IF-THEN (ELSE) statements in your implementation. (LO.2.2), (LO.3.1)
2.5. Explain how hard it would be to modify the program to insert new facts and a rule such as: if it is cloudy and it is warm and it is not raining and it is summer, then I swim. (LO.3.2), (LO.3.3)
Task 3
All animals have skin. Fish is one kind of animal, birds are another type and mammals are a third kind. Normally fish have gills and can swim, while birds have wings and can fly. While fish and birds usually lay eggs, mammals do not. Although sharks are fish, they do not lay eggs. They are very dangerous. Salmon is another fish and is considered a delicacy. Canary is a bird and is yellow. Ostrich is a bird, which is very tall, but cannot fly, only walk.
3.1. Represent the above facts and rules in First Order Logic expressions. (LO.2.1)
3.2. Write a Prolog program or other third-generation language program and knowledge base to execute this knowledge. (LO.3.1), (LO.2.3)
3.3. Using your program, answer the following questions. (LO.3.2)
Can canaries fly? What is the color of canaries? Can ostriches fly? Do canaries have skin? Are sharks dangerous?
3.4. Complete the above system test using proper test cases and provide all the test documents and error handling. (LO.3.4) (LO.3.3)
3.5. Prepare a user document to illustrate how to work with your implemented system. (LO.3.5)
Observation Sheet

Activity No | Activity | Learning Outcome (LO) | Feedback (Pass/Redo) | Date
1 | Design knowledge base, rules and structure of the application | Task 2.4, Task 3.2 | |
2 | Insert sample facts to the knowledge base | Task 3.2 | |
3 | Write, run and test the programs in Task 2.4 and 3.2 | Task 2.4, Task 3.2 | |
4 | Run and test all rules written in Task 3.2 | Task 3.4 | |
5 | Answer the questions given in Task 3.3 by using the program | Task 2.1, Task 2.2, Task 3.3 | |
Comments:
Assessor Name
:…………………………………………….
Assessor Signature
:…………………………………………….
Outcomes/Criteria for PASS | Possible evidence | Page | Feedback

LO1 - Understand knowledge-based systems
1.1 Analyse a real-world knowledge-based system, detailing:
I. data, rules and structure
II. how the knowledge is managed
III. how artificial intelligence traits are incorporated into the system
IV. how an expert system is created from utilizing the knowledge base and including AI traits
(Possible evidence: Task 1.1, Task 1.2, Task 1.3, Task 1.4, Task 2.1, Task 2.2)

LO2 - Be able to design knowledge-based applications
2.1 plan the design of an application using an AI development language (Possible evidence: Task 2.3, Task 1.1)
2.2 identify the screen components and data and file structures required to implement a given design (Possible evidence: Task 2.4)
2.3 design knowledge base, rules and structure of the application (Possible evidence: Task 3.2)
LO3 - Be able to develop knowledge-based applications
3.1 implement the application
Task 2.4 Task 3.2
3.2 implement data validation for inputs
Task 2.5 Task 3.3
3.3 identify and implement opportunities for error handling and reporting.
Task 2.5 Task 3.4
3.4 design and implement a test strategy
Task 3.4
3.5 create documentation to support users
Task 3.5
Grade Descriptor for MERIT | Possible evidence | Feedback

M1 Identify and apply strategies to find appropriate solutions
M1.2 Complex problems with more than one variable have been explored | Task 2.4, Task 3.2
M2 Select / design appropriate methods / techniques
M2.1 Relevant theories and techniques have been applied | Task 2.4, Task 3.2
M2.3 A range of sources of information has been used | Proper use of Harvard referencing; at least five appropriate references needed.
M3 Present and communicate appropriate findings
M3.3 A range of methods of presentation have been used and technical language has been accurately used | Documentation is well structured and according to the formatting guidelines, with non-overlapping facts.

Grade Descriptor for DISTINCTION | Possible evidence | Feedback
D1 Use critical reflection to evaluate own work and justify valid conclusions
D1.3 Self-criticism of approach has taken place | Report: shown in the self-reflection section
D1.4 Realistic improvements have been proposed against defined characteristics for success | Good conclusion with suggestions for further improvement
D2 Take responsibility for managing and organising activities
D2.3 Activities have been managed | Task 1: Gantt chart must be provided in the appendix section and the work submitted on time
D3 Demonstrate convergent / lateral / creative thinking
D3.5 Innovation and creative thought have been applied | Create simple GUIs for Task 2 and Task 3
Strengths:
Weaknesses:
Future Improvements & Assessor Comment:
Assessor:
Signature: Date: ____/____/______
Internal Verifier’s Comments:
Internal Verifier:
Signature: Date: ____/____/______
TABLE OF CONTENTS

TABLE OF CONTENTS ... 1
CONTENT OF THE FIGURES ... xi
Task 1 ... 13
1.1 ... 13
1.2 ... 16
1.3 ... 18
1.4 ... 21
Task 2 ... 24
2.1 ... 24
2.2 ... 25
2.3 ... 26
2.4 ... 27
2.5 ... 28
Task 3 ... 33
3.1 ... 33
3.2 ... 34
3.3 ... 38
3.4 ... 40
3.5 ... 42
APPENDICES ... 45
Appendix A: Evolution of Theories on Intelligence ... 45
Appendix B: The Game of Go ... 46
Appendix C: A brief History of Reasoning ... 48
Appendix D: Project Management ... 49
REFERENCES ... 51
BIBLIOGRAPHY ... 54
CONTENT OF THE FIGURES

Figure 1: A typical Go board in the middle of a game ... 14
Figure 2: Lee Sedol placing his first stone against AlphaGo in Google's DeepMind challenge match ... 15
Figure 3: Semantic diagram of a man eating food ... 17
Figure 4: Accuracy results for 3 basic methods of interaction with an arcade game ... 20
Figure 5: Backward chain for finding how to learn something ... 25
Figure 6: Semantic network ... 26
Figure 7: First Order Logic expressions for determining action and venue (Part 01) ... 27
Figure 8: First Order Logic expressions for determining action and venue (Part 02) ... 28
Figure 9: New facts added to modify the program ... 29
Figure 10: Facts and rules that support modification of the program without altering the existing code ... 30
Figure 11: New rule which is added to modify the program ... 31
Figure 12: Semantic network for representing facts and rules of animals ... 33
Figure 13: Relationships among animals and their properties (Part 01) ... 34
Figure 14: Relationships among animals and their properties (Part 02) ... 35
Figure 15: Relationships among animals and their properties (Part 03) ... 36
Figure 16: Relationships among animals and their properties (Part 04) ... 37
Figure 17: Relationships among animals and their properties (Part 05) ... 37
Figure 18: Relationships among animals and their properties (Part 06) ... 38
Figure 19: Demonstration for Task 3.3 ... 39
Figure 20: Keyword phase testing ... 41
Figure 21: Type check phase testing ... 41
Figure 22: Property check phase testing ... 41
Figure 23: Displaying possible terms in the program ... 42
Figure 24: Displaying the format for a term in the program ... 43
Figure 25: Two use-cases of the term 'have' ... 43
Task 1

Internet games have become very popular. Designing a good computer game needs to use 3D graphics and artificial intelligence technologies. Computers are programmed to play chess, scrabble, and even crossword puzzles (American Scientist, September/October 1999). They are getting better and better; in fact, a computer beat the world's number-one chess grandmaster, Garry Kasparov.

1.1
Do you agree that computer games with artificial intelligence technologies (expert systems in gaming) exhibit intelligence? Why or why not? (LO.1.1 part ii)

The statement can be agreed with, based on the evidence provided below, which supports the idea that artificial intelligence goes through a process similar to the one a human brain goes through when making decisions, and that it can calculate the possibilities and consequences of an action so that its decisions are based on logic, much as human decisions are. Intelligence is the ability to learn, reason, calculate and perceive relationships and analogies, and to solve general cognitive problems (Brain Metrix 2016). Robert Sternberg, through his Triarchic Theory of Intelligence, points out that intelligence can be broken down into three subsets, namely analytic, creative and practical intelligence (Sternberg 1988). How the various theories of intelligence evolved up to the Triarchic theory can be found in Appendix A. An article published at Stanford University by McCarthy (2001) states that intelligence can occur in varying kinds and degrees within humans, animals and machines. Artificial intelligence sometimes uses methods that require more computation than humans can perform, and methods that are not observable in humans at all. Artificial intelligence is therefore not confined to methods that are biologically observable, which suggests that direct comparisons between artificial intelligence and humans are unfair. Whether computer games really exhibit intelligence can be studied through well-known examples of games that claim to do so, such as the ancient game of Go. Go is an ancient strategy board game that has been popular for more than 2,500 years. It is a two-player game in which each player plays with pieces called stones, in black or white,
placed on a marked grid of 19x19 lines, making a Go board with 361 intersections, as shown in figure 1 below (Go Game Guru n.d.). The game starts with an empty board; the players then add up to 181 black and 180 white stones to the intersections, and the aim is to capture more territory than the opponent (361 Points n.d.). More about Go and its rules is presented in Appendices B and B1 respectively.
Figure 1: A typical Go board in the middle of a game.
Source: American Go Association (2014).

Bradley (2008) compares chess and Go: 64 squares in chess versus 361 intersections in Go, and 400 possible first moves in chess versus 32,490 in Go. This makes it clear that Go is far more complex and harder to play than chess. Andrews (2016) reported that there are around 10^170 possible configurations in Go, which means there are more configurations than there are atoms in the universe. Whitney (2016) explains that the complexity of Go, which lies in the vast number of ways the stones can be set up and the variety of possible moves and outcomes, is what makes Go a greater challenge for artificial intelligence than chess. AlphaGo is a program built by Google's DeepMind subsidiary that claims to exhibit enough intelligence to beat a human Go professional. In a five-match series between AlphaGo and top Go professional Lee Sedol, AlphaGo won by four matches to one, demonstrating what machine learning and artificial intelligence can do (Burger 2016). Experts in artificial intelligence called the moment historic, while DeepMind's CEO compared the victory to landing on the moon (Lee 2016).
AlphaGo achieves this level of skill through machine learning: studying older games and teasing out patterns. It played against itself millions of times and got slightly better each time, just as human players improve through practice (BBC 2016). The figure below shows the match setting, with Lee Sedol seated on the right.
Figure 2: Lee Sedol placing his first stone against AlphaGo in Google's DeepMind challenge match.
Source: Pandey (2016).

Until AlphaGo beat its earlier opponent, European champion Fan Hui, intuitive play was said to be possible only for humans. AlphaGo, however, used strategic intuition to beat the human player, reflecting how far that aspect of artificial intelligence has improved (Gupta 2016). Deep Blue, the program that beat chess grandmaster Garry Kasparov, was able to evaluate 200 million possible moves per second. It looked ahead to find the sequence of moves that led to the strongest position and played them step by step. That tactic does not work for Go because of the immense difficulty of evaluating a single move. Unlike Deep Blue, AlphaGo takes a new approach: it draws its own inferences from previous attempts and from watching sample games, which mimics the way the human brain interprets and processes information (Hern 2016). AlphaGo can learn from trial and error and from its mistakes. It can tackle a game as complex and intuitive as Go and still beat professionals and champions. It does not rely on brute force or any kind of shortcut, but uses tactics very comparable to those of humans. AlphaGo exhibits, to a certain level, all three subsets of intelligence from Sternberg's Triarchic theory mentioned previously,
further demonstrating the similarities between artificial and human intelligence. It is therefore reasonable to conclude that computer systems like AlphaGo, which claim to exhibit artificial intelligence, really do exhibit intelligence in their respective kinds and degrees, in line with McCarthy's (2001) Stanford article.

1.2
Describe how such computer systems perform inference. (LO.1.1 part iii)

The most basic types of knowledge base are rule-based systems, frame-based systems and predicate expressions. Knowledge-based systems such as expert systems use a built-in inference engine in the process of reasoning; this inference engine is a program that can reason about the knowledge base in the required format (Chang 1995). Reasoning can progress in two directions, either forward to the goal or backward from it. These are known in artificial intelligence as forward reasoning and backward reasoning, and they are used as methods of inferencing (Finlay & Dix 1996). Cheng, Nara & Goto (2007: p.1) state that 'Forward reasoning [is a method used] to… draw [conclusions through] inference rules… and [to obtain] conclusions until some previously specified conditions are satisfied'. Artificial intelligence that uses forward reasoning to draw its conclusions must therefore have previously defined rules upon which its inferences are based. As an example, figure 3 below shows a semantic diagram representing a man trying to eat food. In this example, the forward reasoning method results in numerous ways in which the man could have eaten food; the most basic conclusions drawn here include "Earn Money", "Steal Food" or "Borrow Money". In this manner, the inference engine can conclude what the man must do in order to "Eat Food".
Figure 3: Semantic diagram of a man Eating Food.
Source: Umich (n.d.).

The other method is the opposite of the example above. Backward chaining would ask why the man would "Earn Money", "Steal Food" or "Borrow Money", and conclude that he did it to "Eat Food". Arenas et al. (2011) explain that backward chaining starts with a list of goals or hypotheses and then searches backwards for the available data that support any of those goals; backward chaining can therefore be described as a goal-driven mechanism. Similarly, Huntington (2011) notes that backward chaining, or backward reasoning, is a powerful method that allows complex problems to be dismantled into small, easily defined sections, enabling their decomposition into smaller modules. Methods such as reinforcement learning and deep learning also provide additional knowledge that aids inferencing, as they strive to make more accurate choices. In reinforcement learning, computers automatically determine the ideal behaviour within a specific context and take the best possible actions through inference; when this is repeated, the problem is said to be a Markov Decision Process (Champandard 2002). Deep learning uses algorithms to learn multiple levels of representation and abstraction to make sense of data, thereby helping the process of inferencing by analysing problems and solutions from perspectives other than the one at hand (Lisa Lab 2010). Considering the information above, forward reasoning is better in situations where the facts are clear and all the possible goals must be predicted.
Backward chaining, on the other hand, helps to derive which facts might have caused the goal at hand. The information acquired through deep learning and reinforcement learning allows such inferences to be further refined and made more accurate. The inference engines of the knowledge-based systems listed above use both forward and backward chaining in the appropriate situations during reasoning, and this is how they perform inference. A brief history of reasoning is provided in Appendix C for further reference.
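As a concrete illustration of the two reasoning directions described above, the short Prolog sketch below encodes the man-and-food example of figure 3. It is only a sketch: the predicate names (earns_money/1, buys_food/1 and so on) are assumptions for illustration and are not taken from any particular system.

```prolog
% Illustrative sketch of the figure 3 example; all names are assumptions.
:- dynamic borrows_money/1, steals_food/1.   % undeclared facts simply fail

earns_money(man).                            % base fact about the man

has_money(X)  :- earns_money(X).
has_money(X)  :- borrows_money(X).
buys_food(X)  :- has_money(X).
gets_food(X)  :- buys_food(X).
gets_food(X)  :- steals_food(X).
eats_food(X)  :- gets_food(X).

% Backward (goal-driven) reasoning: start from the goal and work back to facts.
%   ?- eats_food(man).   % succeeds via gets_food -> buys_food -> has_money -> earns_money
% Forward (data-driven) reasoning would instead start from earns_money(man)
% and keep firing rules until eats_food(man) is derived.
```

Prolog's own resolution strategy is backward chaining, which is why the query above is answered goal first; a forward-chaining system would reach the same conclusion from the opposite direction.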
1.3
Do you agree that using speech communication as the user interface could increase willingness to use expert systems in gaming? Why or why not? (LO.1.1 part ii)

The idea that using speech communication in gaming would increase willingness to use expert systems cannot be agreed with, because while speech communication is useful across a wide spectrum of day-to-day applications, it is still too clumsy when it comes to gaming. In gaming, commands are rich in detail, conveying direction, speed, torque and so on in a single movement of a game controller; this is something speech communication cannot provide, and it would instead slow the entire process down. It would also prevent the use of hands and fingers to input commands, limiting the chances of practising through trial and error, since speech would completely eliminate physical gestures. Even though speech interaction can survive in some slow-paced strategy games such as quizzes, puzzles and chess, it has little role in the larger areas of gaming where fast-paced arcade games dominate. Demonstrating this, Cai (2013: p.3), after investigating the adaptation of existing games for education using speech recognition, mentions that 'it remains difficult to augment fast-paced games with speech interaction because the frustrating effect of recognition errors highly compromises entertainment'. This might be because computers have difficulty separating the user's input from other frequencies and bands of sound, so the communication can appear erroneous. However, the University of Washington and IBM Research published alternative reasons for the compromise, after
they conducted a quantitative experiment to determine performance characteristics of nonspeech vocalization in comparison to existing speech and keyboard inputs. They outlined some of the critical factors that keep speech communication away from being an effective hands-free game control modality by clearly mentioning the following points in their work.
Time taken to complete uttering a word or a phrase is a significant factor in games that require sub-second timing. The processing time required to recognize the utterances adds further delay to the process.
Speech communication does not support dynamically inputting multiple input events at a rapid and varying rate because of the per-utterance delay which imposes a limit on the number of utterances per given time span.
When using speech communication, output is not generated until a word is recognized. The result of a single recognition is a single event; therefore, it eliminates continuous inputs such as smooth motion of a pointer.
The work also demonstrates that speech input consumes more processing power and takes longer to recognize than non-speech vocalization, stating that non-speech vocalization is faster by as much as 50% (Harada, Wobbrock & Landay 2011). The graph in figure 4 below illustrates the results of an episode test from a report comparing user performance when controlling an arcade game using speech, non-verbal vocalization and keyboard input. The accuracy of the inputs was measured against pre-defined episodes repeated at three difficulty levels: slow, medium and fast.
[Chart: accuracy averages (0 to 1.2) for keyboard input, speech communication and non-verbal speech communication at the slow, medium and fast difficulty levels.]
Figure 4: Accuracy results for 3 basic methods of interaction with an arcade game.
Source: Author's work, adapted from Sporka et al. (2006).

Of the many conclusions that could be drawn from the above graph, two basic facts about the accuracy of the communication modes at each phase of the game stand out.
1. Keyboard input is the most accurate method of inputting commands and speech communication is the least accurate, within the scope of the experiment.
2. The accuracy of speech communication falls dramatically across the three difficulty levels, from 0.7 at the first level down to 0.2, proving it a less reliable mode of interaction with the game, while keyboard accuracy rises at the medium level and, although it falls at the third level, still remains at 0.9.
The presence of difficulty levels, and the data achieved at each level, help show that the problem lies with the users rather than with the processing power of computers; improving speech as a reliable method of interaction therefore requires its users to improve as well. Hence, speech communication does not keep up with other methods of interaction when it comes to gaming. It is still acceptable for speech communication to be used as a supportive, alternative interface wherever such technology can be applied within a game. For instance, speech interaction could control the game menu while a controller is used for the actual gameplay; even then it would be an alternative to the controller on the menu screen, not a substitute.
Speech as a method of communication in gaming is not a completely unachievable target; in fact, many slow-paced games still rely on the technology. However, because of the barriers introduced above, speech is not the better mode of communication and would not contribute to increasing willingness to use expert systems in gaming. The technology may improve with time, but that does not guarantee it will increase willingness to use expert systems in gaming.
1.4
Opponents of AI claim that we will never have machines that truly think because they cannot, by definition, have a soul. Supporters claim a soul is unnecessary. They argue that humanity originally set out to create an artificial bird for flight; instead it eventually created an airplane, which is not a bird but functionally acts as one. Debate the issue. (LO.1.1 part iii)

Technologies improve over time, and it takes a long time for a technology to reach its maximum potential. Functionality is what matters, in whatever way it is achieved. Even though airplanes do not look or fly like birds, they functionally do the same thing using different technologies, and there is no need to wait until the technology reaches the level of a bird before making use of airplanes. In fact, one could argue that even a bird is not the perfect flying machine; birds too may improve with time. It is enough for them to survive and perform their role at present, and their future generations might be more efficient and effective as a result of adapting to their environment. Animals such as birds are taken as models of functionality so that people can use similar technologies to meet their needs; flying exactly like a bird is in no way the target for airplanes, birds are simply the model that inspired humans to build one. A similar argument was introduced by Baer (2016), who suggested that an artificial neuron compared to a real neuron is like an airplane compared to a bird: even though they are very different at a certain level of detail, they both do the same job, and although artificial neurons are silicon based where real neurons are carbon based, again they both do the same job. Anderson et al. (2013: pp.90-91) mention in their work that the first computational neuron was created in 1943 by McCulloch and Pitts, consisted of logical elements, and had properties said to be analogous to those of an
action potential of a neural cell. Later, in 1958, Frank Rosenblatt developed an artificial neural net in which the firing of actual neurons was reflected through the on or off states of simple neural nodes (Postma 2016). Considering these two cases, which show the development of artificial neural nets within comparatively short periods of time, it is clear that artificial neural nets, though not real neurons, do the same job, reflecting action potentials in the first case and the firing of neurons in the second, and will improve and gain functionality with time. It is better to appreciate the work done by such systems than to criticize them for their deviations from their biological origins, as those systems are likely to benefit humanity in ways that were previously impossible. Taking a more concrete approach to this debate, consider the many uses of artificial intelligence in everyday life, such as the Automatic Alternative Text technology that Facebook recently introduced. The technology is based on neural networks and uses artificial intelligence to identify the content in a photo and describe the photo to visually impaired people. The company referred to the 39 million people who are blind and the 246 million who are severely visually impaired, and stated that this effort allows such people to experience social media in the same way as everyone else (Wu, Pique & Wieland 2016). Similarly, Microsoft developed an app called Seeing AI, which uses the camera on a smartphone or a pair of camera-equipped smart glasses to describe what is happening in the environment in real time. Instead of photos, it can dynamically understand the surroundings and notify its visually impaired users (Smith 2016). These examples show that even though the technologies are not equivalent to human-level thinking, they still help people in many ways. In some cases, moreover, human-level intelligence cannot provide the service that is required, precisely because humans are more intelligent than the job they are assigned to do. For example, there is no guarantee that a worker at a customer care centre will treat or speak to every person in the same consistent way they have been instructed to. Many companies therefore move to artificial intelligence technologies, which can provide qualities such as consistency for their end users, because human employees struggle at exactly the level of functioning where artificial intelligence can excel (VFU 2016). This demonstrates that computers do not necessarily have to truly think or act like humans; they only need to perform some of the functions humans can perform in order to aid humans at work.
From the examples and facts mentioned above, a clear conclusion on the matter emerges: computers can, in a practical sense, think, and this has nothing to do with a soul and everything to do with the knowledge, power and infrastructure to support inferences that lead to performing functions similar to those of humans. It also remains clear that, for a technology to be used, it does not need to be perfect; it just needs to be above a usable threshold. The examples above support the idea that if artificial intelligence were not used until it became equivalent to human-level intelligence, or reached a level at which it could truly think, the question would remain whether people with disabilities, and others who now rely on the technology, would benefit at all; do they not have the right to socialize and explore? These questions strongly encourage making use of these technologies at whatever stage they are at right now. Someday airplanes may reach the level of birds, and perhaps surpass them, and artificial intelligence may become better than human intelligence; they will have different uses at that time. For now, it is wise to make use of airplanes as well as artificial intelligence wherever possible to make life easier.
Task 2

Consider the decision-making situation defined by the following rules:
If it is a nice day and it is summer, then I go to the beach.
If it is a nice day and it is winter, then I go to the canal boating resort.
If it is not a nice day and it is summer, then I go to work.
If it is not a nice day and it is winter, then I go to class.
If I go to the beach, then I swim.
If I go to the canal boating resort, then I go boat riding.
If I go boat riding or I swim, then I have fun.
If I go to work, then I make money.
If I go to class, then I learn something.
Follow the rules for the following situations (what do you conclude for each one?):
It is a nice day and it is summer.
It is not a nice day and it is winter.
It is a nice day and it is winter.
It is not a nice day and it is summer.
2.1
Are there any other combinations that are valid? Explain. (LO.1.1 part i, LO.1.1 part iv)

It is a nice day and it is summer, so I go to the beach.
It is not a nice day and it is winter, so I go to class.
It is a nice day and it is winter, so I go to the canal boating resort.
It is not a nice day and it is summer, so I go to work.

The chains can also be continued, which gives further valid combinations:
It is a nice day and it is summer, so I go to the beach, then I swim, and then I have fun.
It is a nice day and it is winter, so I go to the canal boating resort, then I go boat riding, and then I have fun.
24 Nilshan Devinda
KBS [ J | 601 | 0459]
2.2
What needs to happen for you to "learn something" in this knowledge universe? Start with the conclusion "learn something" and identify the rules used (backward) to get to the needed facts. (LO.1.1)

I will "learn something" if I go to class, and I go to class if it is not a nice day and it is winter. (As illustrated in the backward chain of the semantic graph in figure 5 below.)
Figure 5: Backward Chain for finding how to Learn Something.
Source: Author’s work.
25 Nilshan Devinda
KBS [ J | 601 | 0459]
2.3
Encode the knowledge into a graphical diagram (Semantic network). (LO.2.1) The following diagram in figure 6 shows the semantic network of the example given in task 02.
Figure 6: Semantic Network.
Source: Author’s work.
26 Nilshan Devinda
KBS [ J | 601 | 0459]
2.4
Write a Prolog program or other third-generation language program. Use IF-THEN (ELSE) statements in your implementation. (LO.2.2), (LO.3.1)
Figure 7: First Order Logic Expressions for Determining Action and Venue (Part 01).
Source: Author’s work.
27 Nilshan Devinda
KBS [ J | 601 | 0459]
Figure 8: First Order Logic Expressions for Determining Action and Venue (Part 02).
Source: Author’s work.
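The submitted program appears only as the screenshots in figures 7 and 8, so the following is a minimal sketch of how the Task 2 rules could be written in Prolog; the predicate names weather/1, season/1, venue/1 and action/1 are assumptions and do not claim to match the author's code.

```prolog
% Minimal sketch of the Task 2 rules (not the code shown in figures 7 and 8).
weather(nice).        % change to not_nice to test the other situations
season(summer).       % change to winter to test the other situations

venue(beach)                :- weather(nice),     season(summer).
venue(canal_boating_resort) :- weather(nice),     season(winter).
venue(work)                 :- weather(not_nice), season(summer).
venue(class)                :- weather(not_nice), season(winter).

action(swim)            :- venue(beach).
action(boat_riding)     :- venue(canal_boating_resort).
action(have_fun)        :- action(swim) ; action(boat_riding).
action(make_money)      :- venue(work).
action(learn_something) :- venue(class).

% Example query for the first situation (a nice day in summer):
%   ?- action(A).
%   A = swim ;
%   A = have_fun ;
%   false.
```

With the two facts changed to weather(not_nice) and season(winter), the same query yields learn_something, which matches the backward-chaining answer in section 2.2.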
2.5
Explain how hard it would be to modify the program to insert new facts and a rule such as: if it is cloudy and it is warm and it is not raining and it is summer, then I swim. (LO.3.2), (LO.3.3)

More than five facts relate to the rule above, so it would be genuinely hard to relate all of those facts in a way that provides meaningful information. The following passages describe the procedure by which the rule can nevertheless be achieved. It is clear from figure 7 above that structures for both day and season already exist within the program, and that the argument "summer" comes under the predicate "season", which already exists. If correctly inserted, the additional rules and facts can be applied to the program with ease in a similar manner, so that they become meaningful and flow with the logic. For instance, structures for Temperature, Weather and the status of Rain could be added to the program. In this context, the "Temperature" predicate would hold arguments such as "warm" and "cold", the "Weather" predicate would hold arguments such as "cloudy" or "windy", and the "status of rain" predicate would hold arguments such as "raining" and "not_raining". This way, the facts become
more structured and make sense. The code in figure 9 below shows how the facts can be organized in the code editor, and how they are kept unique by using a recursive structure (a nested complex term) in which arguments become functors themselves, varying the arity of the structure even in the presence of an overloaded predicate.
Figure 9: New facts added to modify the program.
Source: Author's work.

The figure above includes additional arguments such as "windy" and "cold" simply to highlight the importance of recursive structures in such a problem, so that the predicate can be further overloaded to add more facts in the future. Adding the new facts provides the infrastructure for building a new rule that can query the necessary information from the system. Initially, the program held the knowledge that "if it is a 'nice day' and it is 'summer' then I go to the 'beach'". It also held the knowledge about the action performed once on the beach: "if I go to the 'beach' then I 'swim'". The annotated image below shows both of the places on which the new rule builds.
29 Nilshan Devinda
KBS [ J | 601 | 0459]
Figure 10: Facts and Rules that support the modification without altering the existing code.
Source: Author's work.

This logic can be used to add a new rule that gives the program the additional knowledge. Consider figure 11 below, which shows the newly added rule, annotated to help explain the content that follows.
30 Nilshan Devinda
KBS [ J | 601 | 0459]
Figure 11: New Rule which is added to modify the program.
Source: Author's work.

Breaking down what is shown in figure 11 helps in understanding how the logic works and why adding a new rule did not affect the original structure of the program. In its simplest form, the arguments "Weather", "Temperature" and "Rain_status" shown in the image represent a state of the "Day". Figure 10 showed that "Day" and "Season" bring forth a "Venue", and "Venue" in turn results in an "Action". If "Action" is triggered by "Venue", and "Venue" is formed from "Day" and "Season", then the "Action" can be brought forth by providing the status of the "Day" and the necessary "Season". This is similar to the familiar rule of transitivity: if A = B and B = C, then A = C. Setting aside the bodies of the rules for a moment, consider the following lines of code to see how they break down and relate to each other.
action (Action (Venue)) – rule for action.
venue (Venue (Day, Season)) - rule for venue.
Now the arguments of the “Venue” could be applied in place of “Venue” in the action rule.
action (Action (Day, Season)) – venue replaced by its own definition.
Similarly, "Day" can be replaced by its definition: if "Day" consists of the arguments "Weather", "Temperature" and "Rain_status", these can be substituted where "Day" occurs within the "Action" rule, giving the following form.
31 Nilshan Devinda
KBS [ J | 601 | 0459]
action (Action (Weather, Temperature, Rain_status), Season) – day replaced by its definition.
This is how the rule came into existence and why it makes sense in terms of logical flow and relations: the combination of weather, temperature and rain status now forms "Day", "Day" combines with "Season" to form "Venue", and "Venue" is the means of reaching the required "Action", provided that a venue is the only way to reach an action. The facts are new, and so is the rule. Even though the new rule is inspired by the pattern of the existing code, it is a stand-alone rule that does not affect the program in any way. In fact, even if the new rule is removed from the code, the program still works, since the new rule did not alter any of the existing facts or complex terms. And although the rule relies on the existing facts and rules, none of the rules in the program relies on the new rule, allowing them to operate independently.
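Continuing the illustrative sketch given after section 2.4 (and again not reproducing the author's figures 9 to 11), the new facts and the stand-alone rule described above could be appended like this; temperature/1 and rain_status/1 are assumed names.

```prolog
% Appended to the earlier sketch; declaring the predicates discontiguous
% avoids a clause-ordering warning when these clauses follow the originals.
:- discontiguous weather/1, action/1.

% New facts describing the extra state of the day (a fuller program would
% keep the current conditions consistent in a single place).
weather(cloudy).
temperature(warm).
rain_status(not_raining).

% New stand-alone rule: nothing in the existing program depends on it, so it
% can be added or removed without altering the original facts and rules.
action(swim) :-
    weather(cloudy),
    temperature(warm),
    rain_status(not_raining),
    season(summer).
```

Deleting this rule again leaves every earlier query working exactly as before, which is the independence property argued above.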
32 Nilshan Devinda
KBS [ J | 601 | 0459]
Task 3

All animals have skin. Fish is one kind of animal, birds are another type and mammals are a third kind. Normally fish have gills and can swim, while birds have wings and can fly. While fish and birds usually lay eggs, mammals do not. Although sharks are fish, they do not lay eggs. They are very dangerous. Salmon is another fish and is considered a delicacy. Canary is a bird and is yellow. Ostrich is a bird, which is very tall, but cannot fly, only walk.

3.1
Represent the above facts and rules in First Order Logic expressions. (LO.2.1)
Figure 12: Semantic Network for Representing Facts and Rules of Animals.
Source: Author’s work.
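Since the submitted answer for 3.1 is presented as the semantic network in figure 12, the lines below sketch, purely as an illustration with assumed predicate names, how some of the same knowledge could be written as First Order Logic sentences. The "normally" qualifiers in the scenario conflict with plain universal implications (for example, ostriches are birds yet cannot fly), so a faithful encoding would treat those rules as defaults with exceptions, as the Prolog program does.

```latex
% Sketch only; predicate names are assumptions, not the submitted answer.
\forall x\,(animal(x) \rightarrow has(x, skin))
\forall x\,(fish(x) \rightarrow animal(x)) \qquad
\forall x\,(bird(x) \rightarrow animal(x)) \qquad
\forall x\,(mammal(x) \rightarrow animal(x))
\forall x\,(fish(x) \rightarrow has(x, gills) \land can(x, swim))
\forall x\,(bird(x) \rightarrow has(x, wings))
fish(shark) \land dangerous(shark) \land \lnot lays(shark, eggs)
fish(salmon) \land delicacy(salmon)
bird(canary) \land colour(canary, yellow)
bird(ostrich) \land tall(ostrich) \land \lnot can(ostrich, fly) \land can(ostrich, walk)
```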
3.2 Write a Prolog program or other third-generation language program and knowledge base to execute this knowledge. (LO.3.1), (LO.2.3)
Figure 13: Relationships among Animals and their Properties (Part 01).
Source: Author’s work.
34 Nilshan Devinda
KBS [ J | 601 | 0459]
Figure 14: Relationships among Animals and their Properties (Part 02).
Source: Author’s work.
35 Nilshan Devinda
KBS [ J | 601 | 0459]
Figure 15: Relationships among Animals and their Properties (Part 03).
Source: Author’s work.
36 Nilshan Devinda
KBS [ J | 601 | 0459]
Figure 16: Relationships among Animals and their Properties (Part 04).
Source: Author’s work.
Figure 17: Relationships among Animals and their Properties (Part 05).
Source: Author's work.
KBS [ J | 601 | 0459]
Figure 18: Relationships among Animals and their Properties (Part 06).
Source: Author's work.

3.3 Using your program answer the following questions (LO.3.2)

Can canaries fly? Yes.
What is the color of canaries? Yellow.
Can ostriches fly? No.
Do canaries have skin? Yes.
Are sharks dangerous? Yes.
The image below demonstrates how the above answers were obtained by running the corresponding queries in the SWI-Prolog IDE.
Figure 19: Demonstration for Task 3.3
Source: Author’s work.
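Because the program itself appears only as the screenshots in figures 13 to 18, the sketch below reconstructs, as an illustration only, a knowledge base that answers the task 3.3 questions in the same way. The predicate names isan/2, have/2, can/2, lay/2, are/2 and iss/2 are taken from Table 2 in section 3.5, but the clause bodies are assumptions rather than the submitted code.

```prolog
% Reconstruction sketch only; the submitted program is in figures 13-18.
:- discontiguous have/2, can/2, lay/2.

isan(fish,    animal).      isan(bird,    animal).      isan(mammal, animal).
isan(shark,   fish).        isan(salmon,  fish).
isan(canary,  bird).        isan(ostrich, bird).

have(animal, skin).         % every animal has skin
have(fish,   gills).        have(bird,   wings).
can(fish,    swim).         can(bird,    fly).          can(ostrich, walk).
lay(fish,    eggs).         lay(bird,    eggs).
are(shark,   dangerous).    are(salmon,  delicious).
iss(canary,  yellow).       iss(ostrich, tall).

% Exceptions to the inherited defaults.
exception(ostrich, can, fly).
exception(shark,   lay, eggs).

% Inheritance: a species has, can do, or lays whatever its type does,
% unless an exception says otherwise.
have(X, P) :- isan(X, T), have(T, P).
can(X, A)  :- isan(X, T), can(T, A), \+ exception(X, can, A).
lay(X, E)  :- isan(X, T), lay(T, E), \+ exception(X, lay, E).

% The task 3.3 queries:
%   ?- can(canary, fly).       % true
%   ?- iss(canary, Colour).    % Colour = yellow
%   ?- can(ostrich, fly).      % false
%   ?- have(canary, skin).     % true
%   ?- are(shark, dangerous).  % true
```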
3.4 Complete the above system test using proper test cases and provide all the test documents and error handling. (LO.3.4) (LO.3.3)

The table below shows the white-box test results from testing the program against the knowledge provided in the Task 3 scenario. The three images that follow show how the test cases were carried out in each phase.

Table 1: White-Box Testing of the Program.

Test Number | Test Phase Name | Test Case | Case Definition | Query Representation | Expected Result | Actual Result | Test Status
01 | Keyword Checking | Checking whether all keywords are being displayed | Display possible predicates in place of variable 'X' | key(X)/1 | List all possible predicates that could be queried | Lists all the predicates that could be queried | Pass
02 | Keyword Checking | Checking whether keywords guide with the format | Display possible predicates for 'X' and its format in place of 'Y' | key(X, Y)/2 | List all possible predicates and their corresponding formats | Successful listing of all predicates and formats | Pass
03 | Type Checking | Checks the type of a given species | Display the type of the species 'canary' | isan(canary, Type)/2 | bird | bird | Pass
04 | Type Checking | Checks the type of a given species | Check if shark is a fish | isan(shark, fish)/2 | True | True | Pass
05 | Property Checking | Check if both the species' unique properties and the inherited properties have been applied | Check if canary has skin | have(canary, skin)/2 | True | True | Pass
06 | Property Checking | Check if both the species' unique properties and the inherited properties have been applied | Check if shark lays eggs | lay(shark, eggs)/2 | False | False | Pass
07 | Property Checking | Check if both the species' unique properties and the inherited properties have been applied | Check if an ostrich can fly | can(ostrich, fly)/2 | False | False | Pass
08 | Property Checking | Check if both the species' unique properties and the inherited properties have been applied | Check if salmon are delicious | are(salmon, delicious)/2 | True | True | Pass
Source: Author’s work.
Figure 20: Keyword phase testing.
Source: Author’s work
Figure 21: Type check phase testing.
Source: Author’s work.
Figure 22: Property check phase testing.
Source: Author’s work.
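The task also asks for error handling. As a hedged illustration of the kind of safeguard that could sit on top of such a knowledge base (this wrapper is not part of the submitted program), a query can be run through catch/3 so that a mistyped functor produces a readable message instead of an unhandled existence error:

```prolog
% Illustrative only; not taken from the submitted program.
safe_query(Goal) :-
    catch(Goal,
          error(existence_error(procedure, Name/Arity), _),
          ( format('Unknown term ~w/~w - type key(X). to list valid terms.~n',
                   [Name, Arity]),
            fail )).

% Example: ?- safe_query(has(canary, skin)).
% reports "Unknown term has/2 ..." because the correct functor is have/2.
```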
3.5 Prepare a user document to illustrate how to work with your implemented system. (LO.3.5)

3.5.1 The User Manual

This document assumes that the user has some basic knowledge of the Prolog environment and of querying. The user must have a working version of SWI-Prolog installed on his/her computer in order to use the program.
Getting Started
Copy the files found on the drive provided into a folder in one of the directories of the computer. Open the SWI-Prolog desktop application and click 'File' in the menu bar at the top of the window. In the menu that drops down, click 'Consult' and wait for the pop-up window that appears. Use the pop-up window to navigate to the copied Prolog files containing the program. Once on the particular path, click the Prolog file that came with the drive and click 'Open' to finish consulting.
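Equivalently, the file can be consulted directly from the query prompt; the file name below is only a placeholder for whichever Prolog file is supplied on the drive:

```prolog
?- consult('kbs_program.pl').   % hypothetical file name; use the supplied file
```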
On-Screen Help
The program comes with a handful of terms packed in, to allow queries to be made against it. To provide on-screen help, the program includes a special keyword that helps the user identify the terms, and the formats, in which queries should be made. Type 'key(X).' in the Prolog querying environment and press Enter. When the first term appears, press semicolon to retrieve further results, until all the possible terms for querying are displayed, as shown in the image below. These terms are meant to be used as the predicates of a query.
Figure 23: Displaying possible terms in the program.
Source: Author’s work.
42 Nilshan Devinda
KBS [ J | 601 | 0459]
Note that the above example uses the keyword with arity 1 (key(X)/1), which means there is only one argument inside the parentheses. To display the format in which queries should be made using the terms shown in the program, use the same keyword with arity 2 (key(X, Y)/2) and replace the first argument with one of the terms displayed in the example above. This displays the correct format for a query involving the term in use, as seen in the next image, which shows how the 'have' term is used to display its format.
Figure 24: Displaying the format for a term in the program.
Source: Author's work.

Figure 24 shows the format in which the 'have' term must be used in order to perform a query against the program. It shows that 'have' must take two arguments inside the parentheses, and that either can be used as a variable provided it starts with an upper-case letter. It further shows that the first argument should be a Species and the second should be a Property of that species. As an example, figure 25 below shows two instances in which this term and format can be used: (1) to find out what a canary has, by using the second argument as a variable, and (2) to ask whether a canary has skin.
Figure 25: Two use-cases of the term 'have'.
Source: Author’s work.
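In text form, the two queries illustrated in figure 25 would look like this at the SWI-Prolog prompt (the exact bindings returned depend on the knowledge base loaded with the program):

```prolog
?- have(canary, What).   % ask what a canary has; binds What to each property
?- have(canary, skin).   % ask whether a canary has skin; answers true or false
```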
Additional Assistance
It is best practice to always use the keyword introduced above as a means of guidance for the program. The table below lists all the terms and formats that can be used to query the necessary information from the program. Note that the data in this table may not match future versions of the program unless this document is updated alongside each modification.
43 Nilshan Devinda
KBS [ J | 601 | 0459]
Table 2: Keywords and Formats.

Functor | Format
key | key(X).
key | key(X, Y).
isan | isan(Species, Type).
have | have(Species, Property).
lay | lay(Species, Property).
are | are(Species, Property).
iss | iss(Species, Property).
can | can(Species, Property).
Source: Author’s work.
APPENDICES

Appendix A: Evolution of Theories on Intelligence

The first person to bring forth a theory of intelligence was Herbert Spencer (1820 – 1903) in the 19th century, who developed a substantive theory that implied individual, racial and species differences in intelligence. He held that intelligence is determined by the quantity and quality of the adaptive associations organisms make with their environment through the continuous adjustment of internal and external relations. Francis Galton (1822 – 1911) and Alfred Binet (1857 – 1911) are, however, also credited as among the first to develop theories of intelligence and instruments to measure it. Galton was Charles Darwin's half cousin. He claimed that sensory acuity was correlated with intelligence and could serve as an indirect method of measuring it, and that intelligence was largely determined by heredity. In a similar manner, Jean Piaget explains that "Intelligence is an adaptation…To say that intelligence is a particular instance of biological adaptation is thus to suppose that it is essentially an organization and that its function is to structure the universe just as the organism structures its immediate environment" and that "Intelligence is assimilation to the extent that it incorporates all the given data of experience within its framework…There can be no doubt either, that mental life is also accommodation to the environment. Assimilation can never be pure because by incorporating new elements into its earlier schemata the intelligence constantly modifies the latter in order to adjust them to new elements". After this initial work on understanding and measuring intelligence, Howard Gardner presented his theory of multiple intelligences in 1983. He viewed intelligence as "the capacity to solve problems or to fashion products that are valued in one or more cultural settings". After Gardner came Robert Sternberg, who proposed the Triarchic theory of intelligence, dividing intelligence into three aspects as mentioned in section 1.1 of this document. Sternberg later argued that individuals who excel in all three subsets of intelligence from his earlier theory can be considered to have successful intelligence; this second theory is known as Sternberg's Theory of Successful Intelligence.
Appendix B: The Game of Go

Go, which is also referred to as Weiqi or Baduk, is a two-player abstract strategy board game whose name means "encircling game". The objective of the game is to surround more territory than the opponent. It originated in ancient China more than 2,500 years ago. The two players place black and white stones on the vacant intersections of the Go board. A standard Go board has 19x19 grid lines, giving a total of 361 intersections, although boards with other grid sizes, such as 9x9, 13x13 and 17x17, also exist. The image below represents a typical 19x19 Go board during gameplay.
The two players place stones alternately until they reach a point at which neither player wishes to make another move; beyond this, the game has no fixed ending condition. The game ends when both players pass, and players pass when there are no more profitable moves to be made. The game is then scored: the player controlling the greater number of points, after factoring in captured stones and Komi, wins. Komi is a number of points added to the score of the player with the white stones as compensation for playing second. When a game concludes, territory is therefore counted together with captured stones and Komi to determine the winner. A game may also be won by resignation; for example, a player who has lost a large group of stones may concede the game to the opponent.
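To illustrate how this scoring rule could be written down as knowledge-base rules, the following is a minimal Prolog sketch. The predicate names (go_score/4, go_winner/6 and compare_scores/3) are hypothetical and are not part of the program described in this document; the sketch assumes Japanese-style counting in which each player's score is territory plus prisoners, with Komi added only to White.

% Minimal sketch of Japanese-style Go scoring (all names are assumed).
% A player's score is territory plus prisoners; Komi is added to White only.
go_score(Territory, Prisoners, Komi, Score) :-
    Score is Territory + Prisoners + Komi.

% go_winner(+BlackTerr, +BlackPris, +WhiteTerr, +WhitePris, +Komi, -Winner)
go_winner(BlackTerr, BlackPris, WhiteTerr, WhitePris, Komi, Winner) :-
    go_score(BlackTerr, BlackPris, 0, BlackScore),     % Black receives no Komi
    go_score(WhiteTerr, WhitePris, Komi, WhiteScore),  % White is compensated with Komi
    compare_scores(BlackScore, WhiteScore, Winner).

compare_scores(B, W, black) :- B > W.
compare_scores(B, W, white) :- W > B.
compare_scores(S, S, jigo).              % equal scores: a drawn game ("jigo")

For example, under these assumptions the query ?- go_winner(56, 3, 48, 5, 6.5, Winner). would bind Winner to white, because 48 + 5 + 6.5 exceeds 56 + 3.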
Appendix B1: Rules of the Game of Go

The rules of Go have varied over time and from place to place; for example, the Chinese rules differ from the Korean and Japanese rules. The game is nevertheless broadly consistent across these communities. The table below lists some of the basic and common Japanese rules of Go, and a short sketch encoding the liberty rule follows it.
Rule: Stones that may exist on the board
Description: After a move is completed, a group of one or more stones belonging to one player exists on its points of play on the board as long as it has a horizontally or vertically adjacent empty point, called a "liberty". No group of stones without a liberty can exist on the board.

Rule: Capture
Description: If, due to a player's move, one or more of his opponent's stones cannot exist on the board, the player must remove all these opposing stones, which are called "prisoners". In this case, the move is completed when the stones have been removed.

Rule: End of the Game
Description:
1. When a player passes his move and his opponent passes in succession, the game stops.
2. After stopping, the game ends through confirmation and agreement by the two players about the life and death of stones and territory. This is called "the end of the game".
3. If a player requests resumption of a stopped game, his opponent must oblige and has the right to play first.

Rule: Determining the Result
Description:
1. After agreement that the game has ended, each player removes any opposing dead stones from his territory as is, and adds them to his prisoners.
2. Prisoners are then filled into the opponent's territory, and the points of territory are counted and compared. The player with more territory wins. If both players have the same amount, the game is a draw, which is called a "jigo".
3. If one player lodges an objection to the result, both players must reconfirm the result by, for example, replaying the game.
4. After both players have confirmed the result, the result cannot be changed under any circumstances.
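As a small illustration of how the liberty and capture rules in the table above could be expressed in a knowledge-based system, here is a minimal Prolog sketch. The board representation and every predicate name (stone/3, adjacent/4, empty/2, has_liberty/2, captured/2) are assumptions made for this example only; for simplicity it checks the liberties of a single stone rather than of a connected group, and board edges are not modelled.

% Hypothetical board facts: stone(X, Y, Colour) records an occupied intersection.
stone(3, 3, black).
stone(3, 4, white).
stone(2, 4, black).
stone(4, 4, black).
stone(3, 5, black).

% Two intersections are adjacent when they differ by one step horizontally or vertically.
adjacent(X, Y, X2, Y) :- X2 is X + 1.
adjacent(X, Y, X2, Y) :- X2 is X - 1.
adjacent(X, Y, X, Y2) :- Y2 is Y + 1.
adjacent(X, Y, X, Y2) :- Y2 is Y - 1.

% An intersection is empty when no stone occupies it (closed-world assumption).
empty(X, Y) :- \+ stone(X, Y, _).

% A stone has a liberty if at least one adjacent intersection is empty.
has_liberty(X, Y) :-
    adjacent(X, Y, X2, Y2),
    empty(X2, Y2).

% A stone with no liberties cannot remain on the board, i.e. it is captured.
captured(X, Y) :-
    stone(X, Y, _),
    \+ has_liberty(X, Y).

With these facts the query ?- captured(3, 4). succeeds, because the white stone at (3, 4) is surrounded by black stones on all four adjacent intersections; a fuller encoding would propagate liberties across whole groups of connected stones of the same colour.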
Appendix C: A Brief History of Reasoning

The table below summarises how formal reasoning methods developed from 450 B.C. to 1965, together with the people credited with each contribution.
Year       Founder        Description
450 B.C.   Stoics         propositional logic, inference (maybe)
322 B.C.   Aristotle      "syllogisms" (inference rules), quantifiers
1565       Cardano        probability theory (propositional logic + uncertainty)
1847       Boole          propositional logic (again)
1879       Frege          first-order logic
1922       Wittgenstein   proof by truth tables
1930       Gödel          ∃ complete algorithm for FOL
1930       Herbrand       complete algorithm for FOL (reduce to propositional)
1931       Gödel          ¬∃ complete algorithm for arithmetic
1960       Davis/Putnam   "practical" algorithm for propositional logic
1965       Robinson       "practical" algorithm for FOL: resolution
Appendix D: Project Management
Appendix D1: The Gantt Chart
Time Duration: 21 days [ 3 Weeks]
[Gantt chart: a three-week schedule (03/07/16 to 23/07/16, Week 1 to Week 3) plotting the phases Chapter 1, Chapter 2, Chapter 3, and Finalization against the project dates.]
Appendix D2: Self-Reflection
As I worked through this assignment, I found that the style of instructing the computer with first-order logic expressions to solve problems is completely different from the paradigms of the other languages I have learned before. This may be because Prolog defines a set of syntactic conventions and rules rather than a large set of language-specific keywords. Having a somewhat perfectionist mindset that pushes me towards a 'do your best or drop it' approach, I at times found it hard to work out how the system could meet the level of quality I had in mind, and I was discouraged by my slow progress as a novice. With much experimenting, however, I learned how to achieve what was expected without compromising it, largely because I refused to give up on my initial idea of the solution, which now leaves me with a strong memory to look back on.

I also found that the language supports several different methods of solving a particular problem, and that each approach, whether easy or difficult, leads to similar results. The approach I took reflects how interested I was in learning the language, and the methods I used will help me regain my understanding of it if I look back at the work later. However, this also led to procrastination, and I felt uncomfortable with the deadlines.

I found many things on the subject interesting, such as the ability to define distinct complex terms by their signatures rather than by predicate names alone. At the same time I still confuse some technical wording; for instance, I find it difficult to differentiate between terms such as 'functor' and 'predicate', because many educational sources use the words interchangeably. I dealt with such terms by researching the area I was working on, classifying the terms, and jotting them down in personal notes so that I would remember them.

It is also my fault that at certain points I argued topics from personal opinion even where there was a lack of evidence to support my conclusions. Such points reflect my limited ability to compare sources critically, and my reluctance to use direct quotations: I relied heavily on my old habit of paraphrasing and risked altering the original meaning of the work I used. I also did not routinely compare the work of different authors so as to include the most timely, accurate, and reliable information in support of my suggestions. Although these shortcomings reduce the quality of my work, the experience of dealing with the challenges raised while completing this assignment has pushed me to become a more critical and reflective learner.
REFERENCES

361 Points. (no date) Game of Go/Baduk/Weiqi. [Online] Available from: http://361points.com/whatisgo/ [Accessed 04th July 2016].

American Go Association. (2014) What is Go. [Online] Available from: http://www.usgo.org/what-go [Accessed 05th July 2016].

Anderson, H., Dieks, D., Gonzalez, W.J., Uebel, T. & Wheeler, G. (ed.) (2013) The Value of Computer Science for Brain Research. In: New Challenges to Philosophy of Science. Philosophy of Science in a European Perspective, 4. New York, Springer, pp. 90-91.

Andrews, R. (2016) Google's AlphaGo beats Go champion 4-1 in Landmark Victory for AI. [Online] Available from: http://www.iflscience.com/technology/googles-alphago-beats-gochampion-4-1-landmark-victory-artificial-intelligence [Accessed 07th July 2016].

Arenas, M., Schneider, P. F. P., Polleres, A., Amato, C. D., Handschuh, S., Kroner, P. & Ossowski, S. (ed.) (2011) Reasoning Web. Semantic Technologies for the Web of Data: 7th International Summer School 2011, Galway, Ireland, August 23-27, 2011, Tutorial Lectures. Information Systems and Applications, incl. Internet/Web, and HCI, 6848. Ireland, Springer Science & Business Media.

Baer, D. (Monday 11th April 2016) Google and Microsoft are making Gigantic Artificial Brains. Business Insider. [Online] Available from: http://www.techinsider.io/google-andmicrosoft-are-making-artificial-brains-2016-4 [Accessed 15th July 2016].

BBC. (Saturday 12th March 2016) Artificial Intelligence: Google's AlphaGo beats Go master Lee Se-dol. BBC News. [Online] Available from: http://www.bbc.com/news/technology35785875 [Accessed 08th July 2016].

Bradley, M.N. (2008) Comparison between Chess and Go. [Online] Available from: http://users.eniinternet.com/bradleym/Compare.html [Accessed 05th July 2016].

Brain Metrix. (2016) Intelligence Definition. [Online] Available from: http://www.brainmetrix.com/intelligence-definition/ [Accessed 03rd July 2016].

Burger, C. (2016) Google DeepMind's AlphaGo: How it works. [Online] Available from: https://www.tastehit.com/blog/google-deepmind-alphago-how-it-works/ [Accessed 06th July 2016].
Cai, C. J. (2013) Adapting Existing Games for Education Using Speech Recognition. Master's thesis. Massachusetts Institute of Technology.

Champandard, A. J. (2002) Reinforcement Learning. [Online] Available from: http://reinforcementlearning.ai-depot.com/ [Accessed 12th July 2016].

Chang, H. M. H. (1995) Relational artificial intelligence system. US 5473732 A (Patent).

Cheng, J., Nara, S. & Goto, Y. (2007) FreeEnCal: A Forward Reasoning Engine with General-Purpose. In: Knowledge-Based Intelligent Information and Engineering Systems. Lecture Notes in Computer Science, 4693. Berlin, Springer Berlin Heidelberg, p. 1.

Finlay, J. & Dix, A. (1996) Reasoning. In: An Introduction To Artificial Intelligence. Boca Raton, CRC Press, p. 33.

Go Game Guru. (no date) What is Go. [Online] Available from: https://gogameguru.com/what-is-go/ [Accessed 04th July 2016].

Gupta, N. (Thursday 28th January 2016) Computer Program Wins the Ancient Game 'GO' Against Professional Human Player. The News Recorder. [Online] Available from: http://www.thenewsrecorder.com/computer-program-wins-the-ancient-asian-game-goagainst-professional-human-player [Accessed 06th July 2016].

Harada, S., Wobbrock, J. & Landay, J.A. (2011) Voice Games: Investigation into the use of Non-Speech voice input for making computer games more Accessible. LNCS, 6946, 11-29. [Online] Available from: https://faculty.washington.edu/wobbrock/pubs/interact-11.pdf [Accessed 14th July 2016].

Hern, A. (2016) Google's Artificial Intelligence Machine to battle Human Champion of 'Go'. [Online] Available from: http://www.theguardian.com/technology/2016/mar/07/goboard-game-google-alphago-lee-se-dol [Accessed 09th July 2016].

Huntington, D. (2011) Back to Basics - Backward Chaining: Expert System Fundamentals. [Online] Available from: http://www.exsys.com/pdf/BackwardChaining.pdf [Accessed 10th July 2016].

Lee, Y. (2016) All about Go, the ancient game in which AI bested master. [Online] Available from: http://phys.org/news/2016-03-ancient-game-ai-bested-master.html [Accessed 07th July 2016].
Lisa Lab. (2010) Deep Learning Tutorials. [Online] Available from: http://deeplearning.net/tutorial/ [Accessed 12th July 2016].

McCarthy, J. (2001) What is Artificial Intelligence. [Online] Available from: http://lidecc.cs.uns.edu.ar/~grs/InteligenciaArtificial/whatisai.pdf [Accessed 03rd July 2016].

Pandey, A. (Saturday 12th March 2016) Against World champion Lee Sedol. IB Times. [Online] Available from: http://www.ibtimes.com/google-deepminds-alphago-wins-5match-go-series-third-straight-victory-against-world-2335217 [Accessed 09th July 2016].

Postma, E. (2016) Deep Learning: The Third Neural network wave. [Online] Available from: https://www.tilburguniversity.edu/research/institutes-and-research-groups/data-sciencecenter/blogs/data-science-blog-eric-postma/ [Accessed 16th July 2016].

Smith, D. (Monday 21st April 2016) Microsoft's Incredible new app helps blind people see the world around them – take a look. Business Insider. [Online] Available from: http://www.techinsider.io/microsoft-seeing-ai-app-photos-video-2016-4 [Accessed 23rd July 2016].

Sporka, A.J., Kurnlawan, S.H., Mahmud, M. & Slavik, P. (2006) Non-Speech Input and Speech Recognition for Real-time control of Computer Games. Czech Technical University in Prague & University of Manchester.

Sternberg, R. (1988) The Triarchic Mind: A New Theory of Intelligence. New York, Viking Press.

Umich. (no date) Knowledge Acquisition. [Online] Available from: http://groups.engin.umd.umich.edu/CIS/course.des/cis479/lectures/es-ka.html [Accessed 10th July 2016].

VFU. (2016) Artificial Intelligence and Expert Systems: Knowledge-Based Systems. [Online] Available from: http://vfu.bg/en/e-Learning/Artificial-Intelligence--AI_and_ES_Nowledge_base_systems.pdf [Accessed 11th July 2016].

Whitney, L. (2016) AlphaGo wins a close one to wrap up battle of man vs machine. [Online] Available from: http://www.cnet.com/news/google-alphago-artificial-intelligence-victor-ingame-of-man-vs-machine/ [Accessed 06th July 2016].
Wu, S., Pique, H. & Wieland, J. (2016) Using Artificial Intelligence to Help Blind People ‘See’ Facebook. [Online] Available from: http://newsroom.fb.com/news/2016/04/usingartificial-intelligence-to-help-blind-people-see-facebook/ [Accessed 19th July 2016].
BIBLIOGRAPHY

Ajlan, A. A. (2015) The Comparison between Forward and Backward Chaining. International Journal of Machine Learning and Computing, 5 (2), 106 – 113.

Barros, L. N. & Trevizan, F. W. (2015) Reachability-based model reduction for Markov decision process. Journal of the Brazilian Computer Society, 21 (5), 1-16.

Cohen, L., Pooley, J. A., Stewart, A. C., Penner, L. A., Roy, E. J., Bernstein, D. A., Provost, S., Gouldthorp, B. & Cranney, J. (2013) Cognitive Abilities: Intelligence and Intelligence testing. In: Psychology: An International Discipline in Context: Australian & New Zealand Edition PDF. Australia, Cengage Learning Australia, p. 383.

Dean, T. & Kanazawa, K. (1989) A model for reasoning about persistence and causation. Computational Intelligence, 5 (2), 142 – 150.

Dubois, D. & Prade, H. (1991) Fuzzy sets in approximate reasoning, Part 1: Inference with possibility distributions. Fuzzy Sets and Systems, 40 (1), 143 – 202.

Muller, V. C. (ed.) (2013) Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics, 5. Berlin, Springer-Verlag.

Norman, G. R., Brooks, L. R., Colle, C. L. & Hatala, R. M. (1999) The Benefit of Diagnostic Hypotheses in Clinical Reasoning: Experimental Study of an Instructional Intervention for Forward and Backward Reasoning. Cognition and Instruction, 17 (4), 433 – 448.

Sharma, T., Tiwari, N. & Kelkar, D. (2012) Study of Difference between Forward and Backward Reasoning. International Journal of Emerging Technology and Advanced Engineering, 2 (10), 271 – 273.
Stallman, R. M. & Sussman, G. J. (1977) Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence, 9 (2), 135 – 196.

Yang, S., Nagamachi, M. & Lee, S. (1999) Rule-based inference model for the Kansei Engineering System. International Journal of Industrial Ergonomics, 24 (5), 459 – 471.