Monday, 3 December 2012

Ants and Bees


ANTS
Ants find the shortest path from their nest to food and adapt to changes in the environment, all without visual sensing. They instinctively follow the path with the strongest pheromone, and when the pheromone has deteriorated they fall back on information gathered from previous expeditions to guide their searching. This is how they cope with environmental changes. They achieve this through multi-agent cooperation, using pheromones to modify the environment as a form of indirect communication.
Technique
·         Leave the nest
·         Check for pheromone
·         Follow the path with the strongest pheromone OR check stored information to find the last place there was food AND take the shortest route (shorter routes should have more pheromone)
·         Walk in that direction, leaving pheromone as you walk
·         Keep track of the path travelled to count its length
·         Find food and return to nest
Using an ant algorithm allows complex tasks to be completed through self-organisation, with only minimal individual intelligence.
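To make the loop above concrete, here is a minimal sketch of the idea in Python (my own toy version, not taken from any particular paper): two candidate routes, a path choice weighted by pheromone level, evaporation, and a deposit that is larger for shorter paths. All the constants are invented.

```python
import random

# Two routes from nest to food; shorter routes get bigger pheromone deposits.
EVAPORATION = 0.05  # fraction of pheromone lost per trip (assumed value)

paths = {"short": {"length": 4, "pheromone": 1.0},
         "long":  {"length": 9, "pheromone": 1.0}}

def choose_path():
    """Pick a path with probability proportional to its pheromone level."""
    total = sum(p["pheromone"] for p in paths.values())
    r = random.uniform(0, total)
    for name, p in paths.items():
        r -= p["pheromone"]
        if r <= 0:
            return name
    return name

def run_trip():
    name = choose_path()
    # Evaporate everywhere, then deposit on the travelled path:
    for p in paths.values():
        p["pheromone"] *= 1 - EVAPORATION
    paths[name]["pheromone"] += 1.0 / paths[name]["length"]  # shorter => more

for _ in range(200):
    run_trip()
print({n: round(p["pheromone"], 2) for n, p in paths.items()})
```

After a few hundred trips the short route has accumulated far more pheromone, so almost every ant ends up taking it, without any single ant knowing which route is shorter.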

BEES
Another algorithm inspired by insect behaviour is the bee algorithm. Developed in 2005, its goal is to work out the best solution to a problem. It's essentially a form of swarm/herd intelligence because it mimics a group of individuals working together to solve a problem. It can be very useful in finding the best option when there are lots of possibilities to choose from.
Scout
Send out some scouts that search randomly, trying to find things that will be useful (relevance, effectiveness etc.). They then go back to tell the hive, which decides whether it's worth taking a better look, based on which site is most likely to have the best resources out of the array.
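A toy sketch of this scout/recruit idea in Python, under my own assumptions (the fitness function, site counts and neighbourhood size are all invented for illustration, not values from the 2005 paper):

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2  # toy problem: the best "site" is x = 3

# Scouts sample the search space at random (global search):
scouts = [random.uniform(-10, 10) for _ in range(20)]
best_sites = sorted(scouts, key=fitness, reverse=True)[:3]

for _ in range(50):  # rounds of local search around the promising sites
    new_sites = []
    for site in best_sites:
        # Recruit foragers to explore the neighbourhood of each good site:
        foragers = [site + random.uniform(-0.5, 0.5) for _ in range(10)]
        new_sites.append(max(foragers + [site], key=fitness))
    best_sites = new_sites

print(max(best_sites, key=fitness))  # should end up close to 3.0
```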

Monday, 5 November 2012

Scripting


Why use a scripting language instead of hard coding it in to the engine?
Well, most game logic can be scripted, keeping the game code and the engine code separate. This way it's easier to find and modify things, plus you can re-use the engine code for another game. There are other pluses too. In the industry it can take quite a while to see your changes in the game because you would have to recompile the entire game each time (Heavy Rain had 4 million lines of code). But if you use scripting you can see the effects in game immediately. Even more efficient is the possibility of teaching non-programming roles how to modify bits of code, so they can tweak things themselves without having to involve a programmer.
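As a toy illustration of the separation (not how a production engine actually does it; real engines usually embed a language like Lua), here the "engine" function stays fixed while the game rules live in a script that can be edited and reloaded without recompiling anything:

```python
# The scripted part a designer can tweak; the engine never needs rebuilding.
GUARD_SCRIPT = """
def decide(distance_to_player, health):
    if health < 25:
        return "flee"
    if distance_to_player < 10:
        return "attack"
    return "patrol"
"""

def load_script(source):
    """Engine-side loader: compile the script and hand back its entry point."""
    scope = {}
    exec(source, scope)  # a real engine would use Lua/Python bindings instead
    return scope["decide"]

decide = load_script(GUARD_SCRIPT)  # reloadable at runtime, no recompile
print(decide(distance_to_player=5, health=80))  # -> attack
print(decide(distance_to_player=5, health=10))  # -> flee
```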

Making realistic actors
In a film or book you have no control over the content, and so there is no agent. You watch the characters make decisions and stick to their roles, which gives the illusion of personality. However, in a game you have more control due to the interactive nature of the gameplay. This means you're the agent; you make all the decisions, which is why, even though the character's history might be known, you do not have any attachment to the character you're playing. In The Sims, a mother is playing with her baby. You tell the mother to feed the baby. The mother then puts the baby on the floor (to end the last animation) and then picks the baby up to feed him (beginning the next animation). This is very unbelievable!

So how can we make believable actors if they don't make any decisions?
The key to this is believability, although it must be understood that there is a difference between realism and belief. Bugs Bunny is a believable character, but he certainly isn't realistic. The idea is that if you can imagine what a character could be doing right now, then it must have a believable personality. However, if you can only think of the character as an empty husk, then the believability has obviously failed. Again, if you have made all the decisions for the character, then you have nothing to latch onto about the character's personality. You have to be able to create characters that feel like they have responsibility over their own actions and yet allow us to also be responsible.
One way to fix this is by using procedural movement and behaviour. Combine multiple movements by overlapping the animations to produce a more believable action. A good example would be walking while reaching for something in a cupboard, rather than walking to the cupboard, stopping, then reaching for the door. You can also use pseudo-random movement, such as shifting weight or blinking, to create a continuous presence. The point is to use more non-linear animations.
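A minimal sketch of the overlap idea, with invented numbers and a "pose" reduced to a single arm angle: the reach animation is crossfaded in during the final part of the walk instead of waiting for the walk to end.

```python
import math

def walk_arm_angle(t):
    return 20 * math.sin(t * 4)  # arm swing while walking

REACH_ANGLE = 80.0  # arm raised toward the cupboard

def blended_arm_angle(t, blend):
    """blend = 0 -> pure walk, 1 -> pure reach; crossfade in between."""
    return (1 - blend) * walk_arm_angle(t) + blend * REACH_ANGLE

# Start raising the arm during the last second of the approach:
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    blend = min(max(t - 1.0, 0.0), 1.0)  # ramp the blend in from t=1.0 to t=2.0
    print(f"t={t}: arm={blended_arm_angle(t, blend):.1f} deg")
```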

Monday, 15 October 2012

Finite State Machines


I read the chapter on finite state machines in AI for Game Development (a book on the recommended reading list), which explains a simple model for a state machine depicting the ghosts from Pac-Man. I decided to update the example using a situation common to a more modern game. Imagine a guard patrolling a camp. His behaviour can be depicted using this same FSM model.



There are 3 states and 8 transitions. When all the guards in the camp have this kind of behaviour it can appear quite complex and give the illusion of intelligence. If we assume the patrol state is the default, which continues while there are no enemies, then when a new enemy is seen the guard switches to the attack state and remains there while the enemy is close by. Once he has lost too much health he calls for backup by switching to the final state. If his health has regenerated enough, or he has been healed etc., he switches back to the attack state and continues the battle. Once all enemies are disposed of, the guard returns to the default state to patrol his camp.
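A sketch of this guard FSM in Python (the health thresholds are my own guesses; the model above only says "too much health lost" and "regenerated enough"):

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    ATTACK = auto()
    CALL_BACKUP = auto()

def next_state(state, enemy_visible, health):
    if state is State.PATROL:
        return State.ATTACK if enemy_visible else State.PATROL
    if state is State.ATTACK:
        if not enemy_visible:
            return State.PATROL       # all enemies gone: back to patrolling
        if health < 25:
            return State.CALL_BACKUP  # too hurt to fight alone
        return State.ATTACK
    # CALL_BACKUP:
    if health >= 50:
        return State.ATTACK           # healed enough to rejoin the battle
    if not enemy_visible:
        return State.PATROL
    return State.CALL_BACKUP

state = State.PATROL
for enemy, hp in [(False, 100), (True, 100), (True, 20), (True, 60), (False, 60)]:
    state = next_state(state, enemy, hp)
    print(state.name)  # PATROL, ATTACK, CALL_BACKUP, ATTACK, PATROL
```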

Monday, 1 October 2012

Can we use the Turing Test for Games?



Although the Turing Test is well known as a measure of human intelligence, it is no longer referred to as comprehensive. Instead it is thought of as insufficient to cover all aspects of what we now consider human intelligence to be. However, game AI is not required to actually be intelligent; it only has to demonstrate the illusion of intelligence to the player(s). For this reason any AI test in a game setting should not search for a result that indicates intelligence, but should instead look for overall gameplay improvement. Therefore a Turing-style test focusing on believability rather than intelligence could be applicable in the game industry, assuming it was part of a more complete test.
Games are currently known for their ‘bad’ AI, where situations occur like killing a patrolling guard who dies loudly but doesn’t attract the attention of the guard a few feet away, or a persistent opponent that attacks your base in the same place each time despite being constantly defeated. These unrealistic conditions can take the player out of the game, but implementing behaviourism could help with this. Using neural networks as a method to train, or ‘condition’, agents may be a solution. AI opponents could then learn from their mistakes and, for example, attack bases from other angles to avoid troop losses. Behaviourism relies on the theory that only the observable behaviour should be analysed, rather than the internal thought process behind it. This is useful in the industry because all the player sees is the behaviour; we don’t necessarily need to know why the AI reacted a certain way, just that it did. Behaviourism is currently being used to train speech/conversation AI to beat the Turing Test and win the Loebner Prize, measured using lexical semantic analysis. So as academic research progresses, so might the AI in games. Although this technique isn’t frequently used in the game industry yet, it is becoming more common.
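As a sketch of what ‘conditioning’ an opponent might look like, here is a toy one-state, bandit-style learner (my own choice of technique, not one named in the papers cited here): the AI keeps attacking a base from one of three angles, is punished when a route keeps failing, and gradually shifts to the others.

```python
import random

angles = ["north", "east", "west"]
value = {a: 0.0 for a in angles}  # learned value of each attack angle
LEARNING_RATE = 0.1
EPSILON = 0.2                     # chance to explore a random angle

def reward(angle):
    return -1.0 if angle == "north" else 1.0  # the north route is defended

for _ in range(500):
    if random.random() < EPSILON:
        angle = random.choice(angles)       # explore
    else:
        angle = max(angles, key=value.get)  # exploit the best known angle
    # Nudge the learned value toward the observed outcome:
    value[angle] += LEARNING_RATE * (reward(angle) - value[angle])

print({a: round(v, 2) for a, v in value.items()})  # north scores lowest
```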
Theoretically any game can be tested in this way, because every game is a form of imitation game. For example, can a player tell the difference between an AI and humans when running around a virtual world? Or can a player distinguish whether they are playing against a human or an AI opponent in a strategy game? But it’s not as simple as that. The problem with testing for believability is that it’s subjective. People can have altered perceptions based on cultural differences, as discovered by Mac Namee (2004), who came across the following scenario:
‘In one test, subjects are shown two versions of a virtual bar populated by AI agents. In one version the behaviour of the agents is controlled in a deliberate manner because they try to satisfy long-term goals – buy and drink beer, talk to friends, or go to the toilet. In the second version the agents randomly pick new short-term goals every time they complete an action. One Italian subject felt that having agents return to sit at the same table time after time was unrealistic, whereas the other (Irish) subjects mostly felt that this behaviour was more believable. Sadly, further tests to determine whether this – or other – results are indeed culturally significant have yet to be carried out; the possibility that such differences exist does appear quite likely, however.’
Similarly, Mac Namee also points out that players new to a game can have a different experience of the AI, due to lack of exposure to the game environment, compared to a veteran of the game. The veterans know exactly what they’re looking for and have more time to concentrate on the AI because other game-related skills are second nature.
It is possible that the player is concentrating on the game so much that they don’t notice the AI, or in some genres they may not get enough interaction with the AI to make a firm judgement. A way to fix this is by making the AI more obvious. Butcher and Griesemer (2002) found that players need to be given exaggerated visual cues in order to even notice the AI. But surely if the AI is exaggerated and, for example, over-emotive, it becomes less realistic and less human? On the other hand, it is possible to overdo the AI and confuse the judge by over-exaggerating a common mistake that humans make. This is exactly what McGlinchey and Livingstone (2004) found when their Pong-playing AI moved the bat in a slow, jerky way (by accident): players often mistakenly thought it was human. Although some people suggest having an observer as a judge to alleviate these issues, it should be remembered that the aim is to please the player, not the people watching.
Laird and van Lent (1999) conducted a test in which observers judged players of mixed skill playing against the Soar Quakebot, and found that the opponent was considered ‘more human’ if it had a slower reaction speed, some inaccuracy when shooting at a target, and tactical reasoning (although none of the observers thought the AI was actually human). The results of this test led them to devise 3 simple principles (a rough code sketch of them follows the list):
·         AI should have human-like reaction speeds, not super-fast ones
·         AI should not have ‘superhuman’ abilities like overly accurate aiming
·         AI should have some kind of strategic reasoning so it doesn’t just react to each situation
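Here is a rough sketch of how those three principles might look in code; the delay, spread and thresholds are guesses on my part, not values from Laird and van Lent’s paper.

```python
import random

REACTION_DELAY = 0.25  # seconds before the bot responds, not instant

def delayed_reaction(time_seen, now):
    """Principle 1: only react once a human-like delay has elapsed."""
    return (now - time_seen) >= REACTION_DELAY

def noisy_aim(target_x, target_y, spread=2.0):
    """Principle 2: aim near the target, not perfectly on it."""
    return (target_x + random.gauss(0, spread),
            target_y + random.gauss(0, spread))

def pick_goal(health, ammo):
    """Principle 3: a crude long-term goal rather than pure reflexes."""
    if health < 30:
        return "retreat to health pack"
    if ammo < 5:
        return "detour for ammo"
    return "push toward the enemy"

print(delayed_reaction(time_seen=0.0, now=0.1))  # False: still "noticing"
print(noisy_aim(100.0, 50.0))
print(pick_goal(health=20, ammo=10))
```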
One of the obvious flaws of the original Turing Test is that it gives a very black-or-white result: it has either been passed or it hasn’t. But in a creative industry there needs to be more flexibility. There have been attempts to score the AI on a scale of different features. Using the following parameters, formulated by Daniel Livingstone by analysing game AI for common ‘bad AI’ situations, it could be possible to devise a series of believability tests covering different aspects (see the scorecard sketch after the list):
·         Plan
o   demonstrate some degree of strategic/tactical planning
o   be able to coordinate actions with player/other AI
o   not repeatedly attempt a previous, failed, plan or action
·         Act
o   act with human-like reaction times and abilities
·         React
o   react to players’ presence and actions appropriately
o   react to changes in their local environment
o   react to presence of foes and allies
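One way to get the graded result argued for above, rather than pass/fail, might be a simple scorecard over these parameters; the 0-5 scale and equal weighting below are my own invention, not Livingstone’s.

```python
# Livingstone's plan/act/react parameters as gradeable criteria:
CRITERIA = {
    "plan.strategic_planning":   "demonstrates strategic/tactical planning",
    "plan.coordination":         "coordinates actions with player/other AI",
    "plan.no_repeated_failures": "doesn't repeat a failed plan or action",
    "act.human_like":            "acts with human-like reactions and abilities",
    "react.to_player":           "reacts appropriately to the player",
    "react.to_environment":      "reacts to changes in the local environment",
    "react.to_foes_allies":      "reacts to presence of foes and allies",
}

def believability_score(ratings):
    """Average of per-criterion ratings, each judged on a 0-5 scale."""
    return sum(ratings[k] for k in CRITERIA) / len(CRITERIA)

ratings = {k: 3 for k in CRITERIA}        # a judge's scores would go here
ratings["plan.no_repeated_failures"] = 1  # e.g. it keeps attacking the same spot
print(f"believability: {believability_score(ratings):.1f} / 5")
```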
It seems to me that a Turing-style test has many flaws, and many researchers advocate it so strongly that they seem desperate for it to work. Would it be accurate to say the only reason this kind of test is such mainstream knowledge is simply that there is no alternative to replace it? Maybe researchers should stop trying to find a modification that works and start again in the search for a suitable, comprehensive AI test. In conclusion, I feel that a Turing-style test is inappropriate for use in a game setting, even as part of a more complete test, because of its problematic testing parameters and the ambiguity of its results.

BUTCHER, C. AND GRIESEMER, J. 2002. The illusion of intelligence: The integration of AI and level design in Halo. Presented at the Game Developers Conference (San Jose, CA, March 21-23, 2002).
LAIRD, J. E. AND VAN LENT, M. 1999. Developing an artificial intelligence engine. In Proceedings of the Game Developers Conference (San Jose, CA, Nov. 3-5, 1999).
LIVINGSTONE, D. 2006. Turing’s Test and Believable AI in Games. (PDF) Last accessed: 26/09/12. Available online at: http://www.acso.uneb.br/marcosimoes/Arquivos/IA/games_3.pdf
MAC NAMEE, B. 2004. Proactive persistent agents: Using situational intelligence to create support characters in character-centric computer games. Ph.D. dissertation, Dept. of Computer Science, University of Dublin, Dublin, Ireland.
MCGLINCHEY, S. AND LIVINGSTONE, D. 2004. What believability testing can tell us. In CGAIDE, Proceedings of the Conference on Game AI, Design and Education (Microsoft Campus, Redding, WA).
TURING, A. M. 1950. Computing machinery and intelligence. Mind LIX, 236 (1950), 433-460.
WETZEL, B. 2004. Step one: Document the problem. In Proceedings of the AAAI Workshop on Challenges in Game AI (San Jose, CA, July 25-26, 2004).