Decision-making processes are important because without them, solving complicated problems would be nearly impossible. These processes are used in everything from deciding which job to apply for, to cars autonomously deciding how to drive safely down the road, to robots deciding how to pick up an item and sort it into the correct location. The most important factor among these decision-making problems is limited information, or more specifically the concept of bounded rationality. Coupled with the development of Artificial Intelligence, this turns a very human problem into a grey area of AI and machine issues. A detailed analysis of the interactions between bounded rationality and Artificial Intelligence will show the limitations and freedoms of both systems.

The concept of bounded rationality is based on the idea that rational people don't always have what they need to make the most favorable decisions. It breaks down into three core concepts. First, only a certain amount of information and a limited set of alternative choices are accessible when a decision must be made. Second, the mind has a limited capacity to evaluate and process the information that is available. Third, only a limited amount of time is available to make the decision (BusinessDictionary.com). Limited information has consequences for such people as those applying for jobs through AI screeners.

Geoffrey Hinton is a professor at the University of Toronto. His team built several multi-layered neural networks in the 1980s, but they lacked the computing power that became available around 2005-2010. Once they had this capability, these artificial neural networks were able to flourish. Three examples of neural nets in action are the iPhone's speech recognition (Siri), Google Translate, and Tesla's semi-autonomous driving mode. According to Hinton, "The brain is a big neural network. If you want to understand something really complicated such as a brain, you should build one" (Bloomberg Businessweek).

AI is the ability of computers to replicate human thought or human behavior. This technology is being used in applications ranging from human resources screeners to consumer products and commercial warehouses, all the way to military operations. As shown in the video "Artificial Intelligence: The Robots Are Now Hiring," the company HireVue has been rating job candidates on micro-expressions, tone of voice, and the specific words they choose, using facial analysis software. HireVue sells AI tools to companies looking to hire. These tools act as an assistant human resources screener, cutting down on the amount of time and effort it takes to find a qualified job candidate. According to Kevin Parker, the CEO of HireVue, their AI software "removes human biases from the hiring process" (WSJ Video).

Upon further review of AI, however, these biases can't be entirely removed. A bias is when someone's ideas are slanted in one specific direction, which disallows a neutral viewpoint. From a cultural standpoint, bias also draws from belonging to ethnic groups, social classes, or religions. The problem is that AIs are created from human inputs, and all people have biases of some form or another, whether it is a government's bias against news organizations or people's biases around political affiliation, religion, gender, or advertising. So even if the goal is to remove 100% of all biases, this isn't possible, because of the limited and biased inputs that are given to the AI by its creators. Another perfect example of this limitation is using facial expressions as a criterion for hiring someone. What facial expressions would make the perfect customer service rep, auto service manager, or computer engineer? The answer wouldn't be the same for everyone: some people are friendlier and smile more, while others are naturally grumpier and frown more often.
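The "biased inputs produce biased outputs" point can be sketched with a toy example. The data, group names, and scoring rule below are hypothetical illustrations, not HireVue's actual system: a screener that simply learns hiring rates from skewed historical decisions reproduces that skew.

```python
from collections import defaultdict

def train_screener(history):
    """Learn the historical hire rate for each group.

    `history` is a list of (group, hired) pairs -- a toy stand-in
    for the past human decisions an AI screener is trained on.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

# Skewed historical decisions: group A was hired far more often than B.
history = [("A", True)] * 8 + [("A", False)] * 2 \
        + [("B", True)] * 2 + [("B", False)] * 8

rates = train_screener(history)
# The "AI" scores candidates by their group's historical hire rate,
# so the human bias in the training data passes straight through.
print(rates["A"])  # 0.8
print(rates["B"])  # 0.2
```

Nothing in the code is prejudiced; the skew lives entirely in the inputs, which is exactly why removing "100% of all biases" is not possible by automating the decision.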
The next piece of the puzzle is whether these AIs have been screened by the government to ensure they operate within the scope of the law. According to Bloomberg Businessweek, AI can be used in many different types of applications, and it is best used where many repetitive actions are needed. The question that needs to be asked when using AI is who will be held accountable if the Artificial Intelligence makes the wrong life-or-death decision, specifically in the case of critical infrastructure, or in the case of military drone strikes, where real people's lives are in the balance.

Many problems come up when dealing with decision making, and the number one issue is bounded rationality. Within modern society, Artificial Intelligence decision making faces the exact same issue. The concept breaks down into three parts: limited information, limited capacity to evaluate, and limited time to make a choice. All three are present whenever either a human or an AI makes a decision.
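These three limits can be made concrete with a small sketch. This is an illustrative model of Herbert Simon's "satisficing" idea, not something drawn from the cited sources: a decision-maker that accepts the first option clearing a threshold, instead of exhaustively searching for the optimum.

```python
def bounded_choice(options, score, good_enough, budget):
    """Satisficing under bounded rationality.

    - limited information: only the options actually examined are known
    - limited capacity: `score` is a cheap heuristic, not a full evaluation
    - limited time: at most `budget` options are examined
    """
    best = None
    for i, option in enumerate(options):
        if i >= budget:            # limited time: stop evaluating
            break
        s = score(option)          # limited capacity: rough estimate only
        if s >= good_enough:       # satisfice: take the first acceptable option
            return option
        if best is None or s > score(best):
            best = option
    return best                    # fall back to the best option seen so far

# Toy job search: salary is the score, and only 3 postings can be checked.
postings = [48_000, 52_000, 61_000, 90_000, 75_000]
choice = bounded_choice(postings, score=lambda x: x,
                        good_enough=60_000, budget=3)
print(choice)  # 61000 -- acceptable, even though 90000 existed
```

The point of the sketch is that a "good enough" answer is rational behavior under these constraints, and an AI operating on limited data, limited evaluation capacity, and limited time is bounded in exactly the same way.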