Currently, many industries are developing artificial intelligence software and decision-matrix protocols to evaluate scenarios and determine the best course of action. In the future, neither probability nor complexity will be a match for such tools: one will be able to ask a question and receive the best available answer almost instantly. Even NASA scientists are now developing such software, which will be able to evaluate options for mining life-support materials, colony-building materials, and refueling in lunar factories.
The most advanced of these artificially intelligent decision-making software systems can already rate and compare more than five different types of lunar or Martian base-station manufacturing systems, weighing the components of each to find the best possible choices. In the future, more and more criteria will be added to ensure the best possible decision for the situation, for instance, using the Moon as a platform for manufacturing in space to serve the needs of crewed Mars exploration.
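At its core, the rating-and-comparing described above is a weighted decision matrix: each candidate system is scored against a set of criteria, and the scores are combined using weights that reflect each criterion's importance. The sketch below illustrates the idea; the candidate names, criteria, scores, and weights are all hypothetical, not drawn from any actual NASA evaluation.

```python
# Minimal sketch of a weighted decision matrix. All candidates, criteria,
# scores, and weights below are illustrative assumptions only.

# Candidate lunar manufacturing systems, scored 1-10 on each criterion.
candidates = {
    "System A": {"mass": 7, "power": 5, "reliability": 9, "cost": 4},
    "System B": {"mass": 5, "power": 8, "reliability": 6, "cost": 7},
    "System C": {"mass": 8, "power": 6, "reliability": 7, "cost": 6},
}

# Relative importance of each criterion (weights sum to 1).
weights = {"mass": 0.3, "power": 0.2, "reliability": 0.35, "cost": 0.15}

def total_score(scores, weights):
    """Weighted sum of one candidate's criterion scores."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank candidates from best to worst by weighted score.
ranked = sorted(candidates,
                key=lambda name: total_score(candidates[name], weights),
                reverse=True)
for name in ranked:
    print(name, round(total_score(candidates[name], weights), 2))
```

Adding a new criterion is just another column in each candidate's score dictionary plus a weight, which is why such systems can absorb "more and more criteria" without changing the underlying method.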
Indeed, such systems will be good templates for future artificially intelligent decision-matrix systems, which NASA can use to determine how best to use the materials, elements, and compounds on other planets as mankind expands its horizons. Since such AI decision programs are also being designed to make business decisions, NASA should be able to evaluate the choices without the human politics of choosing systems.
Mixing politics, science, and business often invites problems in bidding and design contracts, which are prone to corruption simply because humans are involved. Those who design such AI decision systems will need to consider the manipulation of criteria, and how even people of the greatest integrity might justify it when seeking financial gain or scientific status among peers.
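The manipulation-of-criteria risk is concrete: in a weighted decision matrix, a small nudge to a single weight can flip which candidate wins, which is exactly the lever a motivated insider would reach for. The sketch below shows this with two hypothetical vendors and made-up scores; none of the names or numbers come from any real procurement.

```python
# Illustrative sketch of weight sensitivity in a decision matrix.
# Both vendors and all scores are hypothetical assumptions.

candidates = {
    "Vendor X": {"performance": 9, "cost": 4},
    "Vendor Y": {"performance": 6, "cost": 8},
}

def winner(w_perf):
    """Top candidate given a performance weight; cost gets the remainder."""
    w_cost = 1.0 - w_perf
    def score(s):
        return w_perf * s["performance"] + w_cost * s["cost"]
    return max(candidates, key=lambda name: score(candidates[name]))

# With balanced weights Vendor Y wins; tilt performance slightly and
# Vendor X wins instead, with no change to the underlying scores.
print(winner(0.5))
print(winner(0.6))
```

This is why designers of such systems would need to log and audit weight changes, not just the final rankings.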
These decision-matrix systems can take the “human element” out of such decisions while still leaving the negative innate characteristics of the species free to play out in less important ones, so that people still feel in control and keep their peace of mind. Undoubtedly, those who program such systems will need to anticipate human animosity when people question the decision process and the AI system's conclusions.
Can humans design a decision-making system that they will trust and believe? Will these decision-matrix systems stand the test of human scrutiny? Human psychology predicts that if a person has no way out, has something to prove to save face, or needs to be respected to fulfill a personal desire, there will be friction with AI decision-making. Perhaps the biggest question is the interaction itself: humans must learn to trust such systems without attempting to manipulate them to serve their will at the expense of the mission. Think on this.