Reading the Expert's & AI's Minds (1)


Unfortunately, the coronavirus pandemic in Japan remains serious. In Tokyo, approximately 150 people are confirmed infected every day. The total number of infected people nationwide exceeds 10,000, and the death toll currently stands at 224.

However, these figures are suspect. Because the Japanese government and medical community are extremely afraid of a collapse of the medical system, they limit opportunities for Japanese people to get tested. Only patients in serious condition are allowed to take PCR tests. Therefore, the real numbers of infected patients (and deaths) are likely much higher than the official statistics.

Though the Japanese government has declared a state of emergency, there are no strict stay-at-home guidelines. Also, neither employers nor employees can expect financial support from the government, so many people continue working outside the home. Some schools and universities have been closed.

In any case, more and more Japanese will be infected with the virus and develop symptoms in the future. As a Japanese citizen, I must admit that the Japanese government and people are completely lacking in risk and crisis management....

By the way, as I posted before, I am currently undertaking two big research projects. One concerns the cognitive mechanisms of expert judgment and decision-making. Fortunately, I have a wonderful opportunity to study with the Hokoshinkai, one of the largest acupuncture societies in Japan. We are now investigating how highly experienced acupuncture doctors apply logic and intuition to diagnose patients. This is a unique, special, and practical research topic for us.

The other concerns Explainable Artificial Intelligence (XAI). As you may know, XAI is a major research program conducted by the Defense Advanced Research Projects Agency (DARPA). One of the most significant aspects of XAI is that it can show its thought process and serve as a decision aid for human decision-makers.

However, several questions remain: 'How do humans explain things such as events, phenomena, and actions?' 'How do humans interpret explanations?' 'How can AI researchers and cognitive psychologists instill human-like thought processes into XAI?' etc....

The key point of XAI research is human-centered computing. This means the decision-makers are humans, not AI; AI serves humans as a decision aid. Also, many people do not recognize that the Naturalistic Decision Making (NDM) community has contributed greatly to DARPA's XAI project. Without understanding the cognitive mechanisms of how humans explain things and how humans understand explanations, AI researchers cannot develop useful XAI.

There are several information sources I strongly recommend. One of them is the presentation 'Explainable AI' by Dr. David W. Aha of the Naval Research Laboratory. He explains the basic mechanisms of XAI and the future direction of the research program. At the close of the presentation, he introduces Dr. Robert Hoffman, who has contributed to the project through a cognitive-psychological approach. Dr. Hoffman is a leader of the NDM community, and he always shares useful information with me.

You can watch the presentation on YouTube.


©2020 Dr. Jun Nara, Sky Business Co., Ltd. All rights reserved.