Hiroshi Yamakawa, the chairperson of WBAI, was interviewed by the Future of Life Institute (FLI). The interview was published online on October 23, 2017, under the title “Understanding Artificial General Intelligence — An Interview With Hiroshi Yamakawa” (in English (summary and full text) and Japanese).
The following are the major questions from the interview, including topics on the WBA approach and WBAI:
- Why did the Dwango Artificial Intelligence Laboratory make a large investment in [AGI]?
- What is the advantage of the Whole Brain Architecture approach?
- You also started a non-profit, the Whole Brain Architecture Initiative. How does the non-profit’s role differ from the commercial work?
- What do you think poses the greatest existential risk to global society in the 21st century?
- What do you think is the greatest benefit that AGI can bring society?
- Assuming no global catastrophe halts progress, what are the odds of human-level AGI in the next 10 years?
- Once human-level AGI is achieved, how long would you expect it to take for it to self-modify its way up to massive superhuman intelligence?
- What probability do you assign to negative consequences as a result of badly done AI design or operation?
- Is it too soon for us to be researching AI Safety?
- Is there anything you think that the AI research community should be more aware of, more open about, or taking more action on?
- Japan, as a society, seems more welcoming of automation. Do you think the Japanese view of AI is different than that in the West?