An interview with Hiroshi Yamakawa, chairperson of our NPO, conducted by the U.S. Future of Life Institute (FLI), was recently published on October 23 under the title "Understanding Artificial General Intelligence — An Interview With Hiroshi Yamakawa" (Japanese version; English summary and full versions).


  • Why did the Dwango Artificial Intelligence Laboratory make a large investment in [AGI]?
  • What is the advantage of the Whole Brain Architecture approach?
  • You also started a non-profit, the Whole Brain Architecture Initiative. How does the non-profit’s role differ from the commercial work?
  • What do you think poses the greatest existential risk to global society in the 21st century?
  • What do you think is the greatest benefit that AGI can bring society?
  • Assuming no global catastrophe halts progress, what are the odds of human level AGI in the next 10 years?
  • Once human level AGI is achieved, how long would you expect it to take for it to self-modify its way up to massive superhuman intelligence?
  • What probability do you assign to negative consequences as a result of badly done AI design or operation?
  • Is it too soon for us to be researching AI Safety?
  • Is there anything you think that the AI research community should be more aware of, more open about, or taking more action on?
  • Japan, as a society, seems more welcoming of automation. Do you think the Japanese view of AI is different than that in the West?

FLI is a volunteer-run, Boston-based research support organization that works to mitigate existential risks to humanity, including those posed by artificial intelligence. It was founded by Skype co-founder Jaan Tallinn, MIT cosmologist Max Tegmark, and others.

