Azalia Mirhoseini^1, Anna Goldie^2,3, Mustafa Yazgan^4, Joe Wenjie Jiang^5, Ebrahim Songhori^5, Shen Wang^5, Young-Joon Lee^4, Eric Johnson^5, Omkar Pathak^4, Azade Nazi^5, Jiwoo Pak^4, Andy Tong^4, Kavya Srinivasa^4, William Hang^6, Emre Tuncer^4, Quoc V. Le^5, James Laudon^5, Richard Ho^4, Roger Carpenter^4, Jeff Dean^5.
Abstract
Chip floorplanning is the engineering task of designing the physical layout of a computer chip. Despite five decades of research^1, chip floorplanning has defied automation, requiring months of intense effort by physical design engineers to produce manufacturable layouts. Here we present a deep reinforcement learning approach to chip floorplanning. In under six hours, our method automatically generates chip floorplans that are superior or comparable to those produced by humans in all key metrics, including power consumption, performance and chip area. To achieve this, we pose chip floorplanning as a reinforcement learning problem, and develop an edge-based graph convolutional neural network architecture capable of learning rich and transferable representations of the chip. As a result, our method utilizes past experience to become better and faster at solving new instances of the problem, allowing chip design to be performed by artificial agents with more experience than any human designer. Our method was used to design the next generation of Google's artificial intelligence (AI) accelerators, and has the potential to save thousands of hours of human effort for each new generation. Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.
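The abstract describes an edge-based graph convolutional network over the chip netlist. The sketch below illustrates the general idea of edge-centric message passing on a small toy graph: edge embeddings are computed from concatenated endpoint features, and each node then aggregates the mean of its incident edge messages. All names, dimensions, and weight matrices here are hypothetical placeholders for illustration, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "netlist" graph: 4 macros/cells (nodes); edges are wires between them.
num_nodes, node_dim, edge_dim = 4, 8, 8
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

node_feats = rng.normal(size=(num_nodes, node_dim))
W_edge = rng.normal(size=(2 * node_dim, edge_dim))  # hypothetical learned weights
W_node = rng.normal(size=(edge_dim, node_dim))      # hypothetical learned weights


def edge_gcn_layer(node_feats, edges):
    """One round of edge-centric message passing (illustrative sketch)."""
    # 1) Edge embeddings from the concatenated features of each edge's endpoints.
    edge_emb = np.array([
        np.concatenate([node_feats[u], node_feats[v]]) @ W_edge
        for u, v in edges
    ])
    edge_emb = np.maximum(edge_emb, 0.0)  # ReLU nonlinearity

    # 2) Each node averages the messages from its incident edges.
    new_nodes = np.zeros_like(node_feats)
    counts = np.zeros(len(node_feats))
    for (u, v), e in zip(edges, edge_emb):
        msg = e @ W_node
        new_nodes[u] += msg
        new_nodes[v] += msg
        counts[u] += 1
        counts[v] += 1
    return new_nodes / counts[:, None], edge_emb


updated_nodes, edge_emb = edge_gcn_layer(node_feats, edges)
print(updated_nodes.shape, edge_emb.shape)  # (4, 8) (4, 8)
```

In an RL placement setting, such edge and node embeddings would feed a policy that places one macro at a time; reusing the learned representation across netlists is what lets the agent transfer experience to new chips.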
Year: 2021 PMID: 34108699 DOI: 10.1038/s41586-021-03544-w
Source DB: PubMed Journal: Nature ISSN: 0028-0836 Impact factor: 49.962