
On DRC Cleanness of Cell Porting for Design Migrations in Foundries and Technologies

Ching-Ying Wang, Chen-Ho Chen, Po-Hsiang Chang, Chien-Yu Hsieh, Ching-Feng Su, Scott Ji, Chien-Nan Jimmy Liu, Hung-Ming Chen

2024 International VLSI Symposium on Technology, Systems and Applications (VLSI TSA), 2024

National Yang Ming Chiao Tung University

Abstract
Modern design migration, including standard cell library generation, requires fast turnaround time from one node to another to keep products competitive. In such a scenario, we aim to preserve the original topology and inherit the original design intention, as well as retain predictable performance. Since conventional handcrafted redesign of a standard cell library requires considerable engineering effort and design time, efficiently migrating/porting cell libraries while following tedious design rules becomes crucial. This work exploits the nature of DRC reports generated by a commercial DRC tool and presents an automatic standard cell layout migration framework that efficiently migrates cell libraries when switching foundries and technologies. The experimental results show that the cell layout topology is well preserved in a leading foundry's 28nm technology and a 0.18um automotive high-voltage technology, and DRC cleanness is successfully achieved.
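The abstract describes an iterate-until-clean flow: run DRC, read the violation report, and adjust the ported layout (via a constraint graph, per the keywords) until the report comes back empty. The following minimal Python sketch illustrates that loop under stated assumptions; the Violation fields, the single-rule run_drc stand-in, and the one-directional legalize pass are hypothetical illustrations, not the paper's actual framework or any commercial DRC tool's interface.

# Minimal sketch of a DRC-report-driven migration loop (hypothetical: the
# Violation fields, the single spacing rule, and the one-pass legalizer are
# illustrative assumptions, not the paper's framework or a real tool's API).
from dataclasses import dataclass

@dataclass
class Violation:
    rule: str       # e.g. "M1.SPACING"
    left: str       # left/lower shape of the violating pair
    right: str      # right/upper shape of the violating pair
    required: int   # spacing required by the target technology (nm)
    actual: int     # spacing measured in the current layout (nm)

def run_drc(positions, rules):
    """Stand-in for a commercial DRC run: report one spacing rule's violations."""
    names = sorted(positions, key=positions.get)
    return [Violation("M1.SPACING", a, b, rules["M1.SPACING"],
                      positions[b] - positions[a])
            for a, b in zip(names, names[1:])
            if positions[b] - positions[a] < rules["M1.SPACING"]]

def legalize(positions, violations):
    """Resolve spacing violations by pushing the right-hand shape of each
    violating pair, visiting pairs in left-to-right (constraint-graph) order."""
    for v in sorted(violations, key=lambda v: positions[v.left]):
        positions[v.right] += v.required - v.actual
    return positions

# Toy cell ported into a target technology with a larger minimum spacing (nm).
positions = {"A": 0, "B": 200, "C": 380}
rules = {"M1.SPACING": 230}
while (violations := run_drc(positions, rules)):
    positions = legalize(positions, violations)
print(positions)  # {'A': 0, 'B': 230, 'C': 460}

In this toy run the loop needs two DRC/legalize iterations before the report is empty, mirroring the iterate-until-clean idea; a real flow would handle many rule types, both dimensions, and corner-stitched layout data rather than scalar shape positions.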
Key words
migration, cell layout, standard cell, design rules, corner stitch, constraint graph