A semiconductor integrated circuit (IC) has a large number of electronic components, such as transistors, logic gates, diodes, wires, etc., that are fabricated by forming layers of different materials and of different geometric shapes on various regions of a silicon wafer. The design of an integrated circuit transforms a circuit description into a geometric description called a layout. The process of converting the specifications of an integrated circuit into a layout is called physical design.
Chip designers often use electronic design automation (EDA) software tools to assist in the design process. Chip design using EDA software tools generally involves an iterative process in which the chip design is progressively refined. A top-down design methodology is commonly employed using hardware description languages (HDLs), such as Verilog or VHDL, by which the designer creates an integrated circuit by hierarchically defining functional components of the circuit, and then decomposing each component into smaller and smaller components.
The various components of an integrated circuit are initially defined by their functional operations and relevant inputs and outputs. From the HDL or other high-level description, the actual logic cell implementation is typically determined by logic synthesis, which converts the functional description of the circuit into a specific circuit implementation. The logic cells are then “placed” (e.g., given specific coordinate locations in the circuit layout) and “routed” (e.g., wired or connected together according to the designer's circuit definitions). The placement and routing software routines generally accept as their input a flattened netlist that has been generated by the logic synthesis process. This flattened netlist identifies the specific logic cell instances from a target standard cell library, and describes the specific cell-to-cell connectivity.
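As an illustrative sketch only (not the data model of any particular EDA tool), a flattened netlist can be thought of as a set of cell instances drawn from a standard cell library, a set of nets describing cell-to-cell connectivity, and per-instance placement coordinates. All class, instance, and cell-master names below are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CellInstance:
    name: str                  # instance name, e.g. "U1"
    cell: str                  # standard-cell library master, e.g. "NAND2_X1"
    x: Optional[float] = None  # placement coordinates; unset until placement
    y: Optional[float] = None

@dataclass
class Netlist:
    instances: dict = field(default_factory=dict)
    nets: dict = field(default_factory=dict)  # net name -> list of (instance, pin)

    def add_instance(self, name, cell):
        self.instances[name] = CellInstance(name, cell)

    def connect(self, net, inst, pin):
        # record that pin `pin` of instance `inst` attaches to net `net`
        self.nets.setdefault(net, []).append((inst, pin))

    def place(self, name, x, y):
        # "placing" a cell assigns it specific coordinates in the layout
        inst = self.instances[name]
        inst.x, inst.y = x, y

# Build a tiny flattened netlist and place its two cells.
nl = Netlist()
nl.add_instance("U1", "NAND2_X1")
nl.add_instance("U2", "INV_X1")
nl.connect("n1", "U1", "ZN")   # NAND output drives net n1
nl.connect("n1", "U2", "A")    # inverter input receives net n1
nl.place("U1", 0.0, 0.0)
nl.place("U2", 1.4, 0.0)
```

Routing would then wire the pins on each net together; here the sketch stops at connectivity plus coordinates, which is the state the placement and routing routines described above operate on.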
In the area of custom designs, the user can directly and interactively perform placement and routing manually, e.g., by manipulating the locations of objects by hand. In addition, after an initial placement/routing pass, whether automated or interactive, the layout may still undergo further interactive changes to the existing placement of objects, potentially inserting or moving objects in areas of the design where other objects have already been placed.
When an object is moved into an area of the design that is already populated with other objects, the objects already in that area may need to be displaced to make room for the new object. The conventional approach to this issue is a sequence of manual steps: the user selects objects that have already been placed at the target location and moves them to create space for the object being moved, and then interactively re-places the displaced objects. This conventional approach is subject to numerous problems. First, it requires numerous manual steps to be performed by the user for each object; in modern electronic designs that place large numbers of components onto a layout, a large number of objects may potentially need to be moved, requiring a significant number of manual steps. In addition, it is challenging to identify DRC (design rule checking)-correct locations for the moved objects. These difficulties are further magnified because modern designs at advanced process nodes include very high instance densities as well as very complex spacing and object-snapping rules.
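The displacement problem described above can be sketched in one dimension: inserting a new cell into an already-placed row and shifting any cells it would overlap to the right, while honoring a minimum-spacing rule. This is a minimal illustration, not any tool's actual algorithm; the coordinates, widths, and spacing value are hypothetical:

```python
MIN_SPACING = 0.1  # hypothetical minimum-spacing design rule between cells

def insert_cell(row, new_cell):
    """Insert new_cell into a placed row, displacing overlapping cells.

    row:      list of (x, width) tuples for already-placed cells
    new_cell: (x, width) for the cell being inserted
    Returns a new row, sorted by x, with overlaps resolved by shifting
    cells rightward past the inserted cell (spacing rule preserved).
    """
    x_new, w_new = new_cell
    placed = []
    cursor = x_new + w_new + MIN_SPACING   # first legal x after the new cell
    for x, w in sorted(row):
        if x + w + MIN_SPACING <= x_new:
            placed.append((x, w))          # entirely clear of the new cell: keep
        else:
            x = max(x, cursor)             # displace rightward to next legal spot
            placed.append((x, w))
            cursor = x + w + MIN_SPACING   # subsequent cells shift in a chain
    placed.append((x_new, w_new))
    return sorted(placed)

# Inserting at x=2.0 displaces the cell already there, which in turn
# pushes its right-hand neighbor: a chain of forced re-placements.
result = insert_cell([(0, 1), (2, 1), (4, 1)], (2.0, 1.0))
```

Even this toy version shows why manual re-placement scales poorly: one insertion can ripple through many downstream objects, and a real flow must additionally check two-dimensional spacing, snapping grids, and other design rules at every displaced location.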
Therefore, there is a need for an improved approach to placement that addresses at least the aforementioned shortcomings of conventional approaches.