Modern software applications have become highly complex to accommodate the multifaceted needs of users and the increasingly complex computing systems on which the applications run. While this complexity provides desired functionality and utility, it often causes an overall reduction in performance, and in particular, the speed at which an application operates on a computing device. Many of these applications are intended to run on limited or constrained computing devices (e.g., mobile devices, etc.) and thus, the requisite computational power is not always available to ensure that the application performs optimally.
To improve computational performance and alleviate competition among application code for the computing resources of various limited or constrained computing devices, parallelization techniques may be implemented to distribute workloads among the computing resources of the various computing devices. For instance, software developers may attempt parallelization by using a technique called speculative multithreading. Speculative multithreading is a dynamic parallelization technique that depends on out-of-order execution to achieve speedup on multiprocessor CPUs. Speculative multithreading, however, involves executing threads before it is known whether a thread will be needed at all. This process of speculative threading may, in turn, consume processing, energy, and memory resources of a computing device that may not need the speculatively executed threads. Additionally, in speculative threading, there is no guarantee that threads executed beforehand will actually provide the processor speedup necessary to alleviate the slowdown of a processor during execution of a computing resource-intensive application. Also, many implementations of speculative threading result in threading with multiple inaccuracies (e.g., issues with the correctness of threaded code). Thus, while a speedup of code may sometimes be achieved using such methods, at runtime, an application or program associated with the code may exhibit numerous execution issues, such as crashing and the like.
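The wasted-work drawback of speculative execution described above can be illustrated with a minimal sketch. The function names and the branch bodies below are hypothetical, chosen only for illustration: both possible branch results are computed in parallel before the controlling condition is known, and the result of one thread is discarded.

```python
import concurrent.futures

def branch_a(x):
    return x * 2       # work that is needed only if the condition turns out true

def branch_b(x):
    return x + 100     # work that is needed only if the condition turns out false

def speculative_eval(x, condition_fn):
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        # Speculatively start both branches before the condition is resolved.
        fut_a = pool.submit(branch_a, x)
        fut_b = pool.submit(branch_b, x)
        needed = condition_fn(x)  # the condition is only resolved afterward
        # One speculative result is used; the other thread's processing,
        # energy, and memory consumption is wasted.
        return fut_a.result() if needed else fut_b.result()

print(speculative_eval(5, lambda v: v > 3))  # → 10 (branch_b's work was wasted)
```

In a real speculative multithreading system this transformation happens at the hardware or runtime level rather than in application code, but the resource cost of mis-speculation is the same.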
Automatic parallelization is another technique that may be used to improve program execution speed and computational performance. Automatic parallelization often relieves programmers of the manual parallelization process. However, automatic parallelization is typically performed only on source code and currently does not include processes or techniques for parallelizing other forms of code (e.g., intermediate representation (IR)/binary code, bytecode, etc.) outside of source code.
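The source-level transformation that an automatic parallelizer performs can be sketched as follows. This is a hypothetical illustration, not any particular tool's output: a loop whose iterations are independent of one another is rewritten so that the same loop body is distributed across worker processes, with no change required from the programmer.

```python
from multiprocessing import Pool

def _square(v):
    # The loop body, factored out so it can run in a worker process.
    return v * v

def serial_square_all(values):
    # Original source code: each iteration is independent of the others,
    # which is what makes the loop safe to parallelize automatically.
    return [v * v for v in values]

def parallel_square_all(values, workers=4):
    # The transformed version a parallelizing tool might emit: the same
    # independent iterations mapped across a pool of worker processes.
    with Pool(processes=workers) as pool:
        return pool.map(_square, values)

if __name__ == "__main__":
    data = list(range(8))
    # Both versions produce identical results; only the schedule differs.
    assert serial_square_all(data) == parallel_square_all(data)
```

Because such tools operate on constructs visible in source code (loops, data dependences between iterations), the equivalent analysis on IR, binary code, or bytecode is not available in typical implementations, which is the gap noted above.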
Thus, there is a need in the computer processing field to create an improved computer processing technique involving parallelization or the like. The inventions described herein provide such improved computer processing techniques.