As handheld electronic devices, such as mobile telephone handsets, electronic game controllers, and the like, increase in prevalence and processing power, displays for such devices are becoming larger, more complex, and more power-hungry. For example, many existing electronic devices are equipped with touch-screens to facilitate the entry of input despite the size-constrained nature of the associated devices. However, touch-screens and similar input mechanisms utilize a large amount of power for both output (e.g., lighting) and input activity, which results in reduced battery life for devices that utilize such mechanisms. Further, existing electronic devices generally rely on an activity-based and/or time-based mechanism to determine whether to provide lighting to a device display, which can result in excess power usage during periods where a user is not actively viewing the display and/or otherwise actively using the device.
In addition, due to the limited form factor of handheld electronic devices, controls (e.g., buttons, dials, etc.) for such devices are traditionally either optimized for only one of left-hand use or right-hand use, or configured such that manual intervention is required to change from a left-handed orientation to a right-handed orientation or vice versa. As a result, traditional handheld device controls can lead to reduced usability, a loss in functionality, and/or potential safety risks (e.g., safety risks caused by a user being required to swap hands while driving). While some existing electronic devices utilize mechanisms such as level and/or orientation sensing for control adaptation, these existing mechanisms do not perform well in scenarios where a device is held in a substantially vertical or flat orientation. Accordingly, it would be desirable to implement input/output mechanisms for handheld devices that mitigate at least the above shortcomings.
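To illustrate why gravity-based orientation sensing degenerates in the vertical and flat cases described above, the following is a minimal sketch of tilt-based handedness inference from a 3-axis accelerometer. The function name, axis convention, and 15-degree threshold are assumptions for illustration, not any particular device's API: when the device lies flat, gravity falls almost entirely along the z-axis, and when it is held upright face-on, the roll component vanishes, so neither posture yields a usable left/right cue.

```python
import math

def infer_handed_orientation(ax, ay, az, tilt_threshold_deg=15.0):
    """Infer left/right tilt from a 3-axis accelerometer reading
    (ax, ay, az in units of g). Returns "left", "right", or None when
    the reading is ambiguous. Hypothetical illustration only."""
    # Roll about the device's long axis, from the sign/size of ax vs. az.
    roll_deg = math.degrees(math.atan2(ax, az))
    # Gravity component in the screen plane; near zero when the device
    # lies flat (gravity almost entirely along z).
    planar_g = math.hypot(ax, ay)
    if planar_g < math.sin(math.radians(tilt_threshold_deg)):
        return None  # flat on a table: no left/right cue available
    if abs(roll_deg) < tilt_threshold_deg:
        return None  # held upright face-on or nearly level: ambiguous
    return "left" if roll_deg > 0 else "right"

# Tilted toward the right hand: clear signal.
print(infer_handed_orientation(ax=-0.4, ay=0.1, az=0.9))   # → right
# Lying flat: gravity ~ all along z, reading is ambiguous.
print(infer_handed_orientation(ax=0.0, ay=0.02, az=1.0))   # → None
# Held substantially vertical, face toward the user: also ambiguous.
print(infer_handed_orientation(ax=0.0, ay=-1.0, az=0.0))   # → None
```

The two `None` cases correspond directly to the vertical and flat postures in which, per the passage above, existing level/orientation-sensing mechanisms fail to adapt controls reliably.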