With the evolution of modern data communications networks, vast amounts of digital content can now be readily transferred amongst end users, media content providers, and network service providers, at relatively high data transfer rates and from almost any geographic location. Whether digital content distribution occurs over wireline networks, such as fiber-optic and cable networks, or over wireless networks, such as 3G, WiMAX, LTE, LTE Advanced, or other 4G cellular networks, the task of increasing communications service capability and maximizing the utilization of existing network communications resources remains a key objective for most network service providers.
Consumer exposure to state-of-the-art digital media content distribution and playback technologies (e.g., tablet computers, netbook and laptop computers, multi-function cellular phones, personal digital assistant (PDA) devices, electronic book devices, portable gaming devices, etc.) has created a significant demand for improved digital content delivery capability and services. Unfortunately, most service providers have struggled to provide sufficient communications infrastructure to keep up with this growing consumer demand. Presently, there are many different types of data communications networks available that can function independently (e.g., as Local Area Networks or LANs) or collectively as part of a group of interconnected networks (e.g., Wide Area Networks or WANs), such as the World Wide Web. Some of these networks include technologies that facilitate relatively fast, high data rate transmissions (e.g., fiber-optic, cable, and digital subscriber line (DSL) networks), while others can only facilitate much slower data rate transmissions (e.g., 3G cellular networks). Regardless of a network's type, topology, or employed technologies, almost all modern-day networks are susceptible to congestion or degradation due to the high demand for transferring ever-growing volumes of digital content between and amongst various network nodes.
As would be understood by those skilled in the art, network congestion generally refers to a state of data transfer overload (a load that burdens network capacity) between links in a data communications network. These heavy loads typically degrade a network's Quality of Service (QOS) and network users' Quality of Experience (QOE). Some negative effects of network congestion, affecting QOS/QOE, may include queuing delay, packet loss, and the blocking of new and existing connections.
Mobile broadband services are becoming very popular in modern society, and almost every teenager and adult in the U.S. owns at least one wireless communications device (e.g., a cellular phone or PDA device). These services can provide a way for individuals to stay connected to the Internet while operating within and roaming between various wireless coverage areas. A concurrent trend is the huge increase in applications and media content distribution services that can facilitate the delivery of large, burdensome media content files to or from user equipment. Large media content file transfers characteristically consume significant amounts of network resources (i.e., channel bandwidth) over extended periods of time. Further, as is the case with many portable wireless computing devices, these burdensome media content deliveries often consume significant amounts of device resources (i.e., battery power, processor power, volatile and non-volatile memory, etc.). Methods of making such deliveries less burdensome to the network and its communicating devices are very important to end users, network providers, and service providers alike. One factor that should be considered in determining how to mitigate congestion and network resource waste is tracking user-specific resource usage patterns. This information can be analyzed and utilized in developing improved solutions for burdensome media content delivery.
It has long been observed that humans are creatures of habit. Our daily lives are filled with routines and schedules that rarely vary much within set boundaries. As an example, during a given work week, a typical commuter may travel, more or less, the same physical path from home to work at around the same times of day. As another example, the use of expendable resources that people use on a daily basis is often very predictable. Individuals turn household devices on or off at similar times of the day, and they also tend to use communications and computing devices at predictable times and locations (e.g., utilizing both wireline and wireless broadband communications networks). Each of these resource usage events generally falls in line with an individual's daily resource usage patterns, which are defined by personal tendencies, habits, and/or routines.
By way of further example, consider modern computing networks. People tend to use computing resources in defined patterns, which may be based on hourly, daily, weekly, monthly, and even annual usage activities, owing to relatively fixed commuting, work, and lifestyle patterns. However, as individuals move about, it is rarely the case that their access to computing networks is of constant quality and/or efficiency. For example, a user may commute to work each day by train or by bus, and that individual may routinely utilize their laptop computer or PDA device to connect to a particular wireless cellular network (e.g., a 3G or 4G cellular network) to check their work and personal email accounts, while in transit. However, when the user arrives at their office, he or she may routinely connect to the office's local area network to continue using the Internet on a more robust network type (e.g., a fiber broadband network). At the end of the day, the same user may commute back home, temporarily resuming communications via a cellular network (while in transit), and then connect to their personal broadband network after arriving back home (e.g., a cable or WiFi™ network). Each of these networks is likely to have different capacity and capability characteristics that its users can utilize to access desired network services. When considering media content deliveries, it would be very beneficial to know, in advance, which network and device resources would likely be available to a user within a predefined period of time from the present moment.
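The commuter scenario above amounts to predicting which network a user will most likely have available at a given time of day from historical observations. A minimal sketch of such a predictor is shown below; the record format, network names, and frequency-count approach are illustrative assumptions, not part of the original description.

```python
from collections import Counter, defaultdict

def predict_network(history, hour, top_n=1):
    """Predict which network(s) a user is most likely connected to at a
    given hour of day, based on past (hour_of_day, network_name) records.
    Returns up to top_n network names, most frequent first."""
    by_hour = defaultdict(Counter)
    for h, network in history:
        by_hour[h][network] += 1
    counts = by_hour.get(hour)
    if not counts:
        return []  # no historical observations for this hour
    return [name for name, _ in counts.most_common(top_n)]

# Hypothetical commuter pattern: cellular in transit, office LAN by day,
# home broadband in the evening.
history = [(8, "4G"), (8, "4G"), (9, "office-LAN"), (9, "office-LAN"),
           (9, "4G"), (18, "4G"), (19, "home-cable")]
```

A real system would likely condition on day of week and location as well; the hour-of-day count here only illustrates the idea of mining habitual usage for prediction.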
In general, what might be deemed burdensome to one network type (e.g., a 3G cellular network), such as sending or receiving a large data file, may be relatively less so for another type of network (e.g., most fiber or cable networks). Accordingly, it would be beneficial to be able to optimize network usage by steering burdensome resource usage towards networks that are relatively better suited to handle data delivery tasks that are determined to be burdensome for one or more network resources. To facilitate this functionality, it would be desirable to be able to determine the capabilities of presently available networks as well as to be able to predict soon-to-be-available networks (based on individual historical usage patterns). If burdensome resource usage could be allocated to more capable networks, the capacity of relatively less capable networks could be preserved for higher priority data communications tasks, such as voice data communications. This would benefit the users of less robust networks by reducing congestion and thereby improving QOS and network users' collective QOE.
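The steering policy described above can be sketched as a simple selection rule: small transfers stay on the current network, while transfers deemed burdensome are directed to the most capable available network. The size threshold, field names, and capacity figures below are illustrative assumptions.

```python
BURDENSOME_MB = 100  # illustrative threshold for a "large" transfer

def choose_network(transfer_mb, networks):
    """Steer burdensome transfers toward the most capable available
    network, preserving less capable networks for higher-priority
    traffic such as voice.

    networks: list of dicts with hypothetical fields
      "name" and "capacity_mbps"; networks[0] is the current/default."""
    if transfer_mb < BURDENSOME_MB:
        return networks[0]["name"]  # small transfer: no need to steer
    return max(networks, key=lambda n: n["capacity_mbps"])["name"]

# Hypothetical available networks for a user at the office.
networks = [{"name": "3G-cell", "capacity_mbps": 5},
            {"name": "fiber", "capacity_mbps": 500}]
```

A production policy would also weigh cost, current load, and the predicted availability window of each network, but the capacity-ranked choice captures the core idea.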
Another example of how resource conservation could be achieved is by monitoring user device resources (e.g., battery power, processing power, available memory, transmitter status such as 3G/WiFi, etc.) in conjunction with network resource usage patterns. This is particularly important for modern portable user equipment having media download and playback capability (e.g., tablet computers, netbook and laptop computers, multi-function cellular phones, PDA devices, electronic book devices, portable gaming devices, etc.). If it were known in advance that 1) a user equipment were low on some predefined device resource(s) (e.g., when the device was in a low battery power state); 2) the user equipment was currently roaming; and 3) the user equipment would likely be at a home location within a short period of time (where it could be docked/charged), it may be beneficial to defer or throttle media content deliveries in order to conserve device resources until a time when the user equipment was in a charging state. This could allow a user equipment to delay burdensome operations until a time when battery power could be conserved, such that device battery life could be prolonged and a user could be spared the frustration of running out of battery power while roaming.
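The three-condition deferral test described above (low battery, roaming, charging location predicted soon) can be expressed directly as a predicate. The threshold values and parameter names here are illustrative placeholders, not values taken from the original description.

```python
def should_defer_delivery(battery_pct, roaming, minutes_to_home,
                          battery_threshold=20, home_window=60):
    """Decide whether to defer a large media content delivery.

    Defers only when all three conditions from the scenario hold:
    1) the device is low on battery (<= battery_threshold percent),
    2) the device is currently roaming, and
    3) a charging location is predicted within home_window minutes
       (minutes_to_home may be None when no prediction is available)."""
    low_battery = battery_pct <= battery_threshold
    home_soon = minutes_to_home is not None and minutes_to_home <= home_window
    return low_battery and roaming and home_soon
```

The `minutes_to_home` input would come from the kind of historical usage-pattern prediction discussed earlier; when no prediction is available the delivery proceeds rather than being deferred indefinitely.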
This coordinated conservation could reduce the effect that the media content transfer would have on the user equipment during periods when one or more resident device resources was in a state of resource exhaustion (e.g., low battery power, an overburdened processor, or reduced free memory, etc.). By selectively coordinating data content deliveries towards periods when resident device resources are not in a reduced state, more important processes supported by the user equipment (e.g., voice communications, texting, web browsing, etc.) could be prioritized, until a time when sufficient resources become available (e.g., when a user equipment is plugged into a local power supply) for lower priority media content delivery tasks.
Another example of the need for correlating usage patterns to resource usage relates to how communications networks are provisioned to handle periods of peak loading. In general, cellular communications networks are allocated resources based on observed, fixed usage patterns relating to peak periods of network user activity. What is needed is a way of refining these estimates in real time based on determined usage patterns and the ability to predict and proactively make decisions about resource allocation for a given network cell or a group of cell sites. This could allow a network service provider to learn, in advance, when one or more network cells may need additional resources to handle a forecast load.
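The cell-provisioning refinement described above reduces, in its simplest form, to forecasting per-cell load for a given hour from historical samples and flagging when the forecast exceeds the cell's capacity. The sample format, the safety-margin multiplier, and the capacity figure below are illustrative assumptions.

```python
from statistics import mean

def forecast_cell_load(samples, hour, capacity, headroom=1.2):
    """Forecast the load on a network cell for a given hour and flag
    whether additional resources should be provisioned.

    samples: dict mapping hour_of_day -> list of observed loads (e.g., Mbps).
    capacity: the cell's provisioned capacity in the same units.
    headroom: multiplicative safety margin on the historical mean.

    Returns (expected_load, needs_more_capacity); expected_load is None
    when no historical samples exist for that hour."""
    observed = samples.get(hour, [])
    if not observed:
        return None, False
    expected = mean(observed) * headroom
    return expected, expected > capacity
```

A deployed forecaster would update these estimates in real time as new samples arrive, rather than relying on a fixed historical window, which is precisely the refinement the passage calls for.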
Accordingly, it would be beneficial to have improved systems and methods for data content delivery that could optimize resource usage by analyzing historical usage information in order to direct resource usage toward times when resource consumption is relatively less burdensome to a particular network or device resource. It would further be advantageous if these systems and methods could operate by automatically detecting, coordinating, and delivering burdensome media content to one or more end receiving device(s), such that a typical user would be unaware of how these underlying resource optimization processes functioned. As a result, an average network user's QOE should improve, while the underlying processes facilitating the improvement would remain transparent. It would further be desirable if these systems and methods could provide predictive alerts to the user or to autonomous resource managers when a proposed resource use is likely to exceed a resource threshold or require additional capacity. This would facilitate a monitoring entity making important, real-time decisions about how to best utilize limited resources based on predefined rules and/or priorities, as well as user preferences. These solutions would require observing and recording habitual usage patterns to achieve resource conservation goals that depend on being able to predict where and when users are likely to consume network and device resources.
By aligning media content delivery sessions with historical network usage, service providers would be able to maximize network resource utilization at all times and to prioritize some data communications processes over others (e.g., bulk media content transfers would typically be lower priority data transfers). It would also be helpful if these systems and methods facilitated real-time monitoring of resources, such that when local resources (e.g., battery power, processor usage, available memory, etc.) were in a state of resource exhaustion, a media content delivery could be throttled or halted until the resources were replenished or otherwise became available to the user equipment. These dynamic solutions could be utilized to mitigate situations where large media content deliveries would otherwise degrade or impair communications for networks having lesser resource availability.
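The throttle-or-halt behavior described above can be sketched as a rate-selection function evaluated against the device's current resource state. All thresholds, parameter names, and the base rate below are illustrative assumptions.

```python
def delivery_rate(battery_pct, cpu_load, free_mem_mb, full_rate_kbps=5000):
    """Choose a delivery rate for a bulk media transfer from the current
    device resource state: halt when a resource is exhausted, throttle
    when a resource is strained, otherwise deliver at full rate.

    cpu_load is a utilization fraction in [0.0, 1.0]."""
    if battery_pct < 10 or free_mem_mb < 50:
        return 0                    # halt until resources are replenished
    if battery_pct < 30 or cpu_load > 0.8:
        return full_rate_kbps // 4  # throttle to a quarter rate
    return full_rate_kbps           # resources healthy: full rate
```

In practice this check would be re-evaluated periodically during a transfer, so a delivery halted on low battery resumes automatically once the user equipment enters a charging state.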