A service developer can create a service using one or more computing devices in a first “developer” network and deploy the service into a second “production” network, where it can be hosted by one or more servers. For example, the service may be deployed in a data center, in a third-party network, a third-party multi-tenant network, or any other type of deployment network. Networks providing computing resources for a deployment are sometimes referred to as a “cloud” or cloud services network, particularly (but not limited to) third-party multi-tenant networks where the infrastructure is maintained by the third-party. Service clients may access the service in the production network from their own local networks, but might be blocked or otherwise prevented from accessing a service instance in the developer network.
A computing device (e.g., a server) provides services to other computing devices (e.g., clients) in a network by accepting network communications (e.g., packets) addressed to the server, where each received communication is handled by a process (e.g., a service daemon) executed by the computing device. The process obtains the received communication by monitoring (“listening on”) a communication port specific to the process or specific to a protocol used by the process. Generally, transport layer protocols include a field designating a destination communication port by port number.
Typically, a service listens on a port assigned to the service or a port assigned to a protocol associated with the service. The Internet Assigned Numbers Authority (“IANA”) maintains a “Service Name and Transport Protocol Port Number Registry” assigning specific port numbers to various transport protocols. For example, the Hypertext Transfer Protocol (“HTTP”) uses port 80. However, service processes may actually use any port, or multiple ports, including unassigned ports. For example, a developer of a new service or a custom service might request registration (with IANA) of one or more unassigned port numbers.
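The listening behavior described above can be sketched with standard sockets. This is a minimal illustration, not part of the described subject matter: the loopback address, the echo payload, and the use of port 0 (letting the operating system assign an ephemeral port, in the spirit of the “unassigned ports” noted above) are all assumptions for the sketch.

```python
import socket
import threading

def serve_once(port: int = 0) -> int:
    """Listen on the given port (0 = OS-assigned) and echo one message.

    Returns the port actually bound so a client knows where to connect.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))   # the communication port clients address
    srv.listen(1)                   # the process "listens on" this port
    bound_port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()      # accept one client connection
        with conn:
            data = conn.recv(1024)  # the daemon obtains the communication
            conn.sendall(b"echo: " + data)
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return bound_port
```

A client addresses the returned port number in its transport-layer destination-port field simply by connecting to it, e.g., `socket.create_connection(("127.0.0.1", bound_port))`.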
Network service providers can (and do) restrict communications, e.g., using a firewall, either by allowing communication only to a set of authorized ports (a “white list”) or by blocking communication to a set of prohibited ports (a “black list”). This restriction effectively blocks access to a service that listens on a port that is not in a set of authorized ports (or, inversely, is in a set of prohibited ports). A computing device running a service that listens on a blocked port behind such a firewall cannot be reached by clients beyond the firewall. However, because firewalls generally allow internal devices to send data communications on any port, and allow external devices to respond to those data communications on the same port, a computing device behind a firewall might listen on blocked ports for responses to communications initiated at the computing device. This is known as “piercing” the firewall. However, this requires the device behind the firewall to initiate the communication, which a service generally does not do. Instead, a process might listen for new session requests on a first port (e.g., port 80 for session requests established using HTTP) and respond with instructions to a client to use a different port (e.g., port 81). The client then pierces the firewall on the second port. Firewall restrictions can be particularly problematic for a developer live testing a local instance of a service that will later be deployed outside the firewall.
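The two-port pattern above can be sketched as follows. This is an illustrative sketch only: both listeners run on loopback with OS-assigned ports standing in for the example ports 80 and 81, and the single-byte “session request” protocol is an assumption invented for the sketch.

```python
import socket
import threading

def redirecting_service() -> int:
    """Accept a session request on a first port and instruct the client
    to use a second port; service traffic then flows on the second port,
    which the client reaches by initiating the connection itself.
    """
    # Listener for the actual service traffic (the "second port").
    svc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    svc.bind(("127.0.0.1", 0))
    svc.listen(1)
    svc_port = svc.getsockname()[1]

    # Listener for new session requests (the "first port").
    front = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    front.bind(("127.0.0.1", 0))
    front.listen(1)
    front_port = front.getsockname()[1]

    def run():
        conn, _ = front.accept()
        with conn:
            conn.recv(1024)                       # the session request
            conn.sendall(str(svc_port).encode())  # "use this port instead"
        front.close()
        conn2, _ = svc.accept()                   # client pierces on port 2
        with conn2:
            conn2.sendall(b"service data")
        svc.close()

    threading.Thread(target=run, daemon=True).start()
    return front_port
```

Because the client initiates the follow-up connection to the second port from inside its own network, a typical firewall that permits outbound traffic (and the corresponding responses) allows the exchange even if inbound traffic to that port would otherwise be blocked.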
Some developers wish to live test services prior to deploying them. A live test is a test in which actual service clients are able to use the service. The service clients may be “beta testers” who know that they are using a test service, or they may be randomly selected “canary” testers who are diverted from a production version of the service to a test version without overt notice. Because there may be administrative delays, integration requirements, costs, and other concerns with deploying a test instance of a service into the production network, a developer may want to run the test instance in the developer network. However, the technical problems with running the test instance include network-security access issues (e.g., firewall circumvention), redirecting test clients (knowing beta testers or unknowing canary testers) to the test instance, and managing temporary integration between the test instance and any additional production resources used by the test instance (e.g., accessing live databases in the production network).
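One common way to divert a share of canary testers without maintaining per-client state is to hash a client identifier into a bucket. This is a sketch of that general technique, not the diversion mechanism of the described subject matter; the endpoint URLs, the 5% default, and the SHA-256 bucketing are all illustrative assumptions.

```python
import hashlib

# Hypothetical endpoints, for illustration only.
PRODUCTION_URL = "https://service.example.com"
TEST_URL = "https://dev-instance.example.net"

def route_client(client_id: str, canary_percent: int = 5) -> str:
    """Deterministically divert roughly canary_percent of clients to the
    test instance. Hashing the client ID keeps each client's assignment
    stable across requests without storing any per-client state.
    """
    digest = hashlib.sha256(client_id.encode()).digest()
    bucket = digest[0] % 100  # bucket in [0, 99]
    return TEST_URL if bucket < canary_percent else PRODUCTION_URL
```

Determinism matters here: a canary tester who is bounced between the production and test instances on successive requests would notice the diversion, defeating the “without overt notice” property described above.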
These and other technical problems are addressed by the subject matter described.