In-memory data platforms such as IMDGs (In-Memory Data Grids) can serve as an operational store for applications that require low-latency access to data at scale. Such platforms typically store large datasets across a cluster of servers that together form a grid.
Many applications leveraging such in-memory data platforms store, manage, and analyze security-sensitive data, and it would be desirable to protect such data from unauthorized user and system access, for example. Access control oftentimes is based on authenticated user credentials. Sometimes it is desired that contextual information associated with the caller (e.g., the user or device requesting access to the data) also be used in making the access control determination.
Currently available in-memory data platforms typically access an external server to look up authorization information for users requesting access to data stored in the in-memory data platform. Thus, additional latency typically is incurred when policy-based authenticated access is required. It will be appreciated that it would be desirable to optimize the speed at which data in in-memory data platforms is accessed, e.g., by reducing the time associated with looking up such access policies.
In order to improve the speed of access to data stored in tiered memory such as in in-memory data platforms, certain example embodiments described herein provide a highly optimized approach to authorizing and protecting in-memory data based on a requestor's role and contextual information. Certain example embodiments help optimize or otherwise improve role-based access control, without compromising the predictable (and generally low) latency provided by in-memory data access, by bringing the access policies closer to the requestor's process space.
In certain example embodiments, there is provided a system for providing controlled access to data stored in a plurality of memory tiers distributed over a plurality of computing devices, the tiers including a volatile local in-process (L1) cache memory of an application executing on a first computing device and at least one managed (e.g., non-volatile) in-memory (L2) cache on a second computing device. The system includes one or more communication interfaces. A processing system includes at least one processor, with the processing system being configured to control the system to access the L1 cache memory and to access the at least one L2 cache via the one or more communication interfaces, and to perform operations comprising: receiving an access request, via the one or more communication interfaces, requesting access for a user to a data element in the memory tiers, wherein the L2 cache has stored therein the data element and an access policy; detecting whether a copy of the data element is in the L1 cache memory; in response to a detection that a copy of the data element is not in the L1 cache memory, copying the data element and the access policy from the L2 cache to the L1 cache memory and providing the user with access to the copy of the data element from the L1 cache memory if the access policy allows access to the user; and in response to a detection that a copy of the data element is in the L1 cache memory, determining, by referring to a copy of the access policy stored in the L1 cache memory, whether the user is allowed to access the data element, and, in response to a determination that the user is allowed to access the data element, providing the user with access to the copy of the data element from the L1 cache memory.
In certain example embodiments, a method for providing controlled access to data stored in a plurality of memory tiers distributed over a plurality of computing devices is provided. The tiers include a volatile local in-process (L1) cache memory of an application executing on a first computing device and at least one managed (e.g., non-volatile) in-memory (L2) cache on a second computing device. An access request that requests access for a user to a data element in the memory tiers is received via one or more communication interfaces of the first computing device. The L2 cache has stored therein the data element and an access policy. A detection is made as to whether a copy of the data element is in the L1 cache memory. In response to a detection that a copy of the data element is not in the L1 cache memory, the data element and the access policy are copied from the L2 cache to the L1 cache memory and the user is provided with access to the copy of the data element from the L1 cache memory if the access policy allows access to the user. In response to a detection that a copy of the data element is in the L1 cache memory, a determination is made, by referring to a copy of the access policy stored in the L1 cache memory, as to whether the user is allowed to access the data element, and, in response to a determination that the user is allowed to access the data element, the user is provided with access to the copy of the data element from the L1 cache memory.
In certain example embodiments, there is provided a non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor of a first computing device, cause the first computing device to provide controlled access to data stored in a plurality of memory tiers, including a volatile local in-process (L1) cache memory of an application executing on the first computing device and at least one managed (e.g., non-volatile) in-memory (L2) cache on a second computing device. The instructions involve operations comprising: receiving an access request, via one or more communication interfaces of the first computing device, requesting access for a user to a data element in the memory tiers, wherein the L2 cache has stored therein the data element and an access policy; detecting whether a copy of the data element is in the L1 cache memory; in response to a detection that a copy of the data element is not in the L1 cache memory, copying the data element and the access policy from the L2 cache to the L1 cache memory and providing the user with access to the copy of the data element from the L1 cache memory if the access policy allows access to the user; and in response to a detection that a copy of the data element is in the L1 cache memory, determining, by referring to a copy of the access policy stored in the L1 cache memory, whether the user is allowed to access the data element, and, in response to a determination that the user is allowed to access the data element, providing the user with access to the copy of the data element from the L1 cache memory.
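The flow common to the foregoing statements, namely checking the L1 cache, faulting in the data element together with its access policy from the L2 cache on a miss, and evaluating the policy locally on a hit, can be illustrated with the following minimal sketch. The class and variable names (TieredStore, the dict-backed L2, the tuple entry layout) are hypothetical and are not drawn from any actual product API:

```python
class AccessDenied(Exception):
    pass


class TieredStore:
    """Minimal sketch: an in-process L1 dict in front of a managed L2 store.

    Each L2 entry pairs the data element with its access policy, so a
    single fault-in brings the policy into the requestor's process space.
    """

    def __init__(self, l2):
        self.l1 = {}   # volatile in-process (L1) cache
        self.l2 = l2   # stand-in for the managed (L2) cache

    def get(self, key, user):
        entry = self.l1.get(key)
        if entry is None:
            # L1 miss: copy the data element AND its policy from L2 to L1.
            entry = self.l2[key]          # (value, allowed_users)
            self.l1[key] = entry
        value, allowed_users = entry
        # Policy evaluation uses only the L1 copy; no round trip to L2.
        if user not in allowed_users:
            raise AccessDenied(user)
        return value


l2 = {"acct:42": ("balance=100", {"alice"})}
store = TieredStore(l2)
first = store.get("acct:42", "alice")    # miss: faults in data + policy
second = store.get("acct:42", "alice")   # hit: policy checked from L1 only
```

Note that after the first access, both the data element and its policy reside in the requestor's process space, so subsequent authorization decisions require no remote lookup.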
According to certain example embodiments, the at least one managed in-memory (L2) cache may be non-volatile.
According to certain example embodiments, the access policy may specify at least one entity or at least one type of entity for whom access to the data element is allowed, and the determining may determine that the user is allowed to access the data element based upon evaluating the user as corresponding to the at least one entity or the at least one type of entity.
According to certain example embodiments, the determining may comprise: detecting one or more additional parameters associated with the access request, the additional parameters being different from a unique identifier for the user; and performing the determining based upon the one or more additional parameters and the unique identifier. For instance, the one or more additional parameters may include at least one of device information associated with the user and a location of the user.
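A hypothetical evaluation combining the user's unique identifier with such contextual parameters might look like the following; the policy layout and field names ("users", "devices", "locations") are illustrative assumptions only:

```python
def is_allowed(policy, user_id, context):
    """Illustrative check: the user identifier must match the policy, and
    any device or location constraints in the policy must also be met."""
    if user_id not in policy["users"]:
        return False
    devices = policy.get("devices")
    if devices and context.get("device") not in devices:
        return False
    locations = policy.get("locations")
    if locations and context.get("location") not in locations:
        return False
    return True


policy = {"users": {"alice"},
          "devices": {"managed-laptop"},
          "locations": {"EU"}}
ok = is_allowed(policy, "alice",
                {"device": "managed-laptop", "location": "EU"})
denied = is_allowed(policy, "alice",
                    {"device": "personal-phone", "location": "EU"})
```

Under this sketch, the same user may be granted or denied access to the same data element depending on the device and location attached to the request.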
According to certain example embodiments, in response to a detection that a copy of the data element is in the L1 cache memory, the determining may include applying the access policy to at least one contextual parameter not expressly included in a payload of the access request.
According to certain example embodiments, copying the data element may include storing the copy of the data element in the L1 cache memory in association with a corresponding key and the access policy.
According to certain example embodiments, the corresponding key and a combination of the copy of the data element and the access policy may be stored in the L1 cache memory as a key-value pair.
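One illustrative in-memory layout for such a key-value pair, with hypothetical names, pairs the copy of the data element and its policy as a single value:

```python
from dataclasses import dataclass


@dataclass
class Entry:
    value: bytes   # copy of the data element
    policy: dict   # access policy stored alongside that copy


l1_cache = {}
# The key maps to the combined (data element, policy) value, so an L1 hit
# yields both in a single lookup.
l1_cache["customer:7"] = Entry(b"record-bytes", {"users": {"alice", "bob"}})
```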
According to certain example embodiments, the determining may include referring to the access policy specifically associated with the copy of the data element in the L1 cache memory, and copies of other data elements in the L1 cache memory each may be associated with respective other access policies.
According to certain example embodiments, an in-memory data grid (IMDG) may be overlaid in the memory tiers, and the IMDG may comprise the L1 cache memory and the L2 cache.
According to certain example embodiments, the operations may further comprise: updating a copy of the access policy in the L1 cache memory; and after updating the copy of the access policy in the L1 cache memory, causing a second processing system to update the access policy in the at least one L2 cache.
According to certain example embodiments, the determining may refer to the copy of the access policy in the L1 cache memory without referencing the access policy in the at least one L2 cache.
According to certain example embodiments, a second processing system including at least one processor may be provided, with the second processing system being configured to control the at least one second computing device and to perform operations comprising: receiving a request to update the data element and/or the access policy; causing at least the first processing system to lock the copy of the data element in the L1 cache memory; updating the data element and/or the access policy in the at least one L2 cache; and synchronizing the data element and the access policy throughout the system.
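The lock, update, and synchronize sequence performed by the second processing system can be sketched as follows; the client and server classes, their method names, and the single-process stand-in for network communication are all invented for illustration:

```python
import threading


class L1Client:
    """Stand-in for a first-tier process holding L1 copies."""

    def __init__(self):
        self.l1 = {}
        self.locked = set()

    def lock_entry(self, key):
        self.locked.add(key)

    def refresh(self, key, entry):
        # Receiving the fresh copy also releases the lock on the L1 copy.
        self.l1[key] = entry
        self.locked.discard(key)


class L2Server:
    """Sketch of the second processing system's update path: lock the L1
    copies, update the authoritative L2 entry, then re-synchronize."""

    def __init__(self):
        self.entries = {}      # key -> (data element, access policy)
        self.l1_clients = []
        self._lock = threading.Lock()

    def update(self, key, value=None, policy=None):
        with self._lock:
            for client in self.l1_clients:
                client.lock_entry(key)                  # 1. lock L1 copies
            old_value, old_policy = self.entries.get(key, (None, None))
            self.entries[key] = (
                value if value is not None else old_value,
                policy if policy is not None else old_policy,
            )                                           # 2. update L2
            for client in self.l1_clients:
                client.refresh(key, self.entries[key])  # 3. synchronize


server = L2Server()
client = L1Client()
server.l1_clients.append(client)
server.update("k", value="v1", policy={"users": {"alice"}})
server.update("k", policy={"users": {"alice", "bob"}})  # policy-only update
```

Locking the L1 copies before the L2 update prevents a first-tier process from serving a stale data element or, worse, applying a stale access policy while the authoritative entry is in flux.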
These aspects, features, and example embodiments may be used separately and/or applied in various combinations to achieve yet further embodiments of this invention.