With online assessments it is possible to collect a vast amount of assessment and bio data from individuals assessed in the workplace. This data is mainly of value for the validation of products and the creation of comparison groups. The following type of statement articulates, in one sentence, the comparisons of data that can be made (with interchangeable words at key points) and the answers to the "so what" question that such a comparison would raise. All words in bold are interchangeable with several other options; this example is for talent acquisition:

What we can do—We can benchmark the strengths and motives of the people who decline your offer against top performers who accept sales positions in other UK pharmaceuticals companies.

"So what?"—This enables you to identify whether you have issues with your Employee Value Proposition and to put corrective actions in place if necessary.

So, use of this data would make it possible to see whether the candidates with the best potential are lost at offer stage. Over the last few years it has become increasingly popular, and in many cases necessary, for organisations to benchmark and compare their current staff and job applicants, both over the years and between organisations and industries.

Overall Goal

The goal is to create a single-source web application that combines assessment data from assessment platforms. It should also provide an easy-to-use, modern-looking interface where authorised users can access the benchmark data as well as relevant data from their organisation's assessment projects on the same platform, and combine the information to allow for detailed analytics and graphical viewing.

Go to Market

Access to this type of benchmarking tool can be both of real value to clients and a key differentiator. It helps in the following key areas:

1. Analytics of the data may create newsworthy stories around indexes, industry findings and trends.
2. Analytics can prompt clients to ask the right questions in their organisations (e.g. are my candidates of a lower calibre than those of my competition?); it can also lead on to talent audit services and other exercises.
3. Improved business outcome studies and "over time" analytics may add even more value.
4. The benchmarking tool may also provide a unique capability linked to products and services clients have already purchased.

Clients may be given access to the benchmarking tool as part of a product/platform licence or subscription fee deal. An additional charge may apply within the subscription for data access. A charge may be added for transactional clients who would like access (annually, per project or one-off), via subscription charges or a pay-as-you-go cost.

Use Cases

Depending on the type of organisation, or on who in an organisation is interested in the data, there are different needs as to what data users are looking for and how they would use it. Below are some examples outlined as use cases:

A. A graduate recruitment manager in a bank wants to see how the bank's candidates this year compare with last year, or with the rest of the industry and the competition, when it comes to scores on a numeric reasoning test.

The user logs on to a platform where access to the application has already been granted.
The user opens the application and selects the desired query (e.g. industry comparison).
The user filters the data on their industry (e.g. financial services), country (e.g. UK) and the type of role (e.g.
graduate).
The user can preview the benchmark at any time.
The user then selects the project(s) in an on-demand database where the data they want to benchmark against resides (e.g. Grad 2010 and Grad 2011).
If these projects lack any of the essential firmographics data (e.g. industry, type of project or job level), the user is asked to enter this data to improve the benchmark exercise. This data may then be stored in a benchmark database going forward.
The user can view their data in the application, both compared with the same data from last year (2 projects) and compared with the general benchmarking data from the benchmark database (e.g. UK financial services organisations who use the numeric reasoning test).
The user can view average numeric percentile scores and high/low scores, and see their own data compared with the benchmark in a graphical format on the screen.
The user can change the view of the data, e.g. from monthly values to different score types.
The user can filter further to view assessment results from only a subset of test takers, e.g. male applicants or people under 20 years of age.

B. A VP of HR wants to look at trends in competency score values across management teams globally and see how their senior managers compare against the management teams in other organisations of a similar size and area of business.

The user logs on to a platform where access to the application has already been granted.
The user opens the application and selects the desired query (e.g. competency comparison).
The user filters the data on job level/type of role (e.g. senior managers), country (e.g. global/all) and period (e.g.
2010).
The user can preview the benchmark at any time to make sure it displays what they are expecting and to look at general trends.
The user then selects the project in an on-demand database where the data they want to benchmark against resides; in this example the user undertook a specific project to assess their management team last October (e.g. Management October 2010).
In this example all essential firmographics data were entered when the project was created.
The user can now view their data in the application compared with the general benchmarking data from the benchmark database (global organisations that used the same test for competency assessments last year).
The user can view average competency scores on a 5- or 10-point scale and high/low scores, and see their own data compared with the benchmark in a graphical format on the screen.
The user can change the view of the data, e.g. from monthly values to different score types.
The user can filter further to view assessment results from only a subset of test takers, e.g. only applicants in companies with more than 500 employees.

C. The leadership team in an organisation wants to see their staff's overall results or competency profiles for a specific role and compare them with a group of best-of-breed companies in their market.

The user logs on to a platform where access to the application has already been granted.
The user opens the application and selects the desired query (e.g. job (level) comparison).
The user filters the data on job level/type of role (e.g. sales staff), country (e.g. US) and industry (e.g.
retail).
The user can preview the benchmark at any time to make sure it displays what they are expecting and to look at general trends.
The user then selects the project in an on-demand database where the data they want to benchmark against resides; in this example they use assessment data from the last 3 years from both their recruitment and development assessment projects.
They update firmographics if essential information is missing.
They filter on test takers who are flagged as "employees".
The user can now view their data in the application compared with the general benchmarking data from the benchmark database (US retail organisations that used the assessments to evaluate staff or new recruits in sales).
The user can view both competency scores and ability scores, and see their own data compared with the benchmark in a graphical format on the screen.
The user can change the view of the data, e.g. from monthly values to different score types.
The user can filter further to view assessment results from only a subset of test takers, e.g. entry-level sales roles or sales team leads.

High Level Requirements

The following summarises key features and functions. This is not an all-inclusive list.

Creation of a high-performance database to store assessment data copied/replicated from other platforms.
It is yet to be determined whether to leverage an existing database, create a specific one for the application, or create a wider data warehouse.
The database will require data from a large number of data sources, such as on-demand assessment and score platforms, test taker demographics bio data, project firmographics information, client information and industry codes.
The database is stored and indexed to allow for high-performance queries and data views.
The database should allow for the assessment data to be categorised by multiple attributes. These attributes will be used to enable search, query and filter functionality in the analytics/user interface. (Initially we can use a number of pre-defined data sets (canned views) with parameters that can be varied, rather than a fully scoped database.)
The data will be stored both in the original assessment results format and in calculated formats, to allow for models such as competencies to be used even where these were not used in the original client project.

Provide the ability for internal and external users to access/query the single-source database through a web interface.
An internal user is defined as a user within a pre-defined network.
An external user is defined as an approved user.

Provide a graphical interface for the user to select the data they want to use and the actions they want to take to do a comparison/benchmarking exercise using the data.
Search the database for products, industries, dates of assessment events etc. to find a benchmark/data set they want to view and use.
Search their organisation's assessment database on the platform and select the projects, jobs, assessment types, dates etc. which they want to use for the comparison. View their data compared with the benchmark/data set they selected.
Filter and drill down to see e.g. data for a specific market/country, date interval, specific biodata combinations, job or assessment product/score.
It should be possible for the user to add classifications/tags to this data where they are needed/missing (e.g. type of assessment), and for these to be added to the database for future use.
A user is able to save their selections and queries and re-use them when they return to the application.

Create the ability for internal administrator users to administer and manage the database and the standard benchmark data sets. Administration of the database includes, but is not limited to, adding new data, modifying existing data, deleting data, adding data tags to data, creating new benchmark sets, and designing new views.
Create a manual or (longer term) automated process for extracting, cleaning up, tagging and uploading data to the database from any active platform on a regular basis (2-4 times a year).

Possible Future Enhancements

The creation of real-time integrations between the database and any assessment platform.
The creation of integrations between the database and third-party providers, such as integrators who may currently have connections between an assessment platform and their system.
Any features related to the above.

Other Issues

1. Data extracts are costly and take time to get prioritised; a standardised way of regularly extracting all relevant assessment data may be used.
2. The quality of the assessment data that goes into the benchmark database and benchmarks is important; therefore data cleaning, tagging and analysis resources are assigned to ensure data quality and support the ongoing data management process.
FIG. 37 shows an example of a design overview with a single platform. From the assessment platform 1000, assessment data 1008 is passed to the master data warehouse 1012. The data editing application 1014 interfaces between the master data warehouse 1012 and the benchmark data warehouse 1016, and serves to clean and consolidate assessment data and industry benchmark information. The user can log into a platform 1006 for performing and controlling analytics. Client-specific live assessment data 1002 from the assessment platform 1000 may be accessed via a client query application 1004. Benchmark data 1010 from the benchmark data warehouse 1016 is accessed via the same client query application 1004.
FIG. 38 shows an example of a design overview with multiple platforms. A multitude of platforms (including assessment platform 1000, analytics platform 1006, external platforms 1018, and other systems 1020) pass data to and access data from the master data warehouse 1012. Benchmark data from the benchmark data warehouse 1016 is accessed via a client query application 1004.
FIG. 39 shows an example of a design overview where the analytics application sits within a central system 1022 but sources its data primarily from external databases (e.g. a content metadata database 1028 and a benchmark and index measures database 1030). These databases are managed and populated via extract, transform, load (ETL) processes using assessment (score) data 1034, demographics (candidate, project) data 1036, and other sources.
Central-integrated pages 1024 represent the entities used for presentation of analytics data, the implementation of the charting components, integration changes to the registration process and other miscellaneous interactions.
Analytics service 1026 represents the service implementation responsible for data access and transformation of raw data into the business model.
Benchmark and index measures 1030 and content metadata 1028 are logically separate but may be physically together.
An include client marker 1038 may be passed between the central system and the database(s). Demographic direct feedback 1040 may be passed between the different parts of the central system.
FIG. 40 shows some possible interactions between different elements of the analytics system. Benchmark measures and metadata 1042 from a data warehouse 1050 are subject to an irregular ETL process 1044 to populate a benchmark measures and metadata database 1048. This benchmark measures and metadata database 1048 resides on an internal domain 1046 and may be linked to a benchmark measures and metadata database 1058 on a customer database domain 1062 by multiprotocol label switching (MPLS) 1056 or other log shipping procedures. On the customer database domain 1062 reside a plurality of databases 1068 with client data, for example from client assessments, demographics, or other data. The data from these databases 1068 is accessible for daily ETL 1052, for example with open database connectivity (ODBC). During daily ETL 1052, candidate measures are calculated for clients that subscribe to analytics. The daily ETL 1052 deposits data in a client measures database 1054 that resides on the internal domain 1046. Data from the client measures database 1054 may be log shipped daily to a client measures database 1060 that resides on the customer database domain 1062. The analytics application 1064 operates from the customer database domain 1062 with data from the client measures database 1060 and the benchmark measures and metadata database 1058. The analytics application 1064 aggregates candidates and benchmarks from the benchmark measures and metadata database 1058. The analytics application 1064 obtains client registration information, as well as information relating to saved projects and candidate metadata, from a central database 1066. The analytics application 1064 may operate from the central database. The analytics application output is deposited in the central database 1066, which is included in the daily ETL 1052.
User Story: Construct Benchmark Database
Notes: Assessment database (data from a variety of different assessments). Other systems may follow. A verify step may be used. Up to 5 years' worth of data is included. Include record of data added.

User Story: Import Assessment Measures Database
Notes: Assessment database (data from a variety of different assessments). Other systems may follow. A verify step may be used. Up to 5 years' worth of data is included. Include record of data added. Only needed for clients registered for TA.
The benchmark measures and metadata database 1058 and client measures database 1060 on the customer database domain 1062 may be read-only copies of the benchmark measures and metadata database 1048 and client measures database 1054 on the internal domain 1046. In this case the analytics application 1064 uses the read-only copies 1058, 1060. This minimises the risk of any communication latency in querying the data for individual reports. Central 1066 may have knowledge of the schema (i.e., the interface is the schema). A service may be implemented internally to central 1066. Some of the databases are shown as two databases but may be put into a single database, since, for example, the columns for the benchmarks and the client measures may be very similar, although they will have different lifecycles.
The measures for candidates on projects belonging to clients that are registered users reside in the measures database 1068. The analytics application 1064 in central 1066 aggregates data (for example, calculating the average for a measure for the set of candidates or projects selected for comparison with a benchmark) but does not do any calculation of measures. The closer central 1066 comes to a simple SQL SELECT to populate the graph component, the faster the UI may be. Further, central 1066 can then use the benchmark measures and metadata database 1058 and client measures database 1060 read-only.
A mechanism may be necessary to permit central 1066 to inform the daily ETL 1052 (warehouse ETL job) which clients have registered for the analytics application. The ETL needs to note, for the projects in the client measures data, which ones can be used for benchmark measures because the matching measure is available. This can also be used to reduce the volume of client data that is loaded into the client measures database, based on whether the project has measure data that can be used for any of the current benchmark measures. For example, the ETL may read the client list via ODBC, similar to other source data.
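One way this registration hand-off could work is sketched below; the function name, data shapes and measure names are purely illustrative assumptions, not the actual ETL implementation.

```python
# Illustrative sketch: the daily ETL reads the registered-client list and
# loads only project measure data that matches a current benchmark measure.

def select_projects_to_load(projects, registered_clients, benchmark_measures):
    """Keep projects of registered clients whose measures match a benchmark."""
    loaded = []
    for project in projects:
        if project["client"] not in registered_clients:
            continue  # client has not registered for the analytics application
        usable = set(project["measures"]) & set(benchmark_measures)
        if usable:
            # Flag the project so it can be used for benchmark measures.
            loaded.append({**project, "benchmark_usable": sorted(usable)})
    return loaded

# Hypothetical client/project data for illustration.
projects = [
    {"client": "A", "name": "Grad 2010", "measures": ["numeric", "verbal"]},
    {"client": "B", "name": "Sales 2011", "measures": ["numeric"]},
    {"client": "A", "name": "Legacy", "measures": ["typing_speed"]},
]
result = select_projects_to_load(projects, {"A"}, {"numeric", "verbal"})
print([(p["name"], p["benchmark_usable"]) for p in result])
# → [('Grad 2010', ['numeric', 'verbal'])]
```

Filtering at load time, as described above, keeps projects without matching measure data out of the client measures database entirely.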
Augmented metadata on projects and candidates may be stored in central 1066 to avoid the application becoming coupled to the assessments. A service to allow this to be written back can be implemented separately.
Central 1066 retrieves the project and candidate list in the client measures database 1060. This may need to be filtered to projects with data that can be used for the measures. Rules may be defined for hidden projects (such as projects that are not deleted). Data can be deleted from the assessment database, so the ETL procedures and central need to cope with that.
Benchmarks may be biodata and demographic data specific, so the client measures feed may need to take this data from the demographics database and other databases.
Range-specific text for labelling benchmarks may be stored with the benchmark data. This means there is one master database for storing benchmark information (that may need to be reused outside the analytics application).
In the following, a new analytics component embedded in a central platform that makes benchmarks (in chart format) available to clients is described. The description includes:
Prototype screen shots, structure only (layout and style may vary)
Functional requirements (as User Stories)
Non-functional requirements
Entity model (conceptual view of database schema)

Scope

Main
Analytics functionality added to central platform
Construct benchmark database
User to construct and view a benchmark chart from a predefined set of benchmark templates
User to drill down on a benchmark chart
User registration and payment
Operator verification of user requests and account activation
Operator management and deactivation of user accounts
Scheduled import of assessment measures database
User to filter benchmarks on selected data types and values
User to save and open user-defined benchmark queries
User to print benchmark displayed on screen

Preferable
One-off construction of assessment measures database
User to compare own project data
Placeholder for users to update their own data (suitable for discussion during demo)
Users to drill down on selected benchmark data
Users to update their own data (with permanent or temporary save option)

Optional
Administrators to construct and update benchmark templates
Administrators to manage benchmark meta data
Administrators to manage benchmark page content and chart options
Administrators to manually update benchmark data
Administrators to export benchmark data in various formats
User to email a copy of the benchmark displayed on screen

Further options
Administrators to create non-standard benchmarks
Automatic validation of benchmark data
Synchronise analytics data changes (back to assessment measures database and other systems)

Process Overview
The analytics system is based around the selection of three options:
A Theme—the client interest, e.g. improving their recruitment process.
A Benchmark Model—a scale; the data to be enquired on, e.g. people risk.
A Primary Data Type—the comparison, e.g. industry sector.
Each allowable combination of these options is recorded as a Benchmark Template.
Users create Benchmark Queries by selecting a Benchmark Template and optionally adding filters and chart format preferences. Benchmark Queries are then saved to the analytics database. Users may have the option of saving Benchmark Queries as Global (also referred to as ‘Universal’) Benchmark Queries (available to all users). Other users may only have the option of saving User Benchmark Queries (for their own use).
The analytics system will generate graphical representations of Benchmark Queries by linking them with their corresponding Benchmarks and Assessment Measures. These graphical representations can be displayed either externally or within the analytics application itself.
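The template/query relationship described above could be sketched conceptually as follows. All class and field names here are illustrative assumptions for discussion, not the actual entity model or database schema.

```python
from dataclasses import dataclass, field

@dataclass
class BenchmarkTemplate:
    # One allowable combination of the three options (Theme, Model, Primary Data Type).
    theme: str              # e.g. "Improving the recruitment process"
    benchmark_model: str    # the scale, e.g. "People Risk"
    primary_data_type: str  # the comparison, e.g. "Industry Sector"

@dataclass
class BenchmarkQuery:
    template: BenchmarkTemplate
    filters: dict = field(default_factory=dict)  # optional run-time filters
    chart_format: str = "bar"                    # chart format preference
    scope: str = "user"  # "global" (available to all users) or "user" (owner only)

# A Global Benchmark Query built from a template, with one optional filter.
template = BenchmarkTemplate("Improving the recruitment process",
                             "People Risk", "Industry Sector")
query = BenchmarkQuery(template, filters={"Geography": {"UK"}}, scope="global")
print(query.scope)  # → global
```

The system would then resolve a saved query against its corresponding benchmarks and assessment measures to produce the chart.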
Note: For the demo only one theme may be implemented, so it will not be selectable. Also, a single Global Benchmark Query will be created for each benchmark.
Storyboard/Screen Prototypes:
User Types
Unauthorised User—has access to all Global Benchmarks; cannot save personal user queries.
Authorised User without assessment access—as above, but can save queries as personal user queries.
Authorised User with assessment access—as above, but can add assessment project data.
Admin User (Administrator)—can update Measures and Chart Type; can save Global and hidden Benchmark Queries. May be done using SQL scripts initially.

Render Chart
A chart is rendered to represent the selected Benchmark and Data Type Values. The assigned chart type is used for a saved benchmark query. Data is retrieved based on the selected data type values. When multiple filters (data type values from different data types) are selected, the OR operator is used to select data within the same data type and the AND operator between data types, e.g. ('uk' OR 'france') AND ('finance' OR 'marketing').
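This filter-combination rule (OR within a data type, AND across data types) can be sketched as below; the list-of-dicts record format and field names are illustrative assumptions only.

```python
# Sketch of the filter rule described above: values selected within the
# same data type are combined with OR; different data types with AND.

def apply_filters(records, filters):
    """filters maps a data type (e.g. 'geography') to its selected values."""
    def matches(record):
        # AND across data types: every filtered data type must match.
        # OR within a data type: any of the selected values is acceptable.
        return all(record.get(data_type) in values
                   for data_type, values in filters.items())
    return [r for r in records if matches(r)]

records = [
    {"geography": "uk", "industry": "finance"},
    {"geography": "uk", "industry": "retail"},
    {"geography": "france", "industry": "marketing"},
    {"geography": "germany", "industry": "finance"},
]

# ('uk' OR 'france') AND ('finance' OR 'marketing')
selected = apply_filters(records, {
    "geography": {"uk", "france"},
    "industry": {"finance", "marketing"},
})
print([(r["geography"], r["industry"]) for r in selected])
# → [('uk', 'finance'), ('france', 'marketing')]
```

The same rule translates directly into SQL as `WHERE geography IN (...) AND industry IN (...)`.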
FIG. 41 shows various examples of render charts:
a) shows how, for pie charts and simple bar charts, a one-dimensional set of data values is provided, e.g. 6, 5, 2, 4.
b) shows how, for grouped and stacked bar charts, a two-dimensional set of data values will be provided, e.g. (6,5,2), (8,4,3), (3,7,3).
If multiple measures are assigned to multiple groups (A, B, C) then the measures are split accordingly. If one of the data types is set as primary then the data is split into corresponding groups. If no data type is set as primary, then only a one-dimensional data set is used (for a simple bar chart or pie chart).
For the data:

Record No  Measure1  Measure2  Geography  Industry
1          1         2         UK         Finance
2          3         4         UK         Marketing
3          5         6         France     Finance
4          7         8         France     Marketing

c), d) and e) show further examples of output render charts.

c) shows Scenario 1: a single measure (Measure1) and a primary data type of Geography. Chart values: 4, 12 (sum of all Measure1 values, split by UK and France).

d) shows Scenario 2: two measures (Measure1 and Measure2) and no primary data type. Chart values: 16, 20 (sum of all Measure1 values and sum of all Measure2 values).

e) shows Scenario 3: two measures (Measure1 and Measure2) and primary data type Industry. Chart values: (6, 10), (8, 12) (sums for Measure1 and for Measure2, split by Finance and Marketing).
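The three scenarios above can be reproduced with a small aggregation sketch over the sample records. The column names come from the table; the grouping logic is an assumption based on the description, not the actual rendering code.

```python
# Sketch of the render-data aggregation: sum each measure, optionally
# split by the values of the primary data type.

records = [
    {"Measure1": 1, "Measure2": 2, "Geography": "UK",     "Industry": "Finance"},
    {"Measure1": 3, "Measure2": 4, "Geography": "UK",     "Industry": "Marketing"},
    {"Measure1": 5, "Measure2": 6, "Geography": "France", "Industry": "Finance"},
    {"Measure1": 7, "Measure2": 8, "Geography": "France", "Industry": "Marketing"},
]

def render_data(records, measures, primary=None):
    """Sum each measure; if a primary data type is set, split by its values."""
    if primary is None:
        # No primary data type: a single summed value per measure.
        return [sum(r[m] for r in records) for m in measures]
    groups = []
    for value in dict.fromkeys(r[primary] for r in records):  # first-seen order
        groups.append([sum(r[m] for r in records if r[primary] == value)
                       for m in measures])
    # One series per measure, split across the primary data type's values.
    return [tuple(g[i] for g in groups) for i in range(len(measures))]

print(render_data(records, ["Measure1"], primary="Geography"))             # Scenario 1
# → [(4, 12)]
print(render_data(records, ["Measure1", "Measure2"]))                      # Scenario 2
# → [16, 20]
print(render_data(records, ["Measure1", "Measure2"], primary="Industry"))  # Scenario 3
# → [(6, 10), (8, 12)]
```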
When projects are included, project data is filtered on all data type values selected except for primary data type (if enabled). Alternatively, the user may choose for the project data not to be filtered with the benchmark data. The user may be presented with a choice to apply the filters/drilldown to the project data or not. Content (html) is retrieved from the benchmark database. Content may potentially be configured within properties.
The title of the chart is derived from the selected Benchmark and Data Types Values. The title may be defined within properties, and it may be held with the benchmark data.
The Filter Summary (as illustrated in FIG. 105) is derived from the selected Benchmark and Data Types Values. Further logic may be added to this function.
On hover (over a data area), information relating to the following may be displayed:
Associated Content
Drill-down options associated with the measure.

Drill Down
FIGS. 42 and 43 show examples of charts available via drill-down.
The functionality to filter and drill down on charts may be available to all SHL Central users (not just Premium Users).
When a drill-down option is selected (for example using a link available on hover over a data section), it is linked to the associated saved benchmark query and inherits the selection from the initial chart.
For example, consider a bar chart displaying:
Primary Data Type: Geography (Global, UK and France)
Filter Data Type: Industry (Finance, Marketing)
Measure: UCF_1_1 and UCF_1_2
(bar chart shown in FIG. 116)
and where the Measures are linked to a pie chart for:
Primary Data Type: Industry
Filter Data Type: None (select all)
Measure: UCF_1_1, UCF_1_2 and UCF_2_1
and Inherit filter from parent measures=True
If the User clicks on the region corresponding to UCF_1_1 for UK in the bar chart shown in FIG. 116, then a new pie chart as shown in FIG. 117 is generated. The new chart is requested but with a filter inherited from the parent:
Geography: UK
Measure: UCF_1_1
If Inherit filter from parent measures is false rather than true (as above), then the new pie chart would show the sum of UCF_1_1, UCF_1_2 and UCF_2_1.
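The inheritance behaviour in this example can be sketched as below. The query structure and field names are illustrative assumptions; only the measure names and filter values come from the example above.

```python
# Sketch of drill-down filter inheritance: when the linked chart inherits
# from the parent, it is restricted to the clicked measure and the clicked
# primary data type value.

def build_drilldown_query(saved_query, clicked_measure, clicked_value,
                          parent_primary_type, inherit_filter):
    """Return the query for the linked chart after a drill-down click."""
    query = dict(saved_query)
    query["filters"] = dict(saved_query.get("filters", {}))  # copy, don't alias
    if inherit_filter:
        query["measures"] = [clicked_measure]
        query["filters"][parent_primary_type] = {clicked_value}
    return query

# Linked pie chart query: Industry as primary, all three measures, no filters.
pie_query = {"chart": "pie", "primary": "Industry",
             "measures": ["UCF_1_1", "UCF_1_2", "UCF_2_1"], "filters": {}}

# User clicks the UCF_1_1 / UK region of the parent bar chart.
drilled = build_drilldown_query(pie_query, "UCF_1_1", "UK",
                                "Geography", inherit_filter=True)
print(drilled["measures"], drilled["filters"])
# → ['UCF_1_1'] {'Geography': {'UK'}}
```

With `inherit_filter=False` the saved pie chart query is returned unchanged, summing all three measures as described above.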
Further Comments
A carousel and Side Bar (of associated benchmarks) may be provided. Saved queries may be assigned to Sections. Saved queries may be assigned to Propositions. Some benchmarks may be highlighted (or featured). For administration purposes, Benchmarks may have Draft or Live status. A link to “Latest 10” benchmarks accessed may be shown. A data type value may be defined as corresponding to null (to retrieve other data).
Alternative Storyboard/Screen Prototypes
My Assessment Data
The ‘My Assessment Data’ tab provides access to the user's assessment data.
Functional Requirements
FIG. 44 shows functional requirements that relate to user registration for the analytics tool.
User Story: Inform Client of Analytics availability
As an Analytics user I want to be informed about Analytics availability so that I can benefit from the service.
Notes: Existing functionality; no development. Process required. Administrators, Sales, and Account Managers promote Analytics to clients. Analytics featured on Central platform Home Page. Marketing: Press Release, Trade Shows, website, etc.

User Story: Register for Analytics
As an Analytics user I want to register for the service so that I can investigate Analytics.
Notes: For Premium service (which allows users to compare their own Projects) or a free service. Will need to link the user to a Client. Validation is required to prevent unauthorised users linking to clients.

User Story: Register for Central platform
As a user I want to register for the Central platform so that I can register for Analytics.
Notes: Existing process. Users associated with more than one Soda Client must register a separate account for each one.

User Story: Make Payment
As a user I want to pay for the service so that the account/service will be activated.
Notes: Most customers will get the service free of charge. Where users have to pay, an automatic system using the existing Central platform credit card process should be used. May need a process to pre-approve specific Clients for the free service. Include exception: Refunds.

User Story: Check Client
As an Operator I want to check a user's request for the service so that I can restrict the service to approved users and link their account to a Client.
Notes: Operator will allocate Client Type:
Admin User—Employee who will use the system to create standard queries available to all users.
Premium User—High volume Client who will have the option to compare own projects against benchmarks; Employee using a high volume Client account (on their behalf).
Standard User—Low volume Client or Partner who only has access to existing standard queries.
Operator allocates Payment Type: Free Service or Annual payment. Operator rejects unwanted requests for access.

User Story: Notify Client
As a User I want to be notified when my Central and Analytics account is available so that I can start using the service.
Notes: Include exceptions: Request rejected.

User Story: Activate Account
As an Operator I want to activate accounts once they have been approved so that I can restrict Analytics to approved users, restrict free access to specific accounts and prevent unauthorised users viewing clients' information.
Notes: Should use existing Central platform process and functionality.

User Story: Select Analytics
As a User I want to select Analytics from my Central account so that I can use the service.
Notes: Should use existing Central process and functionality.

User Story: Log into Central platform
As a User I want to log into the Central platform so that I can access the Analytics service.
Notes: Existing Central functionality.

User Story: See Featured Benchmarks
As a User I want to see featured benchmarks on my Analytics home page so that I can efficiently monitor the latest benchmarks.
Notes: Existing Central functionality.

User Story: Logout
As a User I want to log out of my Analytics account and the Central platform so that I can protect my company's information from unauthorised access.
Notes: Existing Central functionality.
FIG. 45 shows functional requirements that relate to analytics administration and services.
Construct Benchmark Template
User Story: As an Administrator I want to create a Benchmark Template so that I can manage the options and features available to users querying the Benchmark data.
Notes: A Benchmark Template will define:
- Name.
- Theme: the reason the client is using the benchmark, e.g. improving the recruitment process. It will be the option selected by the user for "I want to understand my . . .".
- Benchmark Model: the scale that the benchmark will be based on. It will be the option selected by the user for "By looking at . . .".
- Benchmark By: the data type that will be charted at the top level, e.g. 'Industry'. It will be the option selected by the user for "benchmarked by . . .".
- Any fixed filters: data filters that will be hard wired into the benchmark template, e.g. 'Only for Finance and Marketing'.
- Allowable filters: additional data filters that will be optionally available to the user at run time, e.g. 'Show for UK and France'.
- Allowable drill down: drill down options available at run time. Only one level of drill down will be supported; e.g. the user will click on a single bar (for a bar chart) and have the option of drilling down on 'Geography' or 'Industry'.
- Allow User Data Comparison: whether user data can be added to the chart for comparison with Global Benchmarks.
Benchmark templates may be created using SQL scripts initially.

Manage Themes
User Story: As an Administrator I want to manage Themes so that I can add new meta data without the need for further system development.
Notes: Add, delete, update, deactivate. May be SQL scripts initially.

Manage Data Types
User Story: As an Administrator I want to manage data types and their filter values so that I can add new meta data without the need for further system development.
Notes: Add, delete, update, deactivate. May be SQL scripts initially.

Manage Benchmark Models (Scales)
User Story: As an Administrator I want to manage Benchmark Models and their band values so that I can add new meta data without the need for further system development.
Notes: Add, delete, update, deactivate. May be SQL scripts initially.

Manage Content
User Story: As an Administrator I want to manage content so that I can control the information displayed and available to users.
Notes: Content will be: page titles, labels, text, pop-ups, and images displayed on Analytics pages; documents and sites linked to Analytics pages; all supporting text for the interface; and supporting documentation/white papers/fact sheets. May attach a business outcome link or paper where these match closely enough, or refer to these in the support documentation. Content will be conditionally displayed based on the current Theme (e.g. Improve Recruitment Process), Data Type (e.g. Marketing), Benchmark Model (e.g. People Risk), and Benchmark Model Band (e.g. very high risk people). For each of these categories, content will link to one, many, or all. Content may be hard coded.

Manage Chart Options
User Story: As an Administrator I want to manage the chart options so that I can control the chart types used to display different content to users.
Notes: Chart Types will be: bar chart, pie chart, etc. A variety of chart tools are possible. The Chart Types available will be conditional on the Theme (e.g. Improve Recruitment Process), Data Type (e.g. Marketing), Benchmark Model (e.g. People Risk), and volumes (to be confirmed). For each of these categories, content will link to one, many, or all. Some bar charts will be designated as default charts, to be used when a benchmark is first displayed. There will be no fixed options initially.

Construct Non-Standard Benchmark
User Story: As an Administrator I want to create non-standard Benchmark Queries so that I can publish specialised charts that cannot be constructed using standard system functionality.
Notes: Where possible the system will be used to create and save common saved queries (Standard Benchmarks). If non-standard benchmarks are required, a new process (possibly an SQL script) may be introduced.

Deactivate User Account
User Story: As an Administrator I want to deactivate a user's account so that I can block access to ex-employees (provider and Client).
Notes: Users are only entitled to access Benchmarks while they are employed by the benchmark provider or the client. A process is required to identify users no longer authorised to view client data and deselect them.

Manually Update Benchmark Data
User Story: As an Administrator I want to manually update Benchmark Data so that I can correct data errors that distort Benchmark charts.
Notes: Applies to both the Benchmarks database and the Assessment measures database. Data may come from SPSS or Excel. Include an option to delete data (may be all data for a specific candidate or client). Include an option to deactivate specific rows (they remain on the database but are not included in benchmarks).

Validate Benchmark Data
User Story: As an Administrator I want to validate Benchmark data so that I can be confident that the data displayed is correct.
Notes: A process is required to validate Benchmark Data against the Assessment measures database and report discrepancies. When candidate scores are common to both databases, data can be compared and differences reported. Data on both systems can be automatically analysed, and inconsistencies, outliers, and invalid values will be reported.

Synchronise TA Data Changes
User Story: As an Administrator I want to merge any changes made by users to their data in Analytics back to the source systems so that I can keep all databases up to date with the latest, highest quality data.
Notes: If client data is held on the Analytics database, and users make changes to this data, then there will need to be a process to merge this data back to all appropriate Assessment databases.
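The comparison described in Validate Benchmark Data can be sketched as a check over candidate scores common to both databases. This is an illustrative Python sketch only; the data shapes and field names are assumptions, not the specified implementation.

```python
# Illustrative sketch: compare candidate scores that appear in both the
# Benchmark database and the Assessment measures database, and report
# any discrepancies. Keys and values are hypothetical.

def validate_benchmark_data(benchmark_scores, assessment_scores):
    """Each argument maps (candidate_id, scale_tag) -> score band."""
    discrepancies = []
    common_keys = benchmark_scores.keys() & assessment_scores.keys()
    for key in sorted(common_keys):
        if benchmark_scores[key] != assessment_scores[key]:
            discrepancies.append({
                "key": key,
                "benchmark": benchmark_scores[key],
                "assessment": assessment_scores[key],
            })
    return discrepancies

benchmark = {("C1", "risk"): 3, ("C2", "risk"): 1}
assessment = {("C1", "risk"): 3, ("C2", "risk"): 2, ("C3", "risk"): 4}
print(validate_benchmark_data(benchmark, assessment))
# reports the mismatch for candidate C2 only
```

Scores present in only one database (such as C3 above) are simply not compared; a fuller process could also report those as missing rows.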
FIG. 46 shows functional requirements that relate to different users viewing the analytics.
View Benchmark Chart
User Story: As a user I want to view pre-configured benchmarks so that I can see existing charts without access to the full system or without having to construct new queries.
Notes: For a standard user (only able to access pre-configured benchmarks), this is the main function, presented on their Home page. Non-users of Analytics may also have access to specific charts embedded into other web sites. A Premium user (able to construct custom benchmarks) will use this process to view all existing benchmarks they have access to, plus their current (draft/work in progress) benchmark. For existing saved queries, first Select Saved Query. The draft benchmark (for Premium client users) is saved to the database, assigned to the user and with a null name. Charts never display information that could identify a single candidate or client (other than the owning client). When there are fewer than 10 scores in a data section (bar), the exact score may not be displayed; for example, it may be treated as 5 instead.

Build Page (sub-component of View Benchmark Chart)
Notes: The build page process retrieves the Chart XML (possibly also the Content XML) from the Saved Query entity for the Saved Query reference supplied. If the XML is not available (has not been cached), it retrieves the chart data (and possibly page content) for the specified Saved Query, then constructs the Chart XML (and possibly Content XML), saving them to the Saved Query entity. It then constructs the page from the Chart XML (and possibly Content XML), for example using a chart control. Chart data can be displayed in a two-dimensional grid, for example:

      a  b  c
  x   1  2  3
  y   4  5  6
  z   2  3  4

FIG. 47 shows a 2D grid chart with the data in the above table. Horizontal labels (a, b, c) come from Benchmark Model Names (linked to the Saved Query via the Benchmark Template and Benchmark Model). Vertical labels (x, y, z) come from two sources (data types and projects): a) Filter Names associated with the Saved Query via the Selected Filter for each bar, but only for filters linked to the same Data Type as recorded on the Benchmark Template; and b) Project Names associated with the Saved Query via the Selected Project for each bar. If more than one Project name is associated with a single bar (data field on the Project table), the names are strung together (comma separated). Retrieving data is described in Retrieve Data below. Retrieving content is described under Add Content.

Add Content (sub-component of Build Page)
Notes: Add chart content using the Chart Template's Theme, Data Type, Benchmark Model, and Benchmark Model Bands.

Retrieve Data (sub-component of Build Page)
Notes: The Analytics application in Central aggregates data (i.e. calculates the average for a measure for the set of candidates or projects selected for comparison with a benchmark) but does not do any calculation of the measures themselves. Data is retrieved from two sources, the Benchmark Database and the Assessment Measures Database. In both cases the scales to be retrieved are determined by the Benchmark Model (which maps to a single Assessment scale tag) and the Benchmark Bands (which map to specific scores). For both databases, data is retrieved for Filters linked to the Query (via Selected Filters and the Fixed Filter). When filters are of the same Data Type, the OR condition is used to link values; when groups of filters are of different Data Types, the AND condition is used to link values, e.g. (Marketing OR Finance) AND (UK OR France). Only data from the Assessment measures database belonging to the client (associated with the current user) is available.

Select Saved Query
User Story: As a user I want to select an existing saved benchmark so that I can view or edit the information.
Notes: The application shows a list of available saved queries. For standard users these will be common queries only, and the only action available will be View Benchmark Chart. Premium Client users will also see their own queries (previously saved) and queries with group access saved by other employees of the same company. Premium Client users will have the option to edit, delete, and deactivate (hide from others) their own queries, and to copy all available queries.

Construct Benchmark Chart
User Story: As a user I want to construct my own custom queries so that I can tune benchmarks to my needs and compare my own data (projects, clients) against industry benchmarks.
Notes: Corresponds to Build Benchmark on the mock-up screen designs. Only accessible by Premium Client users and Admin users. The construction of a Benchmark Chart corresponds to the creation of a draft Saved Query on the Analytics database. If an existing benchmark is being edited, then a draft Saved Query will already exist; this may be because the user has selected Edit on the saved templates tab, or because the user is returning to an interrupted session. On entry, if a draft Saved Query exists (a Saved Query for the current user with Name = null), the page is rendered based on this draft. For a new benchmark, first Select Benchmark Template; once selected, save the details to a draft Saved Query and create the chart in the chart area (iframe), see Build Query. For an existing Saved Query: allow the user to update the benchmark (see Select Benchmark Template), filter the benchmark data (see Filter Data), and add their own projects (see Compare Own Data); after any change, regenerate the chart area (see Build Query). Allow the user to save the current benchmark, see Save Query. Once saved, the user may continue to make further changes to the draft.

Select Benchmark Template
User Story: As a Premium Client user I want to select a Benchmark Template so that I can view the information and refine the query.
Notes: Select a Theme ("I want to understand my . . ."), a Benchmark Model ("By looking at . . ."), and a Data Type ("Benchmarked by . . ."). This selection uniquely identifies a single Benchmark Template. As each option is selected, the set of available Benchmark Templates is filtered, and any options (in other selections) no longer available are inhibited. If only one option is available for a section, then all others are inhibited. A clear option allows the user to clear selected data; if the selection is cleared, the corresponding fields on the draft template and the chart area are cleared. The action command button (to display the chart) is inhibited until all options are selected (or a single Benchmark Template is selected). In some cases (where there are benchmark variants), clicking an option from every section may not result in a single benchmark being selected; this may result in a fourth section or a pop-up selection to choose the variant.

Filter Data
User Story: As a Premium user I want to filter the data used in displayed benchmarks so that I can restrict benchmarks to a smaller, more relevant data set.
Notes: Select one or more filters (data types) and, for each, select one or more values to be added to the chart for comparison against the universal benchmark, e.g. Industry > Finance and Geography > UK. Use a pop-up to filter a subset of data. Options could include Geography (country), Year (date), Industry, and Business Function. The same filter is applied to Benchmark and Client data. Selecting no values for a data type corresponds to all data. An option to select all is available; this selects all current items (so data added with a different item is not included in results). When multiple values from multiple data types are selected, the OR operator is used for all items of the same data type, and the AND operator is used between data types. E.g. if Geography > France, Geography > UK, Industry > Finance, and Industry > Marketing are selected, the query is for (France OR UK) AND (Finance OR Marketing). When filters belong to the same Data Type as that linked to the Benchmark Template, additionally allow the user to assign the filter to a bar (1 to 3). This is used to assign the data to a data set for comparison on the chart; e.g. by assigning UK to bar 1 and France to bar 2, the chart will show a graph of the UK compared with France.

Compare Own Data
User Story: As a Premium user I want to add my own data to displayed benchmark charts so that I can compare my own company against universal benchmarks.
Notes: Select one or more projects to be added to the chart for comparison against the universal benchmark. Use a pop-up with a search option to filter a subset of projects. Allow multiple select. Selecting no projects results in all projects being returned to the benchmark query. An option to select all is available; this selects all current projects (so future projects added will not be included in results). Only data from the Assessment measures database belonging to the client (associated with the current user) is available. Additionally allow the user to assign projects to a bar (1 to 3). This is used to assign projects to a data set for comparison on the chart.

Save Query
User Story: As a Premium user I want to save my Benchmark Queries so that I can reuse them in the future, or share them with other people.
Notes: Prompt the user for a saved Query Name and the option to save as: Common (available to all users), Group (available to users linked to the same client), or User (available to myself only). Only Administrators have the option to save common queries; this option is used to develop new common (universal, global) queries. If the Draft Original ID is set, default the name and access option to the original's. The name and access option are mandatory. If a query already exists with the same name (other than the draft original), prompt the user to cancel or overwrite. If the original query has changed since the draft was created (using the update date on the original and the creation date on the draft), warn the user and prompt to cancel or overwrite. After saving, update the Creation date on the draft query to the current date. Save stores the query structure only, not the actual data; if a saved query is reused after the data has changed, the display may be different.

View Chart Outside Analytics System
User Story: As an Internet user I want to view specific charts so that I can learn more about the Analytics services.
Notes: Although charts are normally displayed in an iframe within the Analytics application, they are accessible to anyone with the URL. Security tokens are used to protect charts from unauthorised access. May need an option to lock the data displayed on benchmarks displayed outside the system; could lock the XML.

Drill Down
User Story: As a user I want to drill down on a displayed benchmark element so that I can further explore that section.
Notes: When the first level chart is displayed, the user can click on any data section in the chart to drill down into the corresponding data. If more than one data type is allowed for drill down, the user is offered the option to choose the data type to be displayed. On drill down, chart y axes may be % of total; x axes are the selected data type for drill down. When a project data section is selected for drill down, the option to drill down on project is provided. Drill down shows charts at a lower level of granularity, never raw data, and never information that could identify a single candidate or client (other than the owning client).

Export
User Story: As an Admin user I want to export benchmarks in a variety of formats so that I can include the information in other systems.
Notes: Formats: HTML (a copy of the Benchmark Display, including the chart, for inclusion in other web sites); XML (data only, for manual validation); Excel (for further analysis of data).

Print
User Story: As a user I want to print benchmark charts so that I can keep a hard copy for future use.
Notes: Copyright and Terms of Use should limit this.

Email
User Story: As a user I want to email benchmark charts so that I can share the information with other people.
Notes: Copyright and Terms of Use should limit this.

Update Own Data
User Story: As a Premium Client user I want to update my own data so that I can better compare it against universal benchmarks.
Notes: For Projects: Industry, Business Function, Demographics. For Candidates: offer made, offer accepted, quality of hire (duration?). May need a way to filter on data not already updated.

Permanent Update
User Story: As a Premium Client user I want to permanently save changes to my own data so that I can reuse the improved data again in the future.
Notes: A suitable update/synchronisation procedure may be necessary.

Temporarily Update
User Story: As a Premium Client user I want to temporarily save changes to my own data so that I can use the improved data now but prevent the provider or any of its other clients benefiting from my improved data.
Notes: No client sees another client's data directly, but permanent improvements to client data may be used to improve the quality of future universal benchmarks.

Entity Model
FIG. 48 shows the elements in the entity model.
FIG. 49 shows the elements broken down into sections.
Theme
Theme or basis for the query, e.g. Employer Brand, Recruitment Process, etc.
The selection for “I want to understand my . . . ”
Attributes:
ID: Unique system ID
Name: Name
Active: Yes/No

Benchmark Model
Scale/Measure to be displayed, e.g. Motivation, People Risk . . . .
The selection for "by looking at . . ."
Attributes:
ID: Unique system ID
Name: Name
Variant Name: Optional name, but must be provided for duplicate Names. E.g. if there are two "People Risk" models, one may be for variant "v1" and the other for "v2"
Scale Tag: Scale used for the Benchmark
Benchmark DB SQL: Optional SQL to be used to retrieve data from the Benchmark Database. This custom SQL statement can be used to retrieve benchmark data in a way not supported by the standard Analytics database structure and process. If SQL is provided, the query must return a table in the standard structure corresponding to the Benchmark Model Bands
Assessment Measures DB SQL: Optional SQL to be used to retrieve data from the Assessment Measures Database. This custom SQL statement can be used to retrieve benchmark data in a way not supported by the standard Analytics database structure and process. If SQL is provided, the query must return a table in the standard structure corresponding to the Benchmark Model Bands
Active: Yes/No

Benchmark Model Band
Scale Score. Label for X axes, e.g. Very High, High, A,B,C,D . . . .
Standard set of values. Maps 1:1 with bands on Benchmarks database and Assessment measures database.
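Since bands label the chart's X axis in a defined order, the chart layer can sort them by a sequence number before rendering. A minimal illustrative Python sketch; the band names and data shape are hypothetical.

```python
# Illustrative only: order Benchmark Model Bands for the chart's X axis
# by an integer sequence attribute (1 = leftmost).
bands = [
    {"name": "High Risk", "sequence": 3},
    {"name": "Low Risk", "sequence": 1},
    {"name": "Medium Risk", "sequence": 2},
]
ordered = sorted(bands, key=lambda band: band["sequence"])
print([band["name"] for band in ordered])
# ['Low Risk', 'Medium Risk', 'High Risk']
```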
Attributes:
ID: Unique system ID
Name: Name
Sequence: Integer. Determines the sequence of bands on the chart, e.g. 1 for 'Low Risk', 2 for 'Medium Risk', and 3 for 'High Risk'

Definition of links to the Benchmarks and Assessment measures database scores may be necessary.

Data Type
Data Type used for filtering and drill down, e.g. Geography, Industry . . . .
Selection for “benchmarked by . . . ”
Attributes:
ID: Unique system ID
Name: Name, e.g. Industry
Active: Yes/No
Hidden: Yes/No. Used to hide the data type from users, e.g. when the filter is only used by a template as a Fixed Filter (e.g. a specific instrument like OPQ32R)

Reference data for the benchmark DB identifies the relevant Benchmark data items (column). Reference data for the Assessment measures database identifies the relevant project Score (column).

Filter
Filter option. e.g. France, Spain . . . .
Attributes:
ID: Unique system ID
Data Type ID: Link to Data Type
Name: Name, e.g. France

Mapping for the benchmark DB: code used on the Benchmark database. Mapping for the Assessment measures database: code used on the Assessment measures database; may be the same as on the Benchmark database (e.g. FR always used for France).

Fixed Filter
A fixed filter for the Benchmark Template selected.
This restricts a Request to a specific data set. e.g. UK and Marketing.
Attributes:
ID: Unique system ID
Benchmark Template ID: Link to Benchmark Template
Filter ID: Link to Filter

Allowable Filter
An allowable query for the Benchmark Template selected.
Attributes:
ID: Unique system ID
Benchmark Template ID: Link to Benchmark Template
Data Type ID: Link to allowable Data Type

Allowable Drill Down
An allowable drill down for the Benchmark Template selected.
Attributes:
ID: Unique system ID
Benchmark Template ID: Link to Benchmark Template
Data Type ID: Link to allowable Data Type

Benchmark Template
Allowable query combination.
Attributes:
ID: Unique system ID
Benchmark Model ID: Link to Benchmark Model. May be null (corresponding to all scales)
Data Type ID: Link to Data Type. Used for the 1st level Data Type selection ("benchmarked by . . ."). Filtering is always allowed on this Data Type; drill down is not allowed on this Data Type
Theme ID: Link to Theme. May be null (corresponding to all client interests)
Active: Yes/No
Under Construction: Yes/No
Allow User Data: Allow user to include own data

Saved Query
FIG. 50 shows the entity model elements that relate to the 'Saved Query' section. A Saved Query is a query constructed using the Analytics system and saved.
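The caching and token behaviour of a saved query can be sketched as follows. This is an illustrative Python sketch only; the class and method names are hypothetical, not part of the specification.

```python
import random

class SavedQuery:
    """Illustrative sketch of a saved query row (hypothetical names)."""

    def __init__(self):
        # Token created automatically when the row is created; used in
        # public benchmark URLs so other charts cannot be guessed.
        self.token = random.randint(1, 1_000_000_000)
        self.chart_xml = None  # cached chart XML, empty until first render

    def render_chart(self, build_chart_xml):
        # Use the cache if available; otherwise build the chart XML
        # from the database and save a copy on this row.
        if self.chart_xml is None:
            self.chart_xml = build_chart_xml()
        return self.chart_xml

    def invalidate_cache(self):
        # Called when related data is updated.
        self.chart_xml = None

query = SavedQuery()
print(query.render_chart(lambda: "<chart>...</chart>"))
# <chart>...</chart>
```

The same pattern would apply to the content XML cache described below the chart XML.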
Attributes:
ID: Unique system ID
Name: Unique name entered by the user. Null for the draft query; the draft query is used for work in progress (the current query being updated in the Build Benchmark tab)
Token: Random number (in the range 1 to 1,000,000,000), automatically created when the row is created. Used in Benchmark URLs to allow public access to a single benchmark chart while protecting other charts, e.g. http://.com/Benchmark.aspx?ID=56&TOKEN=3646984
Benchmark Template ID: Link to allowable Benchmark Template. Normally mandatory, but can be null for a draft query
User ID: Link to the User who created the query (owning user)
Access Level: 0 = Private (for user only); 1 = Group (for all users linked to the owning user's client); 2 = Common (visible/usable by everyone). Draft queries always have access set to 0 (private)
Draft Original ID: Set only for draft queries when an existing query is being edited
Created date: Set when the query is created, or when the draft query is initialised
Updated date: Set when the query is updated
Chart xml: Cache of the chart XML (data for the chart control). When a request is made to render a chart, this data is used if available; if not, data is retrieved from the database and a copy saved in this field. This cache is cleared when related data is updated
Chart xml date: Date the chart XML is populated
Content xml: Cache of the content XML (links, text, images, and pop-up messages displayed with benchmark charts). When a request is made to render a chart, this data is used if available; if not, data is retrieved from the database and a copy saved in this field. This cache is cleared when related data is updated
Content xml date: Date the content XML is populated
Feature: Yes/No. Highlight the Query Name when displayed on the TA Home Page

May need an option to lock the XML.

Selected Filter
Filters associated with a saved query, e.g. Germany, France; Marketing and Finance.
For Filters of the same data Type use OR condition, e.g. Germany or France.
For Filters of different Data Types use AND condition, e.g. (Germany or France) and (Marketing or Finance).
A filter with the same data type as the owning Project Template can be assigned to a bar on the top level chart (to show comparisons between different data sets). For example, to show a comparison between Marketing and Finance, assign Marketing to bar 0 and Finance to bar 1.
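The OR/AND combination rule above can be sketched as a small query-condition builder. This is illustrative Python only; the column names are assumptions, and a real implementation would use parameterised queries rather than string building.

```python
# Values of the same Data Type are joined with OR; different Data
# Types are joined with AND, per the rule described above.
def build_where_clause(filters):
    """filters maps a data-type column name to its selected values."""
    groups = []
    for column, values in filters.items():
        ors = " OR ".join(f"{column} = '{v}'" for v in values)
        groups.append(f"({ors})")
    return " AND ".join(groups)

clause = build_where_clause({
    "Geography": ["Germany", "France"],
    "Industry": ["Marketing", "Finance"],
})
print(clause)
# (Geography = 'Germany' OR Geography = 'France') AND (Industry = 'Marketing' OR Industry = 'Finance')
```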
Attributes:
ID: Unique system ID
Saved Query ID: Link to Saved Query
Filter ID: Link to Filter
Bar: Integer 0, 1, 2

Selected Project
Projects associated with a saved query.
Attributes:
ID: Unique system ID
Saved Query ID: Link to Saved Query
Filter ID: Link to Filter

Proposition Query
Propositions associated with a saved query.
Only for saved universal queries. Queries are grouped into Propositions, and users have the option to search for queries (benchmarks) for a specified proposition.
Attributes:
ID: Unique system ID
Saved Query ID: Link to Saved Query
Proposition ID: Link to Proposition

Section Query
Section associated with a saved query.
Only for saved universal queries. Queries are grouped into Sections, and universal Queries (benchmarks) displayed in Analytics are grouped into sections.
Attributes:
ID: Unique system ID
Saved Query ID: Link to Saved Query
Proposition ID: Link to Proposition
FIG. 51 shows the entity model elements that relate to the different databases.
Benchmark
This entity depends on the data structure. It is linked to Benchmark Model bands and Filters.
Attributes:
Active: Yes/No
Further Attributes: Further attributes are necessary depending on the data structure

Client
Central Client
Attributes:
ID: Unique system ID
Analytics Access: True/False. Default False
Source System: Source system (Assessment measures source)
Source Client ID: Link to the Client on the source system
Analytics Service: Standard (common Benchmarks only) or Premium (can generate new queries with own data from the Assessment measures database)

Selection criteria (to be defined; to include data from a variety of measures sources).

User
Central User
Attributes:
Analytics Access: True/False. Default False
Further Attributes: Further attributes are necessary depending on the data structure

Project
Attributes:
ID: Unique system ID
Source Project ID: Link to the project on the source system
Active: Yes/No

Selection criteria (may include Firmographics).

Focus
Attributes:
ID: Unique system ID
Source Project ID: Link to the project on the source system
Active: Yes/No

Selection criteria (may include Demographics). Must correspond with data held on the Benchmark. May include: offer made (Yes/No); offer accepted (Yes/No); employee quality measure, for instance length of service.

Score
Attributes:
ID: Unique system ID

Scores to be defined, but expected to contain a scale tag and a score value (band).
FIG. 52 shows the entity model elements that relate to content and charts.
Content Type
Type of content, e.g. pop-up on band.
Attributes:
ID: Unique system ID
Name: Name

Content
Information to be displayed for a band. May be limited to a specific Theme and/or Data Type, e.g. “Employees in this category prove to be 20% more effective”
Attributes:
ID: Unique system ID
Benchmark Model ID: Link to Benchmark Model. May be null (corresponding to all scales)
Benchmark Model Band ID: Link to Benchmark Model Band Type. May be null (corresponding to all bands)
Data Type ID: Link to Data Type. May be null (corresponding to all data types)
Theme ID: Link to Theme. May be null (corresponding to all client interests)
Content Type ID: Link to Message Type
Content Data: Content to be displayed
Format Data
The above structure allows one or all benchmark models, bands, data types, and themes to be linked. If one content row links to several parent entities, then an intermediary table is required.
May need conditional content, e.g. when a value is greater than n then use “abc”.
May need an option to limit content to Assessment Measures or Benchmark data.
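The null-means-all linking described for the Content entity can be sketched as a simple match rule. This is illustrative Python only; the field names are assumptions.

```python
# A Content row applies to the current chart context when each of its
# link fields is either null (meaning "all") or equal to the context.
def content_matches(row, context):
    keys = ("benchmark_model_id", "band_id", "data_type_id", "theme_id")
    return all(row[k] is None or row[k] == context[k] for k in keys)

row = {"benchmark_model_id": 1, "band_id": None,
       "data_type_id": None, "theme_id": 7}
context = {"benchmark_model_id": 1, "band_id": 3,
           "data_type_id": 2, "theme_id": 7}
print(content_matches(row, context))
# True
```

The row above matches because its null band and data type fields apply to all bands and data types, while its model and theme match the context exactly.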
Chart Type
Type of chart (graph), e.g. Pie Chart 1, Bar Chart 3.
Attributes:
ID: Unique system ID
Name: Name
Chart meta data

Chart
Allowable chart type for specific query, e.g. Bar Chart 1 for Geography and Person Risk.
Attributes:
ID: Unique system ID
Benchmark Model ID: Link to Benchmark Model. May be null (corresponding to all scales)
Benchmark Model Band ID: Link to Benchmark Model Band Type. May be null (corresponding to all bands)
Data Type ID: Link to Data Type. May be null (corresponding to all data types)
Theme ID: Link to Theme. May be null (corresponding to all client interests)
Chart Type ID: Link to Chart Type
Is Default: Use as default (initial display)
The above structure allows one or all parents to be linked. If one chart row links to several parent entities, then an intermediary table is required.
Proposition
Queries are grouped into Propositions, and users have the option to search for queries (benchmarks) for a specified proposition.
Attributes:
ID: Unique system ID
Name: Proposition Name

Section
Queries are grouped into Sections, and universal Queries (benchmarks) displayed in Analytics are grouped into sections.
Attributes:
ID: Unique system ID
Name: Section Name

Non Functional Requirements
Conform to the provider's Central standard
Include:
- Archiving of data
- Conform to Central web design styles
- Conform to Central technical architecture

For example:
- Render a chart in 5 to 10 seconds
- Display area of 980 pixels within Central
- Accessibility requirements

Further Comments

a) Users may be blocked from selecting data sets of fewer than 10 rows. The system may block benchmark template selection when there are fewer than 10 scores in the results. Further action may be defined for when data is changed and the number of scores in a data set (query) drops below 10. Whenever a bar (in a chart) relates to fewer than 10 scores, a value of 5 may be used.

b) An option may be provided to clear all (start a new query).

c) Drafts may be cleared, for example periodically, or when a user logs out. Alternatively a user may always return to the current draft.

d) The following may be returned from queries and saved in the chart XML: counts, percentages, averages, etc.; one or all. The user may be provided with an option to select between measures in the iframe. Alternatively, each template may relate to a single measurement type (more meaningful and controlled).

e) A process may be defined to clear data (Benchmark DB and Assessment Measures DB) when over, for example, five years old.

Design Considerations
FIGS. 53 to 66 show a high-level view of the design considerations for the introduction of the Analytics application into the Central platform, including the overall approach, designs and constraints envisaged at the outset of the project.
The following abbreviations are used:
MIS: Management Information Systems
SODA: SHL On-Demand Architecture
ETL: Extract, Transform and Load
WCF: Windows Communication Foundation

Technology Constraints and Selection
The implementation needs to fit within the overall Central framework in order to enable integration and ongoing code management. An example of a suitable framework is based on the following components:
- Visual Studio 2010 targeting the .NET 4 framework
- .NET charting component used for charting presentation
- Enterprise Library 4.1 for caching
- SQL Server 2005
- SQL Server Service Broker for feedback updates

Code Management
All code is maintained as a branch within a Subversion installation, accessed via an HTTPS link.
Design Principles
The following principles will be applied based on balancing the long-term NFRs against rapid implementation of the project:
Service Layer
Integration with Analytics data is managed behind a WCF service layer. This allows the solution to meet the security NFRs relating to separation of security contexts and presentation.
In addition, this service is implemented as an IIS-Hosted service on an internal port, allowing for scalability of the deployment.
High-Level Design
Analytics Framework
FIG. 53 shows how Analytics sits within the Central system but sources its data primarily from external databases. These databases are managed by the MIS team and populated via ETL processes using SODA and other sources. The population of this data is relevant to the overall implementation architecture in that the ETL runs as a daily bulk process and contends with the Analytics services, requiring specific design approaches.
- "Central-integrated pages" represent the entities used for presentation of Analytics data, the implementation of the charting components, integration with the registration process, and other miscellaneous interactions.
- "Talent Analytics service" represents the WCF service implementation responsible for data access and transformation of raw data into the business model.
- "Benchmark and Index Measures" and "Content Metadata" are the data stores that will contain all information relating to Talent Analytics data output. These are logically separate but may be physically together.
- Index Measures are populated via a separate ETL process from various SODA data sources, and on the basis of the client being registered within Central.
- Demographic feedback updates are sent from Central and merged into the Benchmark and Index Measures.

Analytics Layers
FIG. 54 shows the interaction between the Analytics layers (Central, Central Business Layer, WCF Service Layer and Business Layer) with the Analytics Data.
Exception Handling and Logging
Exception handling and logging are also provided.
Databases
Benchmark and Index Measures
FIG. 55 shows database tables for the Benchmark and Index Measures.
- TalentIndex is the primary de-normalised source for charting data
  - Benchmark data is keyed against a pseudo-client id of −1
  - Data is queried dynamically according to the model definition
- Client and Project data is normalised based on the SODA data rules and is used for specific data queries outside of index statistics, e.g. getting a list of all projects
- Dataset details which projects support which underlying data sets, e.g. OPQ
  - Benchmark data is supported by pseudo-projects and datasets to keep this mapping consistent
Content Metadata
FIG. 56 shows database tables for the Content Metadata.
This is intended to model the business entities defined in the requirements for a Benchmark model and its child entities, and covers various aspects:
- DataType and DataTypeValue model a generic lookup for standardised coded values as they appear in TalentIndex, e.g. DataType Id=1 may be column "Country" with supported values 1="France", 2="United Kingdom", etc.
- BenchmarkModel is the primary table for driving on-screen behaviour, each with multiple Views that represent actual displayed charts
- Translations will be held centrally in a loosely keyed mapping table; this will support translation lookups for a variety of entities, e.g. narratives, model names, languages, data types
  - EntityKey will reflect the parent table for translations
  - EntityId will reflect the URN for that table
  - Example: EntityKey="Narrative", EntityId=3 will find the narrative translations for narrative id 3
- Measure represents the physical mapping to the index data for dynamic querying
- Band and Series are optional tables used depending on the type of view being generated
Feedback Updates
Users of the Analytics system have functionality to be able to provide updates to existing Index, Project and Client data. This data is required to feed back into the main Analytics database and is then used as appropriate for filters and other functionality as if provided by the original ETL process.
Updates can be applied at three levels:
- Client-level firmographics
- Project-level firmographics
- Candidate-level firmographics and hire-status fields
In principle this is a simple case of updating the corresponding Client, Project and TalentIndex fields, but it is complicated by the competing ETL process for daily updates which will contend with and block access to resources.
To mitigate this all user-provided updates are managed asynchronously through the use of SQL Server Service Broker services. This frees up the user from any time consuming SQL calls while allowing for some level of retry/resilience for blocked database updates.
This involves:
- Creation of broker services and underlying queues
- Creation of an invocation stored procedure that will be called from the Talent Analytics service itself and be responsible for submitting the asynchronous request
- Creation of a consuming stored procedure that will consume messages from the queue and perform the actual data updates
FIG. 57 shows an overview of the Feedback Updates process.
One set is created for each of the three update areas, resulting in a ClientUpdateService, ProjectUpdateService and an IndexUpdateService.
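As a hedged sketch, the broker objects and the invocation procedure for the client-level set might be created along the following lines; the message type, contract, queue and procedure names are assumptions, and the project- and index-level sets are analogous:

```sql
-- Message type and contract shared by the three update services
CREATE MESSAGE TYPE [UpdateMessage] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [UpdateContract] ([UpdateMessage] SENT BY INITIATOR);

-- Client-level queue and service (repeated for Project and Index)
CREATE QUEUE dbo.ClientUpdateQueue;
CREATE SERVICE [ClientUpdateService]
    ON QUEUE dbo.ClientUpdateQueue ([UpdateContract]);
GO

-- Invocation procedure called from the Talent Analytics service;
-- submits the update asynchronously and returns immediately
CREATE PROCEDURE dbo.SubmitClientUpdate
    @updateXml XML
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE [ClientUpdateService]
        TO SERVICE 'ClientUpdateService'
        ON CONTRACT [UpdateContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @handle
        MESSAGE TYPE [UpdateMessage] (@updateXml);
END
```

This keeps the user-facing SQL call to a single SEND, with the actual update work deferred to the consuming procedure activated on the queue.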
Messages are structured as XML documents, and are created and consumed using a standard format as below:
<?xml version="1.0" encoding="utf-8"?>
<updates>
  <data clientid="123" projectid="987" datestamp="201110311218" iteration="1">
    <update key="AString1" value="AValue"/>
    <update key="AString2" value="AValue"/>
    <update key="AString3" value="AValue"/>
    <update key="AString4" value="AValue"/>
  </data>
</updates>
where:
- Multiple <data> nodes may occur
- Id is the project, client or index id for the updates
- Datestamp is the datetime as YYYYMMDDHHMM
- Iteration is used to track message failure retries (see below)
- Key/value pairs represent the updates and each consuming stored procedure is hard-coded to match up keys to fields
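A consuming stored procedure for this message format might be sketched as follows. This is a non-authoritative illustration: the procedure name, queue name and column types are assumptions, and the hard-coded key-to-field mapping is only indicated by a comment:

```sql
-- Hypothetical consuming procedure activated on the client update queue
CREATE PROCEDURE dbo.ConsumeClientUpdates
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER;
    DECLARE @msg XML;

    -- Take one message from the queue
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @msg = CAST(message_body AS XML)
    FROM dbo.ClientUpdateQueue;

    IF @msg IS NOT NULL
    BEGIN
        -- Shred the <data>/<update> nodes into key/value rows; each key is
        -- then matched to its Client field by hard-coded UPDATE statements
        SELECT D.dat.value('@clientid', 'INT')        AS ClientId,
               U.upd.value('@key', 'NVARCHAR(50)')    AS UpdateKey,
               U.upd.value('@value', 'NVARCHAR(255)') AS UpdateValue
        FROM @msg.nodes('/updates/data') AS D(dat)
        CROSS APPLY D.dat.nodes('update') AS U(upd);

        END CONVERSATION @handle;
    END
END
```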
Service Throughput
The number of consuming stored procedures instantiated is configurable during queue creation. Initially this will be set to 1 but no assumption should be made in the implementation over concurrent message processing.
Cascading Updates
Updates to project-level fields will be required to cascade down to the associated index records on the TalentIndex table, e.g. industry sector. This could cover any number of index records, so the process needs to reuse the Service Broker approach in order to mitigate blocking during large updates.
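Such a batched cascading update might be sketched in Transact-SQL as follows. The batch size and retry limit correspond to X and Y in the process description; the variable and field names other than TalentIndex and LastDirectFirmographicsUpdate are assumptions:

```sql
-- Inside the consuming Candidate stored procedure: @projectId, @datestamp
-- and @iteration are taken from the consumed message; @batchSize is X.
-- Update a limited batch of stale index records in one transaction.
UPDATE TOP (@batchSize) dbo.TalentIndex
SET    IndustrySector = @industrySector,      -- example firmographic field
       LastDirectFirmographicsUpdate = GETDATE()
WHERE  ProjectId = @projectId
  AND  LastDirectFirmographicsUpdate < @datestamp;

IF @@ROWCOUNT > 0
    EXEC dbo.SubmitCandidateUpdate @msg;      -- resubmit; records may remain
-- On a failed update the message is consumed and resubmitted with Iteration
-- incremented; once Iteration exceeds the retry limit (Y) the message is
-- moved to the UpdateFailures dead message table for manual processing.
```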
When a project update is performed the procedure will be required to:
- Update the Project record transactionally
- Submit a message to the CandidateUpdateService
- Within the consuming Candidate stored procedure:
  1. Select the top X records for the project where LastDirectFirmographicsUpdate < "Datestamp" from the message
  2. Transactionally update the found records with the firmographics data and set LastDirectFirmographicsUpdate to the current datetime
  3. If the update succeeds then resubmit the same message to the service
     - Repeat until the number of records found in 1) is zero
  4. If the update fails then:
     - Consume the active message
     - Submit a new message to the service but increment the Iteration value
     - Once the iteration value exceeds Y the message is considered poisoned and is instead moved to the UpdateFailures dead message table for manual processing
Resource Contention
The ETL process is expected to insert and/or update bulk data on a daily basis. As Central is a 24×7 site the risk of resource contention and blocking needs addressing. In general this will be managed through specific transaction isolation levels for different SQL activities.
FIG. 58 shows the ETL process in outline.
- Extract/benchmark queries against the TalentIndex data will be Read Uncommitted ("dirty read")
- Queries for benchmark metadata will be Repeatable Read
- Update queries generated from consuming Service Broker messages will be Repeatable Read while updating the underlying tables
- The isolation level for the ETL process is outside of the scope of this document but is expected to be Read Committed or Serialisable
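The first two rules can be illustrated as below; the filter columns and variable names are assumptions for the purpose of the example:

```sql
-- Benchmark extract: dirty read so the query never blocks behind the
-- daily ETL bulk load (at the cost of possibly reading in-flight rows)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.TalentIndex WHERE Country = @countryId;

-- Benchmark metadata: repeatable read, so the model definition read at
-- the start of a request cannot change under it mid-query
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM dbo.BenchmarkModel WHERE BenchmarkModelId = @modelId;
```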
Specific processes are also defined for feedback updates and are dealt with in the section Feedback Updates.
Talent Analytics Service
Service Contract
This section defines the structure and contents of the Talent Analytics WCF service implementation.
Service Contract
FIG. 59 shows the Service Contract in overview.
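As FIG. 59 is not reproduced here, a hedged sketch of the contract is given below, limited to the two operations named in the sequence diagrams later in this section; the operation signatures and the data contract type names (BenchmarkModel, Project) are assumptions:

```csharp
using System.ServiceModel;

// Illustrative sketch only; actual operations and signatures are per FIG. 59
[ServiceContract]
public interface ITalentAnalyticsService
{
    // Retrieves a benchmark model definition and its child views
    [OperationContract]
    BenchmarkModel GetModel(int modelId);

    // Retrieves the projects supporting the given model's data set
    [OperationContract]
    Project[] GetProjects(int clientId, int modelId);
}
```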
Data Contract
FIGS. 60 and 61 show the Data Contracts in overview.
Sequence Diagrams
FIGS. 62 and 63 show some sequence diagrams, specifically those for the GetModel and GetProjects sequences. Central to these is the dynamic lookup against the model to find supporting projects: once the column is found, a dynamic query is issued against that column for a value of 1.
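The dynamic query step can be illustrated as follows. The Dataset table is as per the database design above; the variable and ProjectId column names are assumptions:

```sql
-- @supportColumn holds the column name found from the model definition;
-- projects supporting the model have a value of 1 in that column
DECLARE @sql NVARCHAR(500);
SET @sql = N'SELECT ProjectId FROM dbo.Dataset WHERE '
         + QUOTENAME(@supportColumn) + N' = 1';
EXEC sp_executesql @sql;
```

QUOTENAME guards the dynamically substituted column name against malformed input, since only the value side of the predicate is fixed.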
Caching
FIG. 64 shows the caching service in overview. Both talent index and metadata are cached by the service in order to improve performance and minimise SQL traffic.
A CacheHelper utility/extension class is implemented in order to encapsulate caching functionality, which in turn implements an Enterprise Library caching framework.
Three data types are identified and are cached separately to give flexibility to the design:
- Metadata caching: cached as in-memory data with a configured absolute expiry period (e.g. 24 hours)
- Index caching: cached using a database backing store, as the stored datasets are typically large and to some degree unknown. This will also help to implement a shared cache across all service nodes by pointing to the same backing store
- Client caching: initially no client data is cached but the cache helper is put in place to enable it. The implementation will simply return a 'not cached' response
Caching is implemented via the Enterprise Library caching framework to reuse built-in support for database backing stores and also to introduce some flexibility in the decision-making over how to cache entities. The SQL scripts for the creation of the backing store database are provided as part of the Enterprise Library source code package.
Configuration for the implementation will be as follows, noting the three separate cache manager entries for the three caching types and the connection string to the SQL backing store.
<configSections>
  <section name="cachingConfiguration"
    type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
    requirePermission="true" />
</configSections>
<connectionStrings>
  <add name="TABenchmarkCacheBackingStore"
    connectionString="Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=BackingStore;Data Source=."
    providerName="System.Data.SqlClient" />
</connectionStrings>
<cachingConfiguration defaultCacheManager="BackingStore">
  <cacheManagers>
    <add name="MetadataCache"
      type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
      expirationPollFrequencyInSeconds="60"
      maximumElementsInCacheBeforeScavenging="1000"
      numberToRemoveWhenScavenging="10"
      backingStoreName="NullBackingStore" />
    <add name="ClientCache"
      type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
      expirationPollFrequencyInSeconds="60"
      maximumElementsInCacheBeforeScavenging="1000"
      numberToRemoveWhenScavenging="10"
      backingStoreName="NullBackingStore" />
    <add name="BenchmarkCache"
      type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
      expirationPollFrequencyInSeconds="60"
      maximumElementsInCacheBeforeScavenging="1000"
      numberToRemoveWhenScavenging="10"
      backingStoreName="BenchmarkCacheStorage" />
  </cacheManagers>
  <backingStores>
    <add name="BenchmarkCacheStorage"
      type="Microsoft.Practices.EnterpriseLibrary.Caching.Database.DataBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching.Database, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
      encryptionProviderName=""
      databaseInstanceName="TABenchmarkCacheBackingStore"
      partitionName="Benchmark" />
    <add name="NullBackingStore"
      type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </backingStores>
</cachingConfiguration>
Class Design
FIG. 65 shows an example of a suitable class design for the caching implementation.
An example of suitable implementation code is as follows:
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

public enum CacheType { Metadata, Benchmark, Client }

public static class CacheExtensions
{
    private const string MetadataCacheName = "MetadataCache";
    private const string BenchmarkCacheName = "BenchmarkCache";
    private const string ClientCacheName = "ClientCache";

    public static void AddToCache(this object cacheItem, CacheType cacheType, string key)
    {
        cacheItem.AddToCache(cacheType, key, 0);
    }

    public static void AddToCache(this object cacheItem, CacheType cacheType, string key,
        int absoluteExpiryInMinutes)
    {
        var cacheManager = CacheFactory.GetCacheManager(GetCacheName(cacheType));
        if (absoluteExpiryInMinutes != 0)
        {
            var absoluteTime = new AbsoluteTime(TimeSpan.FromMinutes(absoluteExpiryInMinutes));
            cacheManager.Add(key, cacheItem, CacheItemPriority.Normal, null, absoluteTime);
        }
        else
        {
            cacheManager.Add(key, cacheItem);
        }
    }

    public static T GetFromCache<T>(CacheType cacheType, string key)
    {
        var cacheManager = CacheFactory.GetCacheManager(GetCacheName(cacheType));
        return (T)cacheManager.GetData(key);
    }

    // Maps the cache type enum to the configured cache manager name
    private static string GetCacheName(CacheType cacheType)
    {
        switch (cacheType)
        {
            case CacheType.Metadata: return MetadataCacheName;
            case CacheType.Benchmark: return BenchmarkCacheName;
            case CacheType.Client: return ClientCacheName;
            default: throw new ArgumentOutOfRangeException("cacheType");
        }
    }
}
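Usage of these extension methods might look as follows; the keys and cached values are illustrative only:

```csharp
// Cache an object for 24 hours (1440 minutes) in the metadata cache
object narrative = "An example narrative";
narrative.AddToCache(CacheType.Metadata, "Narrative:3", 1440);

// Retrieve it later; GetData returns null for an absent or expired key,
// so the cast yields default(T) for reference types
var cached = CacheExtensions.GetFromCache<string>(CacheType.Metadata, "Narrative:3");
```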
Caching Keys and Expiry
Cached entities are keyed differently depending on their type.
In one example, metadata is held as one or more Benchstrength objects, and so the key is simply the URN of the benchstrength. Each cached entity is cached on a 24 hour basis, configurable at the service level.
Index data is held differently, as the actual data being cached depends on usage and, once cached, will be held for a longer period of time.
Each ‘global’ query for benchmark data is cached for reuse by other users, and is called upon frequently as most charting users use these for comparison. This data is held speculatively for an open period of time in order to avoid rereading of the raw benchmark data.
The key for this type of data is built as a composite of the id values used for the initial query, in the format <data type id>|<filter id>=<filter value>;
For example, if global data is queried with series by industry sector (data type id=1) and filtered by geography (id=3)="Europe" (id=5), then the composite key would be "1|3=5;". Where multiple filters are applied these would be appended accordingly.
If global data is queried with no series split (no data type id) then this would be keyed as "|3=5;"
In some embodiments, no intelligence is used at this stage to cater for filters being applied in separate orders.
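A minimal sketch of building this composite key is given below; the class and method names and the parameter shapes are assumptions:

```csharp
using System.Collections.Generic;
using System.Text;

public static class CacheKeyBuilder
{
    // Builds the composite index-data cache key described above:
    // <data type id>|<filter id>=<filter value>; per applied filter
    public static string BuildIndexKey(int? seriesDataTypeId,
        IEnumerable<KeyValuePair<int, int>> filters)
    {
        var key = new StringBuilder();
        if (seriesDataTypeId.HasValue)
            key.Append(seriesDataTypeId.Value);   // series split, if any

        key.Append('|');

        // Filters are appended in the order given; no normalisation of
        // filter order is applied at this stage
        foreach (var filter in filters)
            key.AppendFormat("{0}={1};", filter.Key, filter.Value);

        return key.ToString();   // e.g. "1|3=5;" for series 1, filter 3=5
    }
}
```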
ETL Workflow
FIG. 66 shows an example of ETL workflow.
There are several important points to be observed:
- Central maintains the list of allowed and enabled clients for Talent Analytics. This data needs to be queried as the driving dataset for the ETL process. Where new clients are added a full extract is required, otherwise a partial extract based on the last extract date for that client
- Firmographic updates from SODA extracts are based on a checksum comparison against the existing Talent Analytics data. If the value differs then the ETL process will overwrite the data record
  - SODA extracts for project data should take the maximum modified date where firmographic entries are kept in separate data records
- Firmographic updates directly from within Talent Analytics will check the effective date prior to update. If the timestamp on the update message is earlier than the effective date the update will be aborted
Further Features
Alternative embodiments may comprise one or more of the following features, in any appropriate combination:
- Test taker data updates and improvements
  - Recruitment channels
  - Employee or not
  - When and why did they leave the recruitment process: self-selected out, rejected at interview, etc.
  - Adding filters to support the user cutting data by these classifications, as well as the reason for assessment code on project level
- An efficient SODA project/candidate data cleaning process prior to uploading client data to Talent Analytics
  - A process for clean-up of client project data where the project structure does not support TA, e.g. merge projects, delete candidates
  - A process for adding information about test takers and projects in a bulk upload of data, e.g. industry, business function or test taker status information
- Functionality and support for a TA annual license model of pricing by talent pools
- US English language version of the Application and essential parts of Central
- New SODA instruments added to the TA database and score calculations:
  - OPQ32r data: extend the DB structure to cope with OPQ32r equated raw scores (may need changes to current OPQ BMs)
  - MQ data
  - DSI data
  - Verify Mechanical Comprehension
  - Verify Checking
  - Verify Calculation
- Updates to existing SODA data sets with new annual data:
  - OPQ32
  - Verify Numeric
  - Verify Verbal
  - Verify Inductive
- Add S2P data to the TA database and include in benchmarks:
  - Selected solutions where there are significant enough data sets to create benchmarks
  - New database fields for solutions scores
  - Daily automated data update process for S2P data mirroring the current solution from SODA
- Additional benchmarks:
  - UCF combined benchmarks for OPQ and Verify scores (and optionally S2P solutions scores)
  - Additional Risk benchmarks
  - Additional OPQ (and optionally MQ) Sales Model benchmark
  - Additional Verify benchmarks for the rest of the test types in the portfolio:
    - Mechanical
    - Checking
    - Calculation
  - Additional demo/dummy benchmarks for unauthorised users (e.g. cut-down version of the Competency BM)
  - MQ benchmarks
  - DSI benchmark
  - S2P solutions benchmark
  - Custom specific benchmark (e.g. customer's own competency models)
- Enhanced graphical displays to be used for additional benchmarks or to change current benchmarks working with bar charts, e.g. heat maps or geographical maps
- Further and expanded export functions
  - Export to PPT, Word, PDF
- Enhanced print function
- Enhanced error/warning messages and progress visibility for usability improvements
- Administration interface for creating and maintaining data and the TA benchmarks
  - Database updates and management of quarantined data
  - Switch on/off filter options as visible to the end user (geography, industry, job level and business function)
  - Manage minimum data set value (currently 30 test takers) for benchmark displays
- Allow custom specific benchmarks to be added to TA and restricted to only that client's users
- Advanced user model:
  - Different users/levels of users can access different sets of benchmarks (the current version only has one access level and all users can access all benchmarks)
  - Demo/non-restricted access to a demo benchmark in TA for unauthorised users
- Allow multiple SODA clients to be added to the same TN Central user (it is a 1-to-1 relationship today)
- Allow users to set default values for their data in TA, e.g. main industry and reason for assessment
- Additional Clear filters and cancel functions
- Further colour/selection combinations on benchmarks and project data
- Function to allow the user to restructure their groups of projects or candidates inside the application, to create and save their own talent pools
- Link to help from the TA query page
- Allow for manual cleaned-up project uploads for some clients (quarterly?) rather than automatic ones
- Allow for different project numbers, e.g. from 30 to 10
- Assorted scalability and performance monitoring/enhancements
Examples of Benchmarks
Examples of Benchmarks include:
- Competency
- Leadership potential
- Ability (including Verbal, Numerical and Inductive Reasoning)
- Behavioural Risk
The SHL Competency Benchmark
The SHL Competency Benchmark enables organisations to obtain an overview of the talent they attract and employ, and to identify where they need to invest in talent acquisition, learning/development and succession management programmes.
The SHL Universal Competency Framework
This Competency Benchmark builds on SHL's Universal Competency Framework (UCF). The UCF is based on extensive worldwide scientific research, which examined hundreds of competency models across a wide range of different organisations around the world.
It sets out the key behaviours that drive performance using a standard, proven hierarchy that can be applied to virtually any job at any level in any organisation. It is structured on two main levels, from the ‘Great 8’ factors, to 20 dimensions linked to those factors.
The 8 factors consist of general categories of behaviours that influence performance across many different jobs, and the 20 dimensions provide a more detailed breakdown of each of the 8 factors, providing a further description of different types of behaviour that impact job performance.
FIG. 67 shows the Universal Competency Framework Great 8 and the benefits they drive.
For each of the Great 8 factors, you can drill down to the dimension level to explore the finer detail that these offer in answering talent questions. Specific views can also be created by mapping an organisation's own competency model to the UCF. Whether they are organisation-wide or role-specific competencies, mapping to the UCF can be overlaid onto Talent Analytics to provide a view of your people and your identified benchmark populations (eg. an external talent pool by industry and geography).
Creating the Competency Benchmark
In developing the Competency Benchmark, we looked at the distribution of scores across more than 1 million assessments conducted between 2006 and 2010 across 37 industry sectors, 30 countries, 31 business functions and 5 job levels. This database will continue to expand as more SHL assessment data is added.
The Benchmark has been calibrated globally, as taking a global view of talent reflects the dynamics of the economic and labour markets in which organisations now operate.
It also provides a flexible lens through which organisations can compare their people and processes to determine actions required to strengthen their talent management. By filtering benchmark populations by geography (several countries can be selected together), industry (several industries can be selected together), business function and job level, you can investigate any number of talent issues in the knowledge that the ‘bench strength’ views are consistent, reflecting real variations in talent across the populations you choose to benchmark against.
We have defined top talent as the upper quartile (top 25%) range of scores globally on the UCF Great 8 and 20 dimensions. The bench strength views provided by this Benchmark show the proportion of people who fall into the upper quartile range on the factors and dimensions—the higher the proportion, the greater the bench strength.
Case Study
A multinational technology company were undertaking a major change in approach to their markets, product development and engagement with their customer base. This meant a substantial shift in the values and key behaviours that would drive achievement of the new business strategy.
They ran a number of assessment programmes and wanted to take a macro view of the data to get an overview of the talent they attract and employ, to identify where they had bench strength to succeed with the change, and where they needed to invest in talent acquisition, learning/development and succession management programmes.
Such an undertaking raised two challenges: how do we benchmark our talent and what do we know of the talent pool in our industry?
SHL Talent Analytics addressed both problems by organising the client's data in a form that presents a clear talent profile, and by giving them a view of what the bench strength of the industry talent pool looks like.
FIG. 68 shows a talent profile, specifically the talent profile for the company against the technology sector for managers, professionals and graduates. Top talent is defined as those scoring in the top quartile (top 25%) on each of the Great 8 factors. For the industry, you can see the proportions of people qualifying as top talent in each factor, and the client can also see how they stack up against that profile.
Effectively, the Competency Benchmark is used to identify bench strength and areas to address in talent acquisition.
The global view of the technology industry shows bench strength in Leading & Deciding and in Creating & Conceptualising, but a lack of bench strength in Enterprising & Performing. We can also see that the company outperforms the sector for Creating & Conceptualising and Organising & Executing, but underperforms the sector for Supporting & Cooperating and Interacting & Presenting.
So how could the company use this insight? A cornerstone of their change was to develop greater engagement with their customer base. To achieve that, a key element of their internal talent management programme was to foster greater engagement across their workforce, as well as reframe their reward and recognition around achievement, where Enterprising & Performing would be a critical driver.
The client saw where they had the appropriate talent and where their talent gaps were. Drilling down by line of business, job levels and geographies enabled them to understand where to invest in terms of targeted learning and development, as well as how to change their performance management processes.
This case study shows how the SHL benchmarks and benchmark populations help to identify how competitive an organisation is in acquiring talent, where variation exists in talent processes and how these insights help identify where to invest to strengthen talent management for an organisation. Talent Analytics can help to identify where potential lies in an organisation and what development needs to focus on to leverage that potential effectively.
While the project for this client was focused on talent acquisition, the analytics also point to where the strongest internal pools of talent are and where the development of analytical skills will deliver the greatest value to the organisation.
Other SHL Benchmarks Available
The SHL Competency Benchmark can be used alongside the SHL Leadership Benchmark to diagnose leadership bench strength and learning/development priorities, and how to strengthen succession planning.
The SHL Ability Benchmarks can also be used to provide detail on the bench strength of cognitive ability supporting specific areas of competency (eg. Interacting & Presenting, Analysing & Interpreting, and Creating & Conceptualising).
The SHL Leadership Potential Benchmark
One of the key issues in effective succession management is identifying clear development needs. In 2011, the Corporate Executive Board (CEB) found that only 43% of country and regional executives had confidence in their successors, while in Asia this dropped to 26%. The study also showed that only 1 in 4 employees had confidence in their employer having the leaders to succeed in the future.
The SHL Leadership Benchmark builds on the SHL Leadership Model, and provides a benchmark of leadership potential. The Leadership Model takes into account transactional competencies (required to analyse, plan and execute tasks, projects and programmes) and transformational competencies (required to develop new insights and strategies, communicate those insights and strategies effectively to others, and to set out clear goals and motivate others to achieve them).
Creating the Leadership Potential Benchmark
In developing the Leadership Potential Benchmark, we looked at the distribution of scores across more than 1 million assessments conducted between 2006 and 2010 across 37 industry sectors, 30 countries, 31 business functions and 5 job levels. This database will continue to expand as more SHL assessment data is added.
The Benchmark has been calibrated globally, as taking a global view of talent reflects the dynamics of the economic and labour markets in which organisations now operate.
It also provides a flexible lens through which organisations can compare their people and processes to determine actions required to strengthen their talent management. By filtering benchmark populations by geography (several countries can be selected together), industry (several industries can be selected together), business function and job level, you can investigate any number of talent issues in the knowledge that the ‘bench strength’ views are consistent, reflecting real variations in talent across the populations you choose to benchmark against.
FIG. 69 shows the relationship between the SHL Leadership Potential Benchmark and the SHL Leadership Model.
Along the horizontal axis, we have the strength of people in terms of transactional competencies that drive the management of processes and delivery against targets. These are competencies that one would expect of operational managers, but are also key competencies underpinning effective performance as a corporate leader.
On the vertical axis, we have the strength of people in terms of transformational competencies that underpin the capacity to drive innovation and change in an organisation. These are competencies that one would expect of functional managers as well as technical specialists, but are also key competencies underpinning effective performance as a corporate leader in giving that leader the capacity to visualise new opportunities for their organisation as well as understanding the dynamics of successful change.
The Leadership Benchmark shows where populations are in terms of their trajectory to the top right of the model, and illustrates an overall competency profile underpinning performance in the transactional and transformational aspects of corporate leadership.
The Leadership Potential Benchmark Levels
Transactional and transformational aspects of effective leadership are summarised in the Leadership Potential Benchmark using a simple five level classification with the proportions against each level derived from the likelihood of having a rounded leadership profile. The levels of the Benchmark and their interpretation are summarised in the table below:
Very Low: Less likely to have the strong and rounded competency profile required of an effective leader and more likely to be effective in a well defined role with clear responsibilities and expectations.
Low: May have some strengths but also likely to have significant development needs across both transactional (managing processes and delivering against targets) and transformational (driving innovation and change) competencies.
Moderate: Development of several competencies required, as shown by a capability to operate in either a transformational or transactional role, but not both, or through a moderate profile across critical leadership competencies.
High: A strong overall profile of transformational (setting strategy and change agendas) and/or transactional (turning ideas and concepts into tangible actions and plans) competencies, but also likely to require development in specific areas to realise leadership potential.
Very High: Very likely to operate effectively in both the transformational (setting strategy and change agendas) and transactional (turning ideas into tangible actions and plans) aspects of effective leadership.
The Benchmark shows actual proportions of the global population of managers, professionals and specialists across industries that fall into each level of the Benchmark. This reflects the distribution of talent across both the transactional and transformational dimensions of leadership. It can be used in combination with the SHL Competency Benchmark to enable detailed drill-downs at all levels of leadership potential to identify where key development gaps are.
Case Study
A major utility company was reviewing its leadership talent in line with best practice for regular talent reviews. The company had two questions: how do my people compare to the utility industry in the UK and how do they compare to senior managers and executives in the UK?
They wanted an external view of their people to remove subjectivity in decision making, and a clearer sense of how strong their pipeline was in comparison with a) the talent pool for their industry and b) the talent pool for the level of position they were planning succession for, i.e. to understand whether to develop and promote, or hire external candidates to fill succession gaps.
FIG. 70 shows an analysis of leadership potential. The general bench strength of their people was good, with 73% of their candidates in the High or Very High bands of leadership potential, and they also compared well to their industry sector geographically and to the bench strength of senior managers in that geography. However, 27% of their candidates fell into the Very Low to Moderate bands.
Further analysis showed where bench strength was stronger by line of business and functional role. Linking to the SHL Competency Benchmark identified key areas to target coaching and development actions in Supporting & Cooperating and Organising & Executing. This suggested a need to focus on how programmes and projects are organised, how standards for quality and customer service were set and followed up, and how some of the leadership cohort maintained positive engagement with their staff.
Drilling into the data showed that the company had several competitive advantages in its leadership pipeline when compared to the industry and senior managers geographically. There was clear bench strength in Leading & Deciding, Interacting & Presenting, Creating & Conceptualising, Adapting & Coping as well as Enterprising & Performing. This macro view provided a framework to facilitate individually tailored feedback for progression to more senior roles, and greater focus and alignment of coaching and mentoring programmes.
FIG. 71 shows an analysis of Leadership potential by sector and geography.
This case study is an example of how the SHL Leadership Potential Benchmark has been deployed in a talent mobility and succession context. The Benchmark can also be applied in the context of talent acquisition to identify how effective talent attraction and acquisition processes are in supplying a strong feed into an organisation's leadership pipeline.
The Benchmark has been used by organisations to gain a proactive view on questions such as whether their graduate or college hire programmes are providing the calibre of employee who has the potential to staff future leadership positions, and whether their current cadre of middle and senior managers will provide the leadership they need to compete with other organisations as well as meet the needs of their organisations today and for the foreseeable future.
The SHL Ability Benchmarks
The abilities people have are talents that support the execution of tasks and the achievement of critical outcomes for organisations. Many organisations use ability tests to pre-screen and select people for positions where analytical skills and innovation are key requirements. The SHL Ability Benchmarks enable an organisation not only to identify the strength of ability they attract and employ, but also how effective and consistent their talent processes are.
The Ability Benchmarks use a generic classification of ability according to five levels. To maintain consistency with widely used classifications in testing and assessment, the levels associated with each benchmark represent the first decile (Level 1), the next 20% (Level 2), the middle 40% (Level 3), the next 20% (Level 4) and the upper decile (Level 5).
The five levels of the ability benchmarks are described below:
Level: Definition
Level 1: More likely to be comfortable performing tasks where the requirement for this ability is low, or where tasks requiring this ability are undertaken with supervision and support.
Level 2: Likely to be able to perform tasks where lower levels of this ability are required and where some supervision and support is provided.
Level 3: Suggests a reasonable fit to tasks requiring this ability, but also the need for further development where higher levels of this ability are critical.
Level 4: Suggests a good fit to tasks where higher levels of this ability are required.
Level 5: Suggests a strong fit to tasks where high levels of this ability are required.
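The decile-based definition of the five levels amounts to a simple mapping from a candidate's percentile score to a benchmark level. A minimal sketch of that mapping is shown below; the function name and the treatment of exact boundary values (a score of exactly 10 falls into Level 2, and so on) are illustrative assumptions, as the source does not specify them:

```python
def ability_level(percentile: float) -> int:
    """Map a percentile score (0-100) to the five generic ability benchmark levels.

    Level 1: first decile (bottom 10%); Level 2: next 20%; Level 3: middle 40%;
    Level 4: next 20%; Level 5: upper decile (top 10%).
    """
    if not 0 <= percentile <= 100:
        raise ValueError("percentile must be between 0 and 100")
    if percentile < 10:
        return 1
    if percentile < 30:
        return 2
    if percentile < 70:
        return 3
    if percentile < 90:
        return 4
    return 5
```

For example, a candidate at the 85th percentile would sit in Level 4, while one at the 95th percentile would sit in Level 5.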
Creating the Ability Benchmarks
In developing these Benchmarks, we looked at the distribution of scores across more than 1 million assessments conducted between 2006 and 2010 across 37 industry sectors, 30 countries, 31 business functions and 5 job levels. This database will continue to expand as more SHL assessment data is added.
The Benchmarks have been calibrated globally, as taking a global view of talent reflects the dynamics of the economic and labour markets in which organisations now operate.
They also provide a flexible lens through which organisations can compare their people and processes to determine actions required to strengthen their talent management. By filtering benchmark populations by geography (several countries can be selected together), industry (several industries can be selected together), business function and job level, you can investigate any number of talent issues in the knowledge that the ‘bench strength’ views are consistent, reflecting real variations in talent across the populations you choose to benchmark against.
Ability Benchmark Levels
The levels of the benchmarks are generic and should be interpreted in the context of the specific ability that is benchmarked. The three most commonly used ability tests for graduates (college), managers and professionals are:
- Verbal Reasoning: the ability and potential to reason with written information, to understand the key relationships in that information and the most logical conclusion to draw
- Numerical Reasoning: the ability and potential to work with numerical information in tabular and graphical form, identify the key relationships in that information and the most logical conclusion to draw
- Inductive Reasoning: the ability and potential to work with novel information and, from first principles, work out the relationships in that information to be able to identify the next in a sequence of events
Our research shows that Verbal Reasoning ability predicts effective communication as well as problem solving where text information is a critical source of information for evaluating issues and problems; Numerical Reasoning ability predicts problem solving where numerical data is critical to evaluating issues and problems; and Inductive Reasoning ability predicts the ability to develop solutions from first principles and innovation.
These abilities have also been mapped to the SHL Universal Competency Framework (UCF) and the links between the three example abilities described above and UCF behaviours are shown in the table below. For organisations using other assessments that can be mapped to the UCF, the ability benchmarks can be used alongside The SHL Competency Benchmark to gain a fuller understanding of the bench strength of an organisation's people and processes.
Ability: UCF Great 8 Factor: Behaviours
Verbal Reasoning:
- Interacting & Presenting: builds positive relationships by communicating, networking and influencing effectively
- Analysing & Interpreting: gets to the heart of complex issues and problems through clear analytical thinking and effective application of expertise
Numerical Reasoning:
- Analysing & Interpreting: gets to the heart of complex issues and problems through clear analytical thinking and effective application of expertise
Inductive Reasoning:
- Creating & Conceptualising: applies innovation and creativity to develop new solutions in the context of the organisation's wider strategy
Case Study
An international bank with investment and retail arms wanted to answer two questions: how competitive are we in hiring graduates with strong cognitive ability, and how consistent are we in the ability levels of those we hire across our geographies, job levels and lines of business?
Since this client operated globally, the global banking industry was chosen as the benchmark population.
FIG. 72 shows an analysis of ability, specifically Global Banking Client and overall performance against ability benchmarks for sector. You can see that overall they were outperforming the sector globally on both Verbal and Numerical ability benchmarks and so the answer to the first question was good news—they were doing well in competing for graduate talent.
With regard to the second question, analysis showed high consistency by geography and job level. However, when the analysis compared lines of business, there was variation and lower consistency.
FIG. 73 shows Global Banking Client and variations in bench strength by line of business (numerical reasoning ability benchmark). Two of their lines of business that exemplified this inconsistency are shown.
Line of Business B was substantially outperforming the sector while Line of Business A was not. This may reflect differences in the attractiveness of this client to potential employees across their lines of business, and may reflect inconsistencies in the processes and standards applied to them.
Either way, the analytics showed the client where to focus their efforts in lifting the effectiveness of their talent attraction and acquisition efforts across their business, as well as where to take deeper dives to address questions such as the competitiveness of packages and career opportunities.
This case study shows how SHL benchmarks and benchmarking populations help to identify how competitive an organisation is in acquiring talent, where variation exists in talent processes and how these insights help identify where to invest to strengthen talent management for an organisation.
Talent Analytics can help to identify where potential lies in an organisation and what development needs to focus on to leverage that potential effectively. While the project for this client was focused on talent acquisition, the analytics also point to where the strongest internal pools of talent are and where the development of analytical skills will deliver the greatest value to the organisation.
The SHL Behavioural Risk Benchmark
The behaviour of your employees may either strengthen the resilience of an organisation to negative events, or increase the likelihood of such events and the magnitude of their impact. We believe that risk is a natural part of organisational life and organizations need people with an appetite for risk if they are going to seize opportunities and move forward—as our model shows, one of the biggest risks for an organisation is to lose momentum and fail to act.
But, if risk is not measured and managed properly, the impact can be both internal and external. Reflecting on any high profile industrial accident of recent times, safety can clearly be put at risk by what people do, or fail to do (see the SHL white paper The DNA of safety and why accidents keep happening).
Safety is not the only risk that organisations face through the behaviour of people. The intangible reputation of an organisation is put at risk when it is seen to have failed in anticipating and managing events effectively. Often it is the failure of an organisation to understand and manage employee behaviour that causes the most lasting damage to its reputation.
“Of all the management tasks in the period leading up to the global recession, none was bungled more than the management of risk” Harvard Business Review—October 2009
The behaviour of employees can create the risk of losing customers when poor customer service destroys customer loyalty. This, together with reduced product quality, increased production costs, employee absenteeism and turnover, may be a symptom of dissatisfaction with the way decisions are made and communicated in an organisation. Such symptoms will often reflect the way standards for quality, as well as for behaviour, are promoted and reinforced by managers or supervisors. A lack of commitment among front line staff doesn't just happen by itself.
These are some of the reasons why we believe that the behaviour of people is what fundamentally drives risk in organisations. You may have conducted risk reviews and strengthened your policies and procedures, but ultimately it is what your people do and how effectively they are managed that will drive risk.
The SHL Behavioural Risk Model
We believe that a significant value-add from talent management is the contribution it can make to effective organisational risk management. The SHL Behavioural Risk Model brings that to life, and can help you contribute to how your organisation understands and mitigates risk, responding to these questions and challenges.
The Model has eight indices that enable you to look at the process impacts of behaviour, from the quality of decision making to whether employees are likely to comply with procedures and policies, and at the people impacts of behaviour, from the quality of communication, to taking and promoting responsibility for actions, to effective teamwork and employee commitment. It tells you how the actions of your people position the organisation for risk, and where it is more likely to be resilient to risk and where it is not.
FIG. 74 shows the relationship between appetite for risk and resilience to risk.
One of the biggest risks in any organisation is the failure to act, so we have incorporated the Momentum to Act alongside Resilience to Risk at the top level of the model:
- Appetite for Risk: the propensity of your people to make timely, perhaps tough, decisions to seize the initiative, and to see actions through to achievement of a goal. The model recognises the need to take action where there may be risks, and that all organisations need people with an appetite for risk. It also recognises that organisations should be aware of their people's actions that create, rather than mitigate, risk.
- Resilience to Risk: whether the behaviour of your people mitigates risk through effective decision making, translating into clear standards for how those decisions are realised through the execution of programmes, projects and tasks. Is the quality of communication effective in setting the tone for behaviour in your organisation, by encouraging a shared sense of responsibility and collaboration?
At this level, we can identify four states of organisational health in relation to risk. Organisations in the strongest state combine a higher appetite for risk with higher resilience to risk, so risks are more likely to be identified and addressed earlier. Those in the weakest state have low momentum to act and low resilience to risk, making both the positive impact of actions and their associated risks harder to foresee and prepare for.
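The four states amount to a simple two-by-two classification on the model's top-level dimensions. A minimal sketch is given below; only the strongest and weakest states are characterised in the text, so the labels for the two intermediate states are illustrative assumptions, not SHL terminology:

```python
def risk_state(high_appetite: bool, high_resilience: bool) -> str:
    """Classify an organisation into one of the four top-level risk states.

    Only the strongest and weakest states are described in the source;
    the two intermediate labels are illustrative.
    """
    if high_appetite and high_resilience:
        return "strongest: risks identified and addressed earlier"
    if high_appetite:
        return "exposed: momentum to act without resilience to risk"
    if high_resilience:
        return "guarded: resilient but lacking momentum to act"
    return "weakest: low momentum to act and low resilience to risk"
```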
Resilience to Risk incorporates six behaviours that can be looked at from two perspectives. SHL Talent Analytics enables you to drill down to the detail within these two perspectives, to help you understand the risk profile of your people from senior executives to frontline employees.
The Two Perspectives of Resilience to Risk
FIG. 75 a) shows the first perspective of resilience to risk, which focuses on conditions that promote effective execution of tasks to time, quality and cost, in line with internal/external policies and procedures. This is the process perspective and has three components:
1. Decision Quality: looks at the extent to which decision making is based on a clear commercial evaluation of data and evidence, and at the wider context of the organisation's capability to produce workable solutions. This index is drawn from the OPQ and is relevant to all levels involved in decisions that frame the direction and tasking of front line employees.
2. Following Through: looks at the likelihood that those decisions will, through the know-how of your employees, translate into tangible, customer focused plans of action that will deliver to time, cost and quality. This index is also taken from the OPQ and is particularly relevant for middle managers, supervisors and team leaders and their capacity to leverage expertise to organise resources and execute effectively.
FIG. 75 b) shows the second perspective of resilience to risk, which focuses on conditions that promote a shared sense of responsibility, ethics, and openness and collaboration among employees. This is the people perspective of the model and has three components:
1. Communication Quality: looks at how clear and effective the communication of decisions is in promoting organisational goals and achieving buy-in. This index is drawn from the OPQ and is relevant to all levels involved in decisions that frame the direction and tasking of front line employees.
2. Setting the Tone: looks at whether buy-in will be reinforced by the behaviours of managers and team leaders, so that a shared sense of ethics, a culture of collaboration and mutual responsibility are more likely. This index is also drawn from the OPQ.
Indices
Benchmarks may also be used to determine an “index”, for example to describe and rate quantities such as “People Capital”, “Talent Pipeline” and “Risk”.
People Capital Index
The phrase 1+1=3 is a familiar one for describing the additional value from harnessing the resources in an organisation. We all know from over 30 years of research and client projects that the multiplier of having strong talent has an even greater impact on organisational success.
Our model captures this multiplier through the concept of 2 into 4 and by including the capacity to execute and engage with others effectively. It allows you to take an objective view across your talent acquisition and talent management activities to see whether all of those activities are building an effective talent base to meet your organisation's needs. The index can be applied at any point in your talent processes, from the acquisition of new employees through to succession planning and high potential programmes.
Our scientific research shows that effective execution relies on two key talents:
- Thinking Agility: goes beyond intellectual ability to look at the capacity to handle different levels of complexity, understand problems and issues, and construct effective solutions
- Capacity to Achieve: provides insight into the energies you can call upon from your people and, importantly, how effective they will be in channelling those energies into effective projects and programmes that will deliver quality outcomes
You probably run engagement surveys, and they will offer you value in terms of perceptions of your organisation and your managers. But will they tell you how effective your people can be in building relationships inside your organisation and outside it, with your customers and external stakeholders? The People Capital Index gives you the answer to that question by looking at two key talents for effective engagement:
- Interpersonal Agility: provides insight into the capacities of your people to operate across a range of interpersonal contexts, to build strong and positive relationships, and to influence and bring others with them
- Capacity for Change: reflects a simple fact of today's working world, namely that it is constantly changing. This aspect of our people capital model lets you see the capacities you can call upon to overcome obstacles and persevere in the attainment of organisational goals, and to embrace and support others through change
You will note that our model captures both the capacities of people (their energies and their scope) and their agility in using their talents to drive success and leverage change as an opportunity.
You can drill down to more detailed information below each of the four people capital talents to understand in detail the behavioural strengths your organisation is building to meet the challenges of today's world.
The Talent Pipeline Index
The world of talent management agrees on at least one thing: a healthy talent pipeline is essential to organisational success. But, do you have to wait months or even years to capture data on how well your talent pipeline is delivering? The Talent Pipeline Index gives you the proactive capability to look at your pipeline and identify the actions you can take at all points in the management of that pipeline, from talent acquisition to learning and development programmes to succession planning, and it offers this insight across your business functions.
You may already have internal metrics that give you a sense of how healthy your pipelines are, but do those metrics tell you how you compare to the organisations with whom you are competing? That's the capability the Talent Pipeline Index gives you so you can anticipate the actions you will need to take and access the data you need to build your business case for those actions.
So, what will the Talent Pipeline Index tell you? Based on our scientific research and the wider literature on the talents required to achieve career success in a senior role, the index will tell you the proportions you attract, acquire and manage against six levels benchmarked globally:
- Contributor: people who may have the talents to add value in an operational and transactional role, but are unlikely to have what it takes to be successful in more senior roles
- Specialist: people with the talent to be effective in technical and creative roles, and who are likely to find operational management a challenge
- Operational Manager: those whose talents indicate that they will excel in the day-to-day management of operations, projects and tasks
- Middle Manager: those who, in addition to the talents required for day-to-day operations, are also likely to offer talents in communication and engaging staff
- Senior Manager: those who have the talent to prove themselves in the execution of operational and transactional tasks, and the talent to operate as a functional manager and manager of managers
- Executive: those with the talent to be successful in transformational roles within organisations, bringing fresh perspectives and new realities combined with the talent to influence and bring others with them
If you want to know the talent pipeline that your graduate, manager and professional recruitment programmes are building for the future of your organisation, if you want to anticipate the learning and development investments that will leverage your pipeline most effectively, and where hidden high potential is sitting in your organisation, then the Talent Pipeline Index will guide you towards the answers to those questions.
The Risk Index
Harvard Business Review had this to say in the introduction to its special edition on risk: “Of all the management tasks that were bungled in the period leading up to the global recession, none was bungled more than the management of risk.” [Harvard Business Review (2009). Spotlight on risk. October.]
How complete is your organisation's architecture for managing risk? If it doesn't embrace the behaviours of your people that are the real source of risk to your organisation, then it isn't complete. You may have conducted risk reviews and strengthened your policies and procedures, but, ultimately, it is what your people do and how effectively they are managed (what your managers do) that will drive the risks to your organisation.
How strong is the contribution of your talent management processes to the organisation's risk management architecture? Can your talent managers contribute easily and proactively to strengthening the organisation's risk mitigation? If the answer to the first question is that the contribution is weak, and the answer to the second question is no, you can take some solace from the fact that you are not alone.
The Risk Index gives you the capability to strengthen your risk management by giving you a direct read of the behavioural risks in your organisation, and enables those in talent management to contribute to risk mitigation by quickly and easily identifying behavioural risk from talent acquisition and deployment of staff through to training and development needs throughout an organisation.
Based on research across a wide range of industries from the financial sector to oil and gas, you can look at levels of behavioural risk across your organisation and drill down to identify those risks associated with the day-to-day interactions between people (what we call People Risks), and at those risks associated with following processes, compliance with procedures and ensuring that standards and quality are maintained (what we call Process Risks).
If you want to know where your people are more likely to communicate effectively, build a positive team atmosphere, plan ahead, focus on maintaining standards, commit to the organisation and uphold its values, then the Risk Index will give you the answer.
It will also tell you whether your recruitment and development processes are being effective in screening for and managing those people who are more likely not to listen and fail to communicate, work against company values, create a negative atmosphere, and not commit to standards and procedures, all behaviours captured by the Risk Index.
The behaviour of people is what fundamentally drives risk in organisations. We believe that a value add from talent management is the contribution it can make to effective organisational risk management. The Risk Index makes that contribution a reality.
Our system provides you with evidence-based and scientifically researched guidance to help you identify the effectiveness of your talent programs. You gain instant access to the largest global database of talent data and insights to benchmark your workforce performance and make more informed decisions that can impact organizational effectiveness, productivity and, ultimately, competitive positioning.
With our system you can drill down to your specific talent data and benchmark it by geography, industry and business function, simply and easily. You'll uncover key insights about the talent you attract, their performance, as well as their management and leadership potential. And with these insights, you can make better decisions about your talent programs, faster and with greater certainty.
Our system helps you improve the effectiveness of how you plan and execute your talent programs and enables you to accurately measure the bottom line impact of your investment decisions.
Advantages:
- Gain evidence-based insight for more informed decisions.
- Identify, prioritize and measure talent investments and programs, and align them to strategic organizational goals.
- Drive focused, systematic change faster, more efficiently and with higher value outcomes.
FIG. 76 shows an example of risk index by industry sector.
Benchmark: Quality of Hire/Overall Risk in Talent Pool
Query statement: I want to benchmark the people risk of the people I attract by talent pool in my industry
Required Instrument(s): suitable test(s) that can provide metrics data, for instance SHL's OPQ32i or OPQ32r.
System
- Provide ‘Overall Risk’ benchmark
  - Need fast, accurate data extracts to support analysis
- Upload new benchmark data to benchmark database (DB)
- Deliver all supporting text for the interface and supporting documentation/white papers/fact sheets
  - May also attach a business outcome link or paper where these match closely enough, or refer to these in the support documentation
- Test database and benchmark results in benchmark DB
- Equations for risk bands built in to application client data calculation
- Enable the query and any support data needed on client data to match the benchmark
- Create and publish draft new benchmark on the platform
  - Add all relevant tags such as obligatory instrument, optional instrument and best before date; specify the minimum number of cases/data that can be displayed; specify drilldown/sub query options; specify necessary information from client data
  - Attach all support documentation to the benchmark
- Test benchmark in analysis tool
- Publish benchmark on platform (ready for client to start using)
End User
- Log on (assuming access has been enabled)
- Open the benchmark library, or use the “I want . . . ” combination of query to look for benchmarks
  1. Preview a couple of benchmarks
     1. Read summary info
     2. Open and read “fact sheet” (print/save)
  2. Select the desired benchmark (only one)
- Add user data
  - Select projects
  - If classification data is missing:
    1. Add industry information to projects
    2. Add information regarding the work function (use demographics)
    3. Add information regarding candidates:
       1. Who did you make an offer to
       2. Who accepted
       3. Who is still in your organisation (who left)
    4. Save updated data for future use
- View benchmark with my data
  1. Hover over relevant areas of the graph for info on the benchmark and “so what” statements related to the data
  2. Filter and drill down:
     1. By geography (e.g. now I want to benchmark the people risk of the people I attract by talent pool in my geography)
     2. By years
     3. By project (if more than one)
     4. By business function (demographics classification)
- Save benchmark and my data for future use
- Print or export graph (not data)
- Look for next benchmark or close the application
Proposed Layout of Initial Benchmark Display
- Benchmark only
- Benchmark and client data
- Hover-over text for benchmark scenarios in graph
- Factsheet/white paper supplied as support material for the benchmark
- Text description for the benchmark library
Equations to be Used in Data Comparison Preparation/Calculation
A. Create Z scores for Universal Competency Framework (UCF) Great 8 competencies 2, 6 and 7:
   1. Calculate the mean of the candidate score distribution.
   2. Calculate the standard deviation of the candidate score distribution.
   3. For each candidate, subtract the mean from their score and divide the result by the standard deviation.
B. Add these Z scores together to create a new variable called the overall risk index.
C. Apply the following cut-offs to the risk index variable to create a new variable called risk bands:
   a. Lowest through −1.98095746718274 = band 1
   b. −1.98095746718275 through −1.3043517397815 = band 2
   c. −1.3043517397816 through 0.823981688831731 = band 3
   d. 0.823981688831732 through 1.99568946042002 = band 4
   e. 1.99568946042003 through Highest = band 5
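Steps A to C above can be sketched as a short script. The function names are illustrative; the sample standard deviation (`statistics.stdev`) is an assumption, since the document does not specify which estimator is used; and the banding follows the listed cut-offs, with values at or below a cut-off falling into the lower band:

```python
import statistics

# Band cut-offs copied from step C of the document.
CUTOFFS = [-1.98095746718274, -1.3043517397815, 0.823981688831731, 1.99568946042002]

def z_scores(scores):
    """Standardise a candidate score distribution (step A)."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [(s - mean) / sd for s in scores]

def overall_risk_index(great8_2, great8_6, great8_7):
    """Sum the standardised UCF Great 8 competency 2, 6 and 7 scores per candidate (step B)."""
    z2, z6, z7 = z_scores(great8_2), z_scores(great8_6), z_scores(great8_7)
    return [a + b + c for a, b, c in zip(z2, z6, z7)]

def risk_band(index):
    """Map an overall risk index value to risk bands 1-5 (step C)."""
    for band, cutoff in enumerate(CUTOFFS, start=1):
        if index <= cutoff:
            return band
    return 5
```

For example, an index of 0.0 falls into band 3 (the middle band), while values beyond the highest cut-off fall into band 5.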
FIG. 77 shows an example of risk banding.
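The Z-score and banding calculation described above can be sketched as follows. This is a minimal illustration in Python; the function names are hypothetical, and the use of the population standard deviation (rather than the sample standard deviation) is an assumption, not part of the specification:

```python
import statistics

# Upper cut-off boundaries for the risk bands, from the specification;
# an index value at or below the first boundary falls in band 1, and so on.
BAND_UPPER_BOUNDS = [
    -1.98095746718274,
    -1.3043517397815,
    0.823981688831731,
    1.99568946042002,
]

def z_scores(scores):
    """Standardise a candidate score distribution: (score - mean) / SD."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population SD; sample SD is an alternative
    return [(s - mean) / sd for s in scores]

def risk_index(great8_2, great8_6, great8_7):
    """Sum the Z scores of UCF Great8 dimensions 2, 6 and 7 per candidate."""
    z2, z6, z7 = z_scores(great8_2), z_scores(great8_6), z_scores(great8_7)
    return [a + b + c for a, b, c in zip(z2, z6, z7)]

def risk_band(index_value):
    """Map a risk index value to band 1-5 using the cut-offs above."""
    for band, upper in enumerate(BAND_UPPER_BOUNDS, start=1):
        if index_value <= upper:
            return band
    return 5
```

For example, a risk index of 0 falls into band 3, and any value above 1.99568946042002 falls into band 5.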
Further Features
FIGS. 78 to 96 show various further features of the analytics system; these features may be present either singly or in combination:
Library
FIG. 78 shows an example of a library page with saved queries. A library may be available to all central platform Users. Page name and title, labels, format, style and content may vary. Design (cascading style sheets (CSS) and images) may be added. Public Benchmark Queries may be created by script (for example if the Build Benchmark page does not support Admin functionality). User Benchmark Queries are created when an Authorised User saves an open query. Content for the library page and links (probably PDFs) may be provided. This may be hard coded into the library page. Administrators may additionally see Hidden Projects.
Roller/Drop-Down Select Menu Functionality
This would provide an alternative way to select a benchmark and set a default primary data type. The selection is linked to a saved query (for example with a cross-reference table). Further options include management of content via a content management system (CMS), and grouping of Benchmark Queries into Propositions.
Process
User selects query: Build Benchmark page opens for selected query.
User deletes saved query: User prompted “Benchmark Query will be deleted. Click Continue to complete deletion, otherwise Cancel”. System deletes or abandons deletion as appropriate.
Build Benchmark
FIG. 79 shows an example of a Build Benchmark page with a selected query. A Build Benchmark page may be available to all Users. This page may only be available by opening an existing Benchmark Query. Initially, all data type selection (primary data type, filters, etc.) is for the selected query. Data Types may have icons instead of names. The ‘Update’ function may only be available to authorised users and administrators.
Data Types
On hover over, a data type selection section (see below) may open. Unless an update is made, the data type selection section closes when the cursor moves outside the section. A check box (alternatively a radio button) is used to select the primary data source (‘benchmarked by’). The style of the display may vary. The selection section may be positioned off the menu bar into the main section, and may normally be hidden. The menu bar may be displayed in a different colour in the case of a primary data type.
The primary data source is optional. If used, then only one data type can be selected as primary. If the user selects (ticks) a data type while another is already set to primary, the original is deselected and the new data type is set as primary.
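The single-primary rule above can be sketched with a hypothetical helper (the specification does not prescribe an implementation):

```python
def set_primary(data_types, new_primary):
    """Enforce that at most one data type is marked as primary:
    selecting a new primary deselects the original. Passing None
    deselects all (the primary data source is optional)."""
    return {name: (name == new_primary) for name in data_types}
```

For example, switching the primary from ‘Industry’ to ‘Geography’ deselects ‘Industry’ automatically.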
Projects
Projects may only be displayed for authorised users with assessment data access. Hover and display section as above. See below for details.
Measures
Measures may only be displayed for Admin users. Measures are used to select the measures to be used for the chart. See below for details.
Properties
Properties may only be displayed for Admin users. Properties are used to define, for example:
Chart Type (bar, radar, etc.)
Chart Content
Drill Down links
Other properties
See below for details.
Chart Area
The chart area renders a chart to represent Benchmark and Data Types Values selected. See below for details.
Update
The ‘update’ function controls update of the display. A pop-up may provide the option to rename a user query. This function does not allow users to update public or hidden benchmarks. User queries may optionally be saved to an existing group, and a new group created.
Close
The ‘close’ function closes a Benchmark without saving changes and returns to the Library page. If changes have been made, then the user may be warned, for example with a notification and buttons such as “changes will be lost, Continue or Cancel”.
Select Primary Filter
FIG. 80 shows an example illustrating a page for selection of a primary filter. Hovering a cursor over a ‘Data Type Name’ area opens a data selection section. Moving the cursor out of the section before making a change closes the section. Clicking on a primary tick box or changing any of the options on the page may fix (or pin) the section, and activate ‘OK’ and ‘Cancel’ options (e.g. buttons). Moving the cursor off the section then no longer closes the section. Optionally, moving to another data type menu (e.g. a tab) may open that section (overwriting the current section); all data changes are retained and ‘OK’ and ‘Save’ buttons remain activated.
From a drop-down selection menu, selecting a checked option selects the corresponding data type value. Assigning the same colour option to data type values adds them to the same bar in the chart. Colours correspond to keys used in the chart. Colours are displayed in a predefined order. Corresponding bars on charts are displayed in the same order. Assigning a different colour option to data type values assigns them to separate bars in the chart. A limited number of colours are available.
If the primary data type is changed then any filter options selected for the original primary data type are retained (but unless primary data type, bar colour has no effect on chart rendered).
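The colour-to-bar grouping described above might be sketched as follows; the palette, function name and field names are illustrative assumptions:

```python
# Predefined colour order (illustrative palette, not specified in the text)
COLOUR_ORDER = ["blue", "green", "orange", "purple", "grey"]

def bars_from_colour_assignment(assignments):
    """Group selected data type values into chart bars by assigned colour.

    `assignments` maps a data type value (e.g. an industry sector) to its
    colour; values sharing a colour are combined into one bar, and bars
    appear in the predefined colour order.
    """
    bars = []
    for colour in COLOUR_ORDER:
        values = [v for v, c in assignments.items() if c == colour]
        if values:
            bars.append({"colour": colour, "values": values})
    return bars
```

For example, assigning ‘Banking’ and ‘Insurance’ the same colour combines them into one bar, while ‘Retail’ with a different colour becomes a separate bar.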
On selection of the ‘OK’ option, changes are committed, the data type selection section is closed and the chart is rendered. An appropriate icon or message (e.g. “Creating chart”) may be displayed while chart is being generated.
On selection of the ‘Cancel’ option, the data type selection section is closed (exposing the current chart) without saving any changes made.
Select (Non-Primary) Filter
FIG. 81 shows an example illustrating a page for selection of a non-primary filter. The hover, display, and fix (pin) behaviour, and the ‘OK’ and ‘Cancel’ functions, are the same as for the Primary Data Type (as described above). Selecting (e.g. ticking) a data type value includes its data in the selection. Although colour selection is not displayed, all data type values are assigned to the default colour group (this only becomes relevant when the current data type is set to primary). Selecting a parent data type value (e.g. Global) includes all subordinate data type values (making selection of subordinates redundant).
When chart data is retrieved, only data corresponding to selected data type values is selected. In the example illustrated, only data with Job Level=(‘Senior Management’ or ‘Supervisor’) is included in the selection.
Select Projects
FIG. 82 shows an example illustrating a page for selecting projects or other clusters of user data. The hover, display, and fix (pin) behaviour, and the ‘OK’ and ‘Cancel’ functions, are the same as for the Primary Data Type (as described above). When this section opens, only selected projects are displayed.
Searching on Project Name and Date Range may be performed. A failure message may be displayed if the start date is after the end date.
Only projects of a minimum size (e.g. with at least 10 complete assessments) may be returned. In one version, data type filters are included in the selection. For example, if geography=‘France’ is selected, then only French test takers/projects are included. In this case, the number of test takers in projects may vary.
Only projects that have been active within a pre-defined period, e.g. the last 5 years, may be displayed. In this case, a response is defined for when a project (e.g. 4½ years old) is selected and the benchmark query saved, and the benchmark query is re-opened after the defined period has lapsed (e.g. a year later).
If Search text is entered, only projects with the text contained within the project name are returned. If a Start date is entered, only projects with a ‘last accessed’ date greater than or equal to the start date are returned. If an End date is entered, only projects with a ‘last accessed’ date less than or equal to the end date are returned. Dates other than the ‘last accessed’ date may be used.
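The search rules above (name text, date range, start-after-end failure, and the minimum project size mentioned earlier) can be sketched as follows; the field names (`last_accessed`, `complete_assessments`) and the 10-assessment minimum are illustrative assumptions:

```python
from datetime import date

def search_projects(projects, text=None, start=None, end=None, min_size=10):
    """Filter projects by name text, 'last accessed' date range and size."""
    if start and end and start > end:
        raise ValueError("Start date is after end date")  # failure message case
    results = []
    for p in projects:
        if p["complete_assessments"] < min_size:
            continue  # only projects of a minimum size are returned
        if text and text.lower() not in p["name"].lower():
            continue  # search text must appear within the project name
        if start and p["last_accessed"] < start:
            continue  # 'last accessed' on or after the start date
        if end and p["last_accessed"] > end:
            continue  # 'last accessed' on or before the end date
        results.append(p)
    return results
```

A usage example: searching for “graduates” with a start date of 1 Jan 2014 returns only sufficiently large projects whose names contain that text and that were last accessed on or after that date.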
A drop down colour picker is used to select projects and assign a colour. The functionality and order is the same as for primary data type. If the [Select All] function is activated, then the corresponding colour is assigned to all projects retrieved for the search. If the [Clear All] function is activated, then all projects already assigned to the benchmark query are deselected. This de-selection may or may not be included in current search results.
When a user clicks on search table heading, the table is ordered on the corresponding column (alternating ascending and descending order).
Select Measures
FIG. 83 shows an example illustrating a page for selecting measures. This page may only be available to Administrators.
For each measure a group (A-Z) is assigned. Assigning multiple groups to a measure sums the corresponding scores and displays them as a single value (bar). The order of the groups in charts is dictated by group name (A-Z). Management of labels and drill-down links for groups occurs via the properties section (see below). Alternatively, data may be managed using for example a SQL Server.
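The grouping above can be read as summing the scores of measures that share a group into a single chart value, ordered by group name; a minimal sketch under that reading (function and field names are illustrative):

```python
def grouped_measure_values(measure_groups, scores):
    """Sum measure scores that share a group (A-Z) into a single chart
    value (bar), with bars ordered by group name.

    `measure_groups` maps a measure name to its assigned group letter;
    `scores` maps a measure name to its score.
    """
    totals = {}
    for measure, group in measure_groups.items():
        totals[group] = totals.get(group, 0) + scores[measure]
    return [totals[g] for g in sorted(totals)]
```

For example, two measures assigned to group ‘A’ are displayed as one summed bar, followed by the group ‘B’ bar.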
Properties
FIG. 84 shows an example illustrating a page for defining properties.
For a Chart, the definition may include properties relating to:
Chart Type
Title
Content
Inherit filter from parent measures (when chart opened as a result of a drill down).
For each measure assign:
Drop Down Links (Name & ID, may be more than one).
Titles
Content
All or parts of this data may be held on the benchmark database.
Build Benchmark
This option is only available to Premium Client Users and Administrators.
FIG. 85 shows an example of a build benchmark page. The function ‘Go’ should be disabled unless all options are selected. An option is selected from each section. Only options with corresponding Benchmark Template rows are available for selection. So as a user selects options, other options (in other sections) may disappear.
To restore to the initial state (with all options available), the user can click on [Clear].
FIG. 86 shows selection of the option ‘Recruitment Process’ from the section ‘I want to understand’.
FIG. 87 shows further selection of the option ‘strengths’ from the section ‘By looking at’ and the option ‘industry sector’ from the section ‘Benchmarked by’.
In some cases (where there are benchmark variants), clicking an option from every section may not result in a single benchmark being selected. This may result in a fourth section or pop-up selection to choose the variant.
Once the user has selected an option from each section (identifying a single Benchmark Template), they click [Go]. The selection is then saved to the database as a Saved Query. Chart parameters (XML) are generated (from the Saved Query just created, not values on screen). Relevant content (links, text, images, etc.) may also be used to create an intermediate XML. The Chart and Content XML are saved to the database (linked to the Saved Query). The Chart display is rendered from the Chart and Content XML and displayed in the chart section of the page (iframe). Once the XML is saved (cached), it can be reused with minimal database access. The cache may be cleared every time related data (benchmarks or metadata) is updated.
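The caching behaviour described above could be sketched as follows. This is a minimal in-memory sketch; the actual system stores the generated XML in the database, and the class and parameter names are hypothetical:

```python
class ChartXmlCache:
    """Chart/content XML is generated once per saved query and reused
    until related data (benchmarks or metadata) is updated, which
    clears the cache."""

    def __init__(self, generate_xml):
        # `generate_xml` is assumed to be a callable that builds chart
        # XML from a saved query ID (an expensive database operation)
        self._generate_xml = generate_xml
        self._cache = {}

    def chart_xml(self, saved_query_id):
        # Reuse cached XML where possible, with minimal database access
        if saved_query_id not in self._cache:
            self._cache[saved_query_id] = self._generate_xml(saved_query_id)
        return self._cache[saved_query_id]

    def clear(self):
        """Call whenever benchmarks or metadata are updated."""
        self._cache.clear()
```

Repeated requests for the same saved query then hit the cache; an update to the underlying data invalidates it and the next request regenerates the XML.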
FIG. 88 shows a display with the chart. An option to change the chart type is provided within the iframe. Options to filter and add projects are provided outside iframe (features to update Saved Query).
FIG. 89 shows a display with a dialog box that is opened upon clicking on the [Filter] button in FIG. 88, and selecting ‘Industry Sector’ from the available filters.
Preferably the display uses checkboxes instead of radio buttons, so that a selection can be unchecked. Preferably, addition of a category to multiple bars is prohibited. The [Save] label on the command button may alternatively be labelled as [Submit]. Assignment to bars may be made more usable to ensure it is easy to see what is going on. As ‘Industry sector’ matches a Template Data Type, the user is allowed to select a bar (in FIG. 89: 1, 2, or 3). The button ‘Select All’ selects all values not already selected and places them in bar 1. An option ‘Clear All’ (not shown in FIG. 89) deselects all values. Having no options selected results in no filter for this data type (corresponding to all data). Selecting (ticking) all will also result in all data being retrieved for this data type, but won't include data for any new category (e.g. Construction) added in the future.
FIG. 90 shows a display with a dialog box that is opened upon clicking on the [Filter] button in FIG. 88, and selecting ‘geography’ from the available filters. As Geography does not match the Template Data Type (which is Industry sector), the user has no option to select bar(s). This selection is used to restrict data that is available to the chart. An option ‘Clear All’ (not shown in FIG. 90) deselects all values. If [Cancel] is clicked, no changes are applied to the saved Query.
FIG. 91 shows how if [Save] is clicked changes are written to the database, cache is cleared and the chart regenerated.
FIG. 92 shows a display with a dialog box that is opened upon clicking on the [Add Projects] button. The [Select All] and [Cancel] buttons perform operations analogous to those described for the filter dialogue boxes. The assignment of projects to bars is analogous to the filter dialogue boxes. Search options may be provided to search for projects. A term other than ‘project’ may be used. More project data (for instance the project date) may be shown. Only projects that match a filter (the filter having been fixed and selected) may be shown. For example, a fixed filter may limit results to Assessment_A and UK. A selected filter may limit the results to Marketing. In this case only projects with results for Marketing, Assessment_A and UK are shown.
FIG. 93 shows how clicking on the [Save] button saves changes to the database and regenerates the chart.
In the example described here projects are not split between sectors (e.g. Marketing within Project set 1, and Finance within Project set 1). However, both projects are filtered on (limited to) Marketing and Finance. For single projects, the project name could be displayed in the key. For multiple projects, a mouse over, hover could list project names. When multiple categories are added to a bar, names may be shown comma separated with mouse over, hover box for long strings. The filters used may be shown within the chart area.
The dialogue box that is opened upon clicking the [Save] button may include something to allow Save As Name. Access Option may be used to define who may access the saved benchmark, for example: User, Group, or Common. An option may be provided to admin users only that allows assigning a saved Query to Sections. An option may be provided to admin users only that allows assigning a saved Query to Propositions.
FIG. 94 shows how in a displayed chart, clicking on a single bar (or an equivalent area on an alternative chart) may open a pop up with corresponding content and option to drill down on allowed data types.
My Saved Benchmarks
This function is available to authorised users, but not to unauthorised users. Unauthorised users can only see public benchmarks and not save any views.
FIG. 95 shows an example of a My Saved Benchmarks page. The term “My Saved Benchmarks” may be replaced with a different name. Benchmarks may be split between general Benchmarks and a user's own queries. General benchmarks may be grouped into sections (with section headers). An option may be provided to filter benchmarks on Proposition. Some benchmarks may be highlighted (featured). A link to the “Latest 10” benchmarks accessed may be provided.
If a user clicks on [Edit] then a copy of the corresponding saved query is created as a draft query (a query with name=null and owned by the current user). The name of the query being edited (on the draft query) may be retained for saving back to the original. The operation then links to the Build Template tab (where the draft query can be updated).
If a user clicks on [Deactivate] then ‘active’ is set to ‘false’ on the corresponding saved query. If a user clicks [Delete] then the corresponding saved template is deleted (after displaying a warning and getting confirmation). If a user clicks on [Copy] then the user is prompted for a new name and a copy of the corresponding saved query is created. A New Name is mandatory and must be unique. Alternatively, instead of the Copy function, the Edit function may allow saving a query under a different name.
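The Copy behaviour above (a new name is mandatory and must be unique) can be sketched as follows; the in-memory structure and function name are illustrative, since the real system stores queries in a database:

```python
def copy_saved_query(queries, query_id, new_name):
    """Copy a saved query under a new, mandatory, unique name.

    `queries` maps query IDs to dicts with a 'name' key (hypothetical
    structure for illustration).
    """
    if not new_name:
        raise ValueError("A new name is mandatory")
    if any(q["name"] == new_name for q in queries.values()):
        raise ValueError("The new name must be unique")
    new_id = max(queries) + 1
    copied = dict(queries[query_id])  # shallow copy of the original query
    copied["name"] = new_name
    queries[new_id] = copied
    return new_id
```

Copying with a name that already exists, or with no name at all, is rejected; the original query is left untouched.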
If a user clicks on a saved query name, then the chart area is populated using a Build Page process, (using an iframe and populating it using a URL with parameters: Saved Query ID and Saved Query Token).
FIG. 96 shows a display with the chart area populated.
FIGS. 97 to 100 show further aspects of the analytics system.
In summary, main features of the invention may include one or more of the following:
An application (or suitable hardware) which enables clients to inquire about their status and view a series of “benchstrength” displays (essentially graphs) drawn from assessment data of various classes
The assessment data is converted to a number of proprietary metrics, for example: People Capital, Pipeline Index and the Risk Index
These metrics allow clients to answer a number of questions related to “talent acquisition” and “talent mobility” (examples of which are described above)
The client can access insights from the application in two ways:
    the first, more general, way is via a “My Talent Strategy” view which enables them to explore the benchmark data primarily in terms of industry sector and within that by geography and business function using a number of filters;
    the second is by loading their own data to enable them to benchmark themselves, with their data being aggregated (i.e. not giving access to an individual)
The client can run filters (industry sector, geography and business function) against their data, which is organised inside the application in terms of projects defined by the user
They are able to save and export (e.g. print or save a soft copy) the analytics they have performed
The application is available online
It will be understood that the present invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention. For example, rather than being used in the context of an organisation, the present invention could be used in the context of industrial devices. In an example, the performance of a device is measured and supplemented with metadata that may further specify the device; all performance measurements and metadata are pooled, and from this pool groups can be retrieved. The performance data of devices that have a particular characteristic in common (such as size, make, version, etc) can be compared—as a group—to a group of performance measurements a user has undertaken.
Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.