TalentNeuron starts by centering on a talent pool for a specific domain, such as software development (pan-industry) or insurance (industry-specific), and a particular role or set of roles (mapped to TN’s proprietary roles-and-skills taxonomy). We then apply an experience-level assumption (e.g., 3–7 years) to each role studied. From there, our web crawlers and big-data harvester models draw on the following external sources to estimate salary ranges, applying multiple layers of automated and human validation.
- Job Descriptions – About 30% of the JDs we harvest from corporate sites, aggregators, and similar sources contain base-salary information. We use these sources for our baseline calculations.
- Self-reported data – We aggregate large data sets from self-reported salary sources such as PayScale, Salary.com, and Glassdoor, websites where employees contribute their salary information in exchange for benchmarks. This is step 1 in validating our initial estimates.
- Government sites – Some government organizations (e.g., the United States Bureau of Labor Statistics) publish salary data at the job-category level. We use this for triangulation, as step 2 in estimate validation.
- Third-party sources – Published media and summary reports from consultants that offer market-pricing surveys provide step 3 in validation.
- Cost-of-living index and pay-differential data – As a final data-validation step, we triangulate against cost-of-living and pay-differential index sources to calibrate estimates more accurately.
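The cost-of-living calibration in the last step above can be sketched as scaling a national baseline by a location index. This is a minimal illustration only: the index values, location names, and the simple ratio adjustment are assumptions for the example, not TalentNeuron's actual model.

```python
# Illustrative cost-of-living (COL) calibration of a baseline salary
# estimate. Index values are hypothetical (national average = 100).
COL_INDEX = {
    "new_york": 128.0,
    "austin": 103.0,
    "national": 100.0,
}

def calibrate(base_salary: float, location: str) -> float:
    """Scale a national baseline estimate by the location's COL index."""
    factor = COL_INDEX[location] / COL_INDEX["national"]
    return round(base_salary * factor, 2)

print(calibrate(100_000, "austin"))  # 103000.0
```

A real calibration would also fold in pay-differential data per role and market, but the ratio-scaling idea is the same.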
The last step is cleaning the data of outliers, performed by our quantitative data scientists as the final (human) validation step. Through this estimation and validation process, we compute a salary range for each role we study, including the 10th percentile, median/50th percentile, and 90th percentile. Salary ranges represent the average for a role within a given location. Note that TN data are not a replacement for Market
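The final two steps, trimming outliers and computing the 10th/50th/90th percentile range, can be sketched as follows. The trimming rule used here (a 1.5× interquartile-range fence) is an assumed stand-in, since the text does not specify how TN's data scientists identify outliers.

```python
# Sketch: outlier trimming followed by percentile-range computation.
# The IQR fence is an illustrative assumption, not TN's documented method.
import statistics

def trim_outliers(salaries):
    """Drop values outside 1.5x the interquartile range (IQR fence)."""
    q1, _, q3 = statistics.quantiles(salaries, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [s for s in salaries if lo <= s <= hi]

def salary_range(salaries):
    """Return the (10th, 50th, 90th) percentile salary estimates."""
    clean = sorted(trim_outliers(salaries))
    deciles = statistics.quantiles(clean, n=10)  # 9 cut points: 10th..90th
    return deciles[0], statistics.median(clean), deciles[8]
```

For example, with observations `[90, 95, 100, 105, 110, 1000]` (in thousands), the 1000 value falls outside the fence and is dropped before the percentiles are computed.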