Our default engagement survey is designed to complement our career drivers. The factors and questions are aligned to the categories and main concepts underlying the drivers.
USERS
Engage only knows about the users who were part of the selected group at the time the survey starts, and those users' direct managers. When a survey officially starts, it finds all of the users who are part of the group and looks up each user's current direct manager. That information is stored with the survey and drives all of its reporting.
This all applies to a single instance of the survey. If the survey is scheduled to recur, the members are re-queried at the start of each run.
Here are some examples where this is relevant:
- If a user changes managers during the Engage survey, their results will still be grouped under their old manager. The same is true if any of their custom fields changed values during that time (e.g. a change of department or location). Everything is a snapshot in time taken at the start of the survey.
- Engage surveys are intended to be sent to an entire company or department. Sending them to an arbitrary subset can cause unexpected behavior. If a particular manager is not included in the survey, but their direct reports and upstream managers are, then those upstream managers will be unable to view results from that manager's team.
- EXAMPLE: Say "A" is a manager, "B" is one of A's direct reports, and "C" is A's manager. If A is included in the survey, it records that B reports to A and that A reports to C. When C tries to view the results, Engage follows these references to see who is downstream of them (C → A → B), so C can see A's team's results. If A is not included, the survey only records that B reports to A; it never learns that A reports to C. That leaves a break in the chain, with nothing connecting B to C, which is why C cannot see the results (see the sketch after this list).
- Similarly, if only a team, department, or subset of employees was included in a survey, then higher-level managers cannot view the results. Again, the survey only captures people's direct managers, so the manager of the highest-level person in the survey is the highest person who can see the results. Say director “D” runs a department and the survey was sent to D's whole department, including D. The survey will record that D's manager is “E”, so E will be able to see the results, but anyone upstream of E will not.
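To make this concrete, here is a minimal sketch in Python of how a snapshot like this could drive results visibility. The record layout and function name are hypothetical illustrations, not Engage's actual implementation; the point is only that visibility is derived from the manager references captured when the survey started.

```python
# Hypothetical sketch of the snapshot taken when a survey starts: each
# participant record stores the direct manager captured at launch time.
survey_snapshot = {
    # participant: their direct manager at the moment the survey started
    "B": "A",    # B was included, so "B reports to A" is recorded
    # "A": "C",  # A was NOT included, so this link was never recorded
}

def visible_reports(viewer: str, snapshot: dict[str, str]) -> set[str]:
    """Return every participant downstream of `viewer`, following only
    the manager references stored in the survey snapshot."""
    downstream: set[str] = set()
    frontier = {viewer}
    while frontier:
        # participants whose recorded manager is in the current frontier
        next_level = {person for person, manager in snapshot.items()
                      if manager in frontier}
        downstream |= next_level
        frontier = next_level
    return downstream

# With A excluded, the chain C -> A -> B is broken: C sees nothing.
print(visible_reports("C", survey_snapshot))          # set()

# If A had been included, the extra link restores the chain.
print(visible_reports("C", {"B": "A", "A": "C"}))     # {'A', 'B'} (order may vary)
```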
DESIGN
Factors are essentially a grouping of questions. It’s intended that you bring together several questions that get at different angles of the same concept and group them under a factor (named after that concept).
Our default Engage survey is set up in this way. This is useful if you want to change the exact questions over time but still be able to compare results to previous surveys: you can keep the factors the same and change the questions underneath them.
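As a quick illustration of that idea, the sketch below uses a hypothetical structure (not Engage's data model) to show how factor-level scores remain comparable across survey runs even when the underlying questions change.

```python
# Hypothetical illustration: factor-level scores stay comparable across
# survey runs even when the exact questions under a factor change.
run_2023 = {
    "Growth": {
        "I have opportunities to learn new skills": 4.1,
        "My manager supports my development": 3.8,
    },
}
run_2024 = {
    "Growth": {
        "I can see a path to grow my career here": 3.9,  # reworded question
        "My manager supports my development": 4.1,
    },
}

def factor_score(run: dict, factor: str) -> float:
    """Average the question scores grouped under one factor."""
    scores = run[factor].values()
    return sum(scores) / len(scores)

# The questions differ, but the "Growth" factor is still comparable year over year.
print(round(factor_score(run_2023, "Growth"), 2))  # 3.95
print(round(factor_score(run_2024, "Growth"), 2))  # 4.0
```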
Important things to consider in regard to factors:
- Engagement admins can see results by both factor AND by individual question, but managers only see factors on their dashboard. To get around this, we’ve seen some customers do a 1:1 ratio of factors to questions, which kind of negates the value of factors and also makes for an unwieldy manager results graph.
- There is a powerful feature in Engage that will show the correlations between factors. This is very useful if your engagement survey (like our default one) has some factors that are essentially “outputs”.
- EXAMPLE: Consider an eNPS question like “Would you recommend working at this company to a friend?” or a question about retention like “Are you looking for other opportunities?” These are not the same as the actual outcome (whether individuals really leave the company), but they can stand in for that output metric. You can then see statistically which other factors are most strongly correlated with retention or recommendation, as in the sketch after this list.
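To illustrate the kind of analysis this enables, here is a small, hedged example: the data and the use of a plain Pearson correlation are stand-ins, not Engage's actual method, but they show how driver factors can be compared against an output factor such as retention.

```python
# Hypothetical example: correlate an "output" factor (retention intent)
# with driver factors across respondents using Pearson correlation.
from statistics import correlation  # Python 3.10+

factor_scores = {
    "Retention":   [4, 2, 5, 3, 4, 1, 5, 3],
    "Recognition": [4, 1, 5, 3, 4, 2, 5, 2],
    "Workload":    [2, 4, 3, 5, 2, 3, 1, 4],
}

output = factor_scores["Retention"]
for name, scores in factor_scores.items():
    if name == "Retention":
        continue
    r = correlation(output, scores)  # Pearson's r between the two factors
    print(f"Retention vs {name}: r = {r:.2f}")
```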
The anonymity threshold in Engage is five.
This means that if any scope of users would include responses from fewer than five distinct users, Engage won't show the results. For example, if a manager has five direct reports and not all of them respond, they won't be able to see any results for the survey. Likewise, if a higher-level manager filters down to a manager or department that has fewer than five respondents, they won't see data for that view.
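A minimal sketch of that rule, using a hypothetical helper rather than Engage's real code:

```python
ANONYMITY_THRESHOLD = 5  # minimum number of distinct respondents in any view

def can_show_results(respondent_ids: set[str]) -> bool:
    """Hypothetical check: a view only shows results when it covers
    at least five distinct respondents."""
    return len(respondent_ids) >= ANONYMITY_THRESHOLD

# A manager with five reports where only four responded sees nothing.
print(can_show_results({"u1", "u2", "u3", "u4"}))        # False
print(can_show_results({"u1", "u2", "u3", "u4", "u5"}))  # True
```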
CONFIGURABILITY
There is the option to run both recurring and non-recurring surveys.
A live survey, meaning one that is either non-recurring or the active distribution of a recurring setup, cannot be edited. Once the current distribution starts, it is snapshotted and can't be modified; this includes questions, dates, participants, and manager relationships.
It is possible to edit the recurring survey template. Note that changes will not be reflected in the active distribution, but they will take effect in the next scheduled recurrence.
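The copy semantics can be pictured with a short, hypothetical sketch (names and structure are illustrative only):

```python
import copy

# Hypothetical illustration: a live distribution is an independent snapshot
# of the recurring template, taken at the moment it starts.
template = {"questions": ["I feel valued at work"], "audience": "All employees"}
live_distribution = copy.deepcopy(template)  # snapshot at launch

# Editing the template does not touch the distribution already in flight...
template["questions"].append("I have the tools I need to do my job")
print(live_distribution["questions"])  # ['I feel valued at work']

# ...but the next scheduled recurrence is created from the updated template.
next_distribution = copy.deepcopy(template)
print(next_distribution["questions"])  # includes the new question
```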
DATA
Only Likert questions are counted towards the scores and aggregated results. Other types of questions (multiple choice, free text) can be viewed in the “Questions” tab, but they cannot be quantified to count towards the summed results.
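As a rough sketch of that distinction (hypothetical field names, not Engage's schema):

```python
# Hypothetical sketch: only Likert responses feed the aggregated scores;
# multiple choice and free text answers are viewable but not quantified.
responses = [
    {"question": "I feel valued at work", "type": "likert", "value": 4},
    {"question": "I feel valued at work", "type": "likert", "value": 5},
    {"question": "What would you change?", "type": "free_text",
     "value": "More flexible hours"},
    {"question": "Preferred work mode", "type": "multiple_choice",
     "value": "Hybrid"},
]

likert_values = [r["value"] for r in responses if r["type"] == "likert"]
print(sum(likert_values) / len(likert_values))  # 4.5 (only the Likert answers count)
```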
The “by group” heatmap in the Engage Admin level reports is not subject to the anonymity threshold. Engage groups unique values within custom fields to produce each group (much like a Pivot Table in Excel/Google Sheets). This is unrelated to the Learn groups and smart groups.
If a value is in use by fewer than five users, Engage will still break it out and display the results. For example, if you have a custom field for location, Engage will "group" all of the identical location values. Note that if a custom field has more than 100 unique values, it will not be included in the survey results, since it is not practical for this type of reporting (e.g. a high-level people manager could not see all of those results).
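Here is a hedged sketch of that grouping behavior, assuming a hypothetical per-user score and field layout (the 100-value cutoff mirrors the rule above):

```python
from collections import defaultdict

# Hypothetical sketch of the "by group" heatmap: average scores per unique
# custom-field value (like a pivot table), skipping fields with too many values.
MAX_GROUP_VALUES = 100

users = [
    {"id": "u1", "location": "Austin", "score": 4.2},
    {"id": "u2", "location": "Austin", "score": 3.9},
    {"id": "u3", "location": "Berlin", "score": 4.5},
    {"id": "u4", "location": "Berlin", "score": 4.1},
    {"id": "u5", "location": "Remote", "score": 3.7},  # fewer than five users,
]                                                      # but still broken out

def group_averages(users: list[dict], field: str) -> dict | None:
    """Average scores per unique value of `field`; None if the field has
    more unique values than is practical to report on."""
    if len({u[field] for u in users}) > MAX_GROUP_VALUES:
        return None  # field excluded from group reporting
    groups = defaultdict(list)
    for u in users:
        groups[u[field]].append(u["score"])
    return {value: round(sum(s) / len(s), 2) for value, s in groups.items()}

print(group_averages(users, "location"))
# {'Austin': 4.05, 'Berlin': 4.3, 'Remote': 3.7}
```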
There is no way to export Engage reports at this time.