IETF 113 post-meeting survey
Jay Daley, IETF Executive Director
2 May 2022
The results of the IETF 113 post-meeting survey are now available on a web-based interactive dashboard. As always, we are very grateful for the detailed feedback that we have received and will continue to process over the next few months. The commentary below highlights where changes we have made based on feedback have been a success, and areas we still need to work on.
In total, 166 responses were received. Of those, 164 were from people who participated in IETF 113, out of a population of 1273, giving a margin of error of +/- 7.15%. The total number of meeting participants was higher than for IETF 112 (1175 for IETF 112, 1329 for IETF 111, 1196 for IETF 110), while the number of survey responses is at the same level as for IETF 112 (157) and IETF 111 (166), both of which were significantly down on previous meetings (299 for IETF 110, 258 for IETF 109).
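The quoted margin of error is consistent with the standard formula for a proportion at 95% confidence with a finite population correction. As a quick illustration (the z-value of 1.96, the worst-case proportion p = 0.5 and the finite population correction are standard assumptions, not details stated in the survey methodology):

```python
import math

def margin_of_error(n, population, z=1.96, p=0.5):
    """95% margin of error for a sample of n drawn from a finite population.

    Uses the worst-case proportion p = 0.5 and the finite population
    correction sqrt((N - n) / (N - 1)).
    """
    se = math.sqrt(p * (1 - p) / n)                      # standard error of a proportion
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * se * fpc

# 164 survey respondents out of 1273 IETF 113 participants
moe = margin_of_error(164, 1273)
print(f"+/- {moe * 100:.2f}%")  # matches the +/- 7.15% quoted above
```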
The results for satisfaction questions include a mean and standard deviation using a five-point scale scoring system of Very satisfied = 5, Satisfied = 4, Neither satisfied nor dissatisfied = 3, Dissatisfied = 2, Very dissatisfied = 1. While there's no hard and fast rule, a mean above 4.50 is sometimes considered excellent, 4.00 to 4.49 good, 3.50 to 3.99 acceptable, and below 3.50 poor, or very poor if below 3.00. The satisfaction score tables also include a top box (the total of satisfied and very satisfied) and a bottom box (the total of dissatisfied and very dissatisfied), both in percentages.
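How these summary figures are derived from raw responses can be sketched as follows (the response counts in the example are invented purely for illustration, not figures from the survey):

```python
# Five-point scale used throughout the survey results
SCORES = {"Very satisfied": 5, "Satisfied": 4,
          "Neither satisfied nor dissatisfied": 3,
          "Dissatisfied": 2, "Very dissatisfied": 1}

def summarise(counts):
    """Return (mean, top box %, bottom box %) for one satisfaction question."""
    n = sum(counts.values())
    mean = sum(SCORES[label] * c for label, c in counts.items()) / n
    top = 100 * (counts.get("Very satisfied", 0) + counts.get("Satisfied", 0)) / n
    bottom = 100 * (counts.get("Very dissatisfied", 0) + counts.get("Dissatisfied", 0)) / n
    return mean, top, bottom

# Invented example distribution for 166 responses
example = {"Very satisfied": 60, "Satisfied": 80,
           "Neither satisfied nor dissatisfied": 15,
           "Dissatisfied": 8, "Very dissatisfied": 3}
mean, top, bottom = summarise(example)
print(f"mean {mean:.2f}, top box {top:.1f}%, bottom box {bottom:.1f}%")
```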
In this commentary a comparison is made with the IETF 112 Meeting Survey results using a comparison of means that assumes the two samples are independent, even though they are not (nor are they fully dependent). A dependent-means calculation may give a different result. Some comparisons may be made using a comparison of proportions.
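A comparison of means of this kind can be carried out as a two-sample test on summary statistics. A minimal sketch using a normal approximation follows; the means are those reported below, but the standard deviations and sample sizes are hypothetical placeholders, not figures from the survey:

```python
import math

def two_sample_z(m1, s1, n1, m2, s2, n2):
    """Two-sided z-test for a difference of means from summary statistics,
    treating the two samples as independent (the same assumption made above)."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)   # standard error of the difference
    z = (m1 - m2) / se
    # two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical inputs: means as reported, SDs and sample sizes invented
z, p = two_sample_z(4.36, 0.80, 164, 4.15, 0.80, 157)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these placeholder inputs the difference would come out significant at the 5% level; the actual significance depends on the real standard deviations in the dashboard data.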
The mean satisfaction score for IETF 113 (Q10) was 4.36 with 90.85% either ‘Satisfied’ or ‘Very satisfied’. This is a statistically significant increase from the 4.15 mean satisfaction score for IETF 112.
The geographic spread of participants (Q1) was similar to IETF 112 and most of our previous meetings. More newcomers participated in IETF 113 (Q4). For this survey we added a new question about gender, and the results are roughly in line with those of our 2021 community survey. They are also a reminder of the exceptionally poor gender balance within the IETF, as 90% +/-7% (83%-97%) of IETF 113 participants were men.
The questions on preparing for the meeting have now been reduced to just three, with no questions about the specific resources provided for preparation. Overall satisfaction with everything we provide was good at 4.46 (Q6) for IETF 113, and preparedness (Q8) was at 3.18 (this is a four-point scale question and the score does not fit the scale above), about the same as the 3.19 score for IETF 112.
We added a question (Q52), only shown to newcomers (those who have participated in fewer than 6 meetings), about the sessions and materials provided for them. The standout positive answer was for onsite quick connections, with an excellent satisfaction score of 4.50, while the noticeably poor results were for the interactive sessions for newcomers.
Satisfaction with the agenda
Overall satisfaction with the IETF 113 agenda was good at 4.16 (Q12), slightly but not significantly higher than IETF 112 at 4.11 (Q12). Looking at the individual parts of the agenda (Q11), some of the satisfaction scores showed statistically significant improvements, presumably reflecting the benefits of meeting in person. In particular, the satisfaction score for opportunities for social interaction was 3.51, up from 2.79, though that's still only just in the acceptable range. Satisfaction with HotRFC was up to 4.17 from 3.54, while for the IETF 113 plenary it was 3.94, similar to the 3.96 for IETF 112, when the plenary was held in the week before the meeting.
Satisfaction with the structure of the meeting
Overall satisfaction with the structure of the IETF 113 meeting (Q14) was good at 4.26, broadly in line with 4.23 for IETF 112. Looking at the individual parts (Q13), there was only one statistically significant increase in satisfaction: the starting time of the meeting, with a good score of 4.12, up from an acceptable 3.95. The 60/120 minute sessions, 30/60 minute breaks, length of the day and 5+2 day meeting are all in the good range (4.31, 4.16, 4.20 and 4.23 respectively).
36% of participants experienced no session conflicts (Q18) and 19% just one conflict, both very close to the IETF 112 results. Satisfaction with conflict avoidance was down to 3.89 (Q20) from 4.00 for IETF 112, though the drop is not statistically significant. This confirms that moving the plenary for the previous meeting did not materially improve conflicts; rather, the long-term improvement is due to better scheduling.
Satisfaction with Meetecho remains good at 4.36, while satisfaction with the audio streams and YouTube dropped back after the jump for IETF 112 (Q22) (4.14 and 4.25 respectively, down from 4.41 for both at IETF 112). Satisfaction with Gather fell even further to a poor 3.04, from 3.40. Jabber remains in the acceptable range at 3.80.
We added a new question on satisfaction with hybrid participation facilities (Q24b) and the integration queue using Meetecho was good at 4.45 and the provision for remote chairs was also good at 4.34. However the mobile "lite" version of Meetecho only had a satisfaction score of 3.44, which is poor. The next version of this tool will include a number of new features identified by participants in the comments.
We added a new question on COVID management (Q24a) with the availability of COVID testing and communication regarding COVID both rated as excellent (4.62 and 4.59 respectively). Compliance with mask wearing and social distancing both rated as good (4.44 and 4.40 respectively). Given the complexities of this, this result is probably the best we could expect.
Satisfaction with our response to problem reporting dropped to 4.25 from 4.43 for IETF 112 (Q25) though the number of people responding to this question remains small and the drop is not statistically significant.
High satisfaction with the meeting and the decision to meet in person was reflected in the positive comments; however, a number of remote participants felt that the in-person format had an adverse impact on them. There were multiple suggestions for improving mobile Meetecho that are now being implemented.