IETF 108 Meeting Survey
- Jay Daley, IETF Executive Director
13 Aug 2020
The full results of our post-meeting survey for IETF 108 are now available for you to peruse as a 116-page PDF. The commentary below pulls out some key highlights from the survey and identifies some areas of work if we are to hold another fully online IETF meeting. There is so much feedback that it’s not possible to recognise every individual suggestion, but all of it will be considered and we are very grateful for such detailed feedback.
This commentary also references the IETF 108 meeting planning survey and the IETF 106 post-meeting survey.
This survey received 382 responses, 369 of them from people who participated in IETF 108, which is excellent for a meeting with approximately 1100 participants. This gives us a worst-case margin of error of just over +/- 4% for the answers on participation. By contrast, the survey for IETF 106 had 201 responses, giving a margin of error of over +/- 6%.
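For those curious where those figures come from, they can be reproduced with the standard margin-of-error formula for a proportion at 95% confidence, applying a finite population correction for a meeting of roughly 1100 participants. A minimal sketch (p=0.5 is the worst case):

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a survey proportion,
    with a finite population correction."""
    se = math.sqrt(p * (1 - p) / n)                       # standard error of a proportion
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * se * fpc

print(round(100 * margin_of_error(369, 1100), 1))  # IETF 108: ~4.2%
print(round(100 * margin_of_error(201, 1100), 1))  # IETF 106: ~6.3%
```

The finite population correction matters here: with 369 responses from only ~1100 participants, the uncorrected figure of ~5.1% shrinks to just over 4%.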
The geographic spread of participants was similar to what we get for a European in-person meeting, with 44% from the US and Canada, 37% from Europe, 12% from Asia and the remaining 5%/6% from the other regions (Q1).
Participants tend to be very engaged in the IETF, with 86% having attended a WG/BoF in the last year, 68% having spoken at a WG/BoF in the last year and 60% being authors of an active I-D (Q2). 54% have participated in 11+ IETF meetings and a further 16% in 6-10 meetings (Q4).
Of the 13 people who answered but did not participate, the reasons for not participating varied, with the time of day and conflicts each receiving 4 responses (Q5).
81% were either satisfied or very satisfied with the meeting and only 4% either dissatisfied or very dissatisfied (Q6). This compares with 73% for IETF 107 (we did not ask that question for IETF 106 or previous in-person meetings). However, 65% felt it was less productive or much less productive than an in-person meeting (Q7).
93% considered themselves well prepared for this meeting (Q10), and the 137 people (37% of the total) who participated in the testing sessions were generally pleased, with 81% either satisfied or very satisfied (Q12). Similarly, the guides were read by 256 people (69% of the total), with 86% either satisfied or very satisfied (Q15).
Some important feedback received here (Q10, Q11, Q13, Q14) is the need for more testing sessions, held earlier before the meeting, and for the guides to be shorter and more accessible during the meeting (Q16).
Satisfaction with the agenda
Unfortunately, the responses to the question about specific parts of the agenda (Q18) are unreliable, as it looks like a number of people answered “neither satisfied nor dissatisfied” for parts of the agenda they did not attend rather than leaving that line blank. This is doubly unfortunate because we asked about the importance of these parts of the agenda in the meeting planning survey, and had these figures been reliable we could have carried out an importance-satisfaction gap analysis. Next time we will add an “N/A” column to fix that.
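For reference, the gap analysis we had in mind is simple: for each agenda item, subtract its mean satisfaction score from its mean importance score, and prioritise items with the largest positive gaps. A minimal sketch, with all scores invented purely for illustration:

```python
# Hypothetical mean importance/satisfaction scores on a 1-5 scale.
# These numbers are invented purely to illustrate the method.
scores = {
    "WG sessions":        {"importance": 4.6, "satisfaction": 4.2},
    "Side meetings":      {"importance": 4.1, "satisfaction": 2.9},
    "Social interaction": {"importance": 3.8, "satisfaction": 2.7},
}

# Gap = importance minus satisfaction; the largest positive gaps
# mark the areas most worth improving.
gaps = {name: s["importance"] - s["satisfaction"] for name, s in scores.items()}
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: gap {gap:+.1f}")
```

An item that is both important and unsatisfying (a large positive gap) is a stronger candidate for attention than one with low satisfaction but also low importance.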
If we exclude the “neither satisfied nor dissatisfied” answers and look solely at the difference between satisfaction and dissatisfaction, then two parts of the agenda stand out with low satisfaction: Side Meetings and Social Interaction.
The follow-up open ended question (Q19) has a wealth of detail and some key points from that are:
- Lots of views on the importance of social interaction and the viability of gather.town as a substitute. More on this in the section below specifically about gather.town.
- A number of people feel that an online meeting will never be a substitute for an in-person meeting, which is partially supported by the answer around productivity (Q7).
- Multiple views on the lack of support for side meetings and their importance for the meeting agenda.
Satisfaction with the agenda overall was at 78% (Q20).
Satisfaction with the meeting structure
We asked about the structure of the meeting. The 5-day meeting length was well received, with only 3% dissatisfied or very dissatisfied, as was the overall length of the day, with only 5% dissatisfied or very dissatisfied. The 50/100 minute session lengths, 20 minute breaks and 8 parallel tracks all had total dissatisfaction in the region of 10%. The biggest issue was the Madrid time zone, which had a total dissatisfaction response of 14% (Q21).
Overall, 83% were either satisfied or very satisfied with the structure of IETF 108 and 4% either dissatisfied or very dissatisfied (Q22).
We asked for open ended comments about agenda and structure (Q24) and a number of key points emerged:
- Several people complained about how difficult the timezone was for them, but a number also sensibly understood that there will always be a group of participants for whom this is a problem.
- There were a number of comments about the session lengths being too short, which could be down to insufficient time being requested or other reasons, as explored in a section below.
- There were multiple suggestions about alternative structures, but as explained in an earlier blog post this structure was based on the most preferred options from the planning survey.
- Suggestions that we make some of the breaks a bit longer.
- Lots of support for the current structure.
The takeaway from this is that some improvement may be possible if we have another online meeting, but the question of where to make a change is not clear.
Finally in this section, we asked about the structure of IETF 108 (one full meeting) compared to that of IETF 107 (a slim meeting followed by multiple interims in the weeks after), and the structure of IETF 108 was preferred by 82% (Q23). We deliberately did not ask about other meeting structures - such as a two-week meeting or a new asynchronous form of meeting - as we only wanted answers based on actual experience rather than theory, and we have only had two online meetings to date.
Sessions and conflicts
The majority of participants, 72%, participated in 2-10 sessions, with 22% participating in 11+ (Q25). Unfortunately, we don’t have any comparable data from in-person meetings to see how that differs.
60% reported that they experienced agenda conflicts (Q26) which is almost identical to the percentage reported for IETF 106 at 59%, though it appears that each person with conflicts had more conflicts during IETF 108 (Q30) than IETF 106 (questions are not directly comparable). We have a list of those sessions and considerable work has now been completed on the agenda tool to try and reduce this problem.
58% reported that sessions they were in ran out of time (Q27). The top answers for the reasons why (Q33, which allowed multiple answers, so the percentages do not add up to 100%) were: the WG chairs not managing time appropriately at 35%, the session not being allocated enough time at 34%, people speaking at the mic for too long at 26%, and technical problems at 24%.
The open ended answers to that question have a lot to say about how presentations use more time than is necessary and how difficult it is to assess in advance how much time will be needed. There were also a number of comments about how the discussions were very lively and/or detailed (in a positive way) and how that used up the time.
Reporting problems
This was a new set of questions aimed at helping us understand how we can improve our various helpdesk services. 19% of people told us they reported a problem (Q29), and they used a wide variety of channels, including directly reaching out to the Meetecho team (and thereby bypassing the ticketing system used to track requests). 80% were either satisfied or very satisfied and 8% either dissatisfied or very dissatisfied (Q35), showing that work is needed to reduce the level of dissatisfaction. The suggestions on how to do that focused mainly on the complexity of the various ways to report a problem and the lack of clarity on how to report a Meetecho problem. This suggests the need for a major simplification of how people report problems.
Satisfaction with Meetecho
56% were satisfied with Meetecho and 27% very satisfied, for a total satisfaction of 84%, with only 1% very dissatisfied and 6% dissatisfied (Q37). This is much better than might be expected given the number of comments in the survey about Meetecho problems.
Looking at the individual features those with the greatest total dissatisfaction were the virtual hum tool at 27% and closing the session 5 minutes after time at 26%, followed by the overall user interface at 15% and the integrated notes at 15% (Q39). These concerns are mirrored in the 191 open ended comments (Q40), including many suggestions on how to improve the experience. The Meetecho team will be considering this alongside the extensive feedback they received during the meeting.
gather.town
gather.town was used by 45% of respondents (Q28). As the number of simultaneous users never exceeded 80, this suggests that many people tried it for only a short period of time. 53% were either satisfied or very satisfied, which is a very low rating, and while the technology worked well, the various elements of user interaction each had dissatisfaction of around 15% (Q44).
The open ended responses on how to improve gather.town focused primarily on how we implemented it rather than the technology, including what time we made available to use it. There were also a number of people who felt that the interface, and in some cases the whole concept, were simply inappropriate no matter how we customised it.
gather.town could possibly turn into a very useful tool for restoring some of the lost social interactions, but significant change is needed if we want to run this experiment again.
The last open ended question (Q48) gave a chance for any further feedback and many people took the time to thank the organisers and commend the overall execution of the meeting, which was very thoughtful. Many also noted their view that an online meeting can never replace an in-person meeting.
A brief note about our survey tool
We are not that happy with our current survey tool, as it has a number of limitations and missing features, and so we hope at some point to switch to a new tool to improve both the user experience of the survey and the quality of the data we get out of it. Until then, apologies for the apparent crudeness of some of the results.
Thanks to the sponsors!
Finally, a big thanks to the sponsors that made this meeting possible:
- Ericsson, our Meeting Host
- Akamai, our Silver Sponsor
- ICANN, our Hackathon Sponsor
- Google, Fastly and Futurewei as our Fee Waiver Sponsors
- Cisco and Juniper as our Equipment Sponsors
- The IPv6 Company and Colt as our Local Sponsors
- and our other Global Hosts: Cisco, Comcast, Huawei, Juniper, NBC Universal, Nokia