Game Production Community Podcast
Playtesting Masterclass with Steve Bromley
In this episode, Juney hosted game industry veteran, games user research consultant, author of the “How to be a Games User Researcher” book and most recently creator of “The Playtest Kit”, Steve Bromley!
During this session, Steve offered solutions to many of the barriers to regular playtesting, such as “Where can I find playtesters?”, “When should I run a playtest?” and “How should I handle player feedback?”. There was a live Q&A after the event.
Slides can be found here: https://gamesuserresearch.com/gpc/
Transcript can be found here: https://www.buzzsprout.com/1055908/10930310
Please share with peers that could benefit from these expert insights & experiences!
Organised with the amazing support of Angelo Diktopoulos & our Supporters ♡
We would love feedback: Reach out to us on Discord! For more articles and interviews with producers & others in the games industry, check out our website.
ABOUT THE GAME PRODUCTION COMMUNITY
Our Vision is to enable games industry professionals to plan better, lead better and stop the crunch culture, through building a community of support, knowledge and training.
“Level up together with our inclusive, growth minded community as we share game production knowledge & experiences, advocate team health & happiness and lead the way to diverse, high quality, sustainable game development.”
Juney Dijkstra 0:00
Hello and welcome to the Game Production Community. Today we're hosting a playtesting masterclass, featuring game industry veteran, games user research consultant, author of the How to be a Games User Researcher book and, most recently, creator of the Playtest Kit, Steve Bromley. Over the course of the session, Steve will offer solutions to many of the barriers to regular playtesting, such as: where can I find playtesters? When should I run a playtest? And how should I gather feedback? There will be a live Q&A after, so feel free to vote for questions and post your own in the event Q&A channel. In the meantime, we'd love for you to share your thoughts in the event chat channel and engage with each other along the way. Please note that this session will be recorded and published on the Game Production Podcast as soon as we're able. I must thank our wonderful supporters, and in particular our event producer Angelo, for making this event happen. And with that, I would like to hand the stage, well, in this case the voice channel, to Steve.
Steve Bromley 0:55
Fantastic. Thank you very much, and thank you everyone for joining today. So today we're talking about playtesting, a subject that I'm really enthusiastic about, and I'm really happy to share this stuff with you.
Steve Bromley 1:07
A bit about my background and why I'm here today talking about playtesting. I am a user researcher working in games; you might be familiar with user researchers, but the idea is we plan and run playtest studies all day for a variety of game teams. I started with the team at PlayStation for the first five years of my career, which is a really nice place to learn how to be a games user researcher. PlayStation as a publisher obviously has a huge variety of studios, from huge ones like the Horizon team to tiny teams working on small PlayStation Vita games, and you get a great understanding of the range of playtesting experiences and the types of things you need to deal with from that.
Steve Bromley 1:48
I left PlayStation about five years ago. Since then, I've been working with a variety of game team sizes, from small indies to AAA studios, to run playtests and help them establish user research as a function. I also set up a mentoring scheme for games research, for people who want to do playtesting and games user research as a full-time career. And that led into, as Juney Dijkstra mentioned, the book that I released last year called How to be a Games User Researcher, which explains what a career in games user research looks like, what skills you need, and how you would go about getting those skills. I've also spent a lot of this year on the topic of what we'll be talking about today, which is trying to make playtesting accessible to every other game developer. Not everyone has…
Steve Bromley 2:56
So whenever I talk about playtesting, no one's against the idea. Everyone on this call recognises playtesting is a good thing to do and would say: yes, I love playtesting, I think it's really important. It's well understood in game development that iteration is the way that you find fun, that you finesse your ideas, and the more iteration you can do, the higher the quality of the game you're making at launch. And playtesting is one of those things that inspires that iteration. By looking at what players understand, what players can do, what players are finding difficult, you come up with a lot of inspiration for where you should be putting your attention and where iteration should be happening. So no one's against playtesting as an idea, but it doesn't happen as much as you'd think, or as much as people want to be doing it, because user research and playtesting takes a whole bunch of time and a whole bunch of money to do. Teams aren't in the position where they can regularly do playtests, or run the number of playtests or user research studies that they want to be doing.
Steve Bromley 4:00
That's a problem that really interests me and something I wanted to explore deeper. So I spent a lot of last year interviewing people who have to run playtests, but for whom it's not their full-time job. Leaving that user research community behind, I started to interview solo devs, designers, UX people, producers, community managers, QA people: lots of people who just have to run playtests at their job, or want to be running playtests. I wanted to understand: what is it that makes playtesting difficult? What are the bits that mean you can't run them as often as you want to? And what are some of the challenges we can fix to try and make user research and playtesting more accessible to all of our teams?
Steve Bromley 4:44
What I heard from those sessions is that playtesting is hard. There are difficulties finding participants, difficulties working out when you should playtest, and difficulties working out how to handle the playtest data. And as UXers and user researchers
Steve Bromley 5:00
we think we have the answers to some of those problems, because we're running user research all the time. It's our day job, we do it every day, we're hitting the same roadblocks, and we've worked out the best ways of overcoming them, or what you can do about them. What I want to achieve today is to start to explain some of that: to share what we've learned from the user research community and the UX community with teams who just have to run playtests, who aren't going to be dedicated user researchers. So that's what we're going to cover today: five lessons about what's hard about playtesting, and what we can do to make it easier.
Steve Bromley 5:36
And we're going to frame all of this around the playtest process. Regardless of whether you're a big team doing hundreds of playtests or a solo dev doing your first playtest, you're likely to go through some steps: planning, where you're working out what do we need to playtest, are we ready for a playtest; preparation, where you want to identify the right method to use, get the build ready, and design the tasks and the things you're going to ask; then some sort of data collection, which might be interviews, watching people play, or reading survey responses, but getting back the raw data from your participants. And then you're going to decide to take an action: you've learned something about your game, and you need to decide what you're going to do about what you've learnt. At the end, I'll share the checklist version of this, just so you have it if it's helpful, where you can see the steps people usually go through when they're doing a playtest. But our focus today is five specific areas of that playtest process. First, we're going to talk about the challenges of finding high-quality playtesters and how you can go about finding them. Next, we're going to talk about working out the right time to playtest: when is the right moment to run your first playtest, and when do you run your next playtest after that? Then we're going to look at how to make the most of your players' time. Once you've got some people committed to playtesting, what are you actually going to do with them, and what can we do to get the most value from the playtest we're going to run? Playtesting generates a huge amount of data, so we're going to look at how to handle that data and what we can do with it. And last of all, we're going to talk about deciding what to do next: you've learned something about your game, you've learned about some problems or some things you need to work on, so how do we decide what we're going to do about them? On the playtest process, that looks something like this. We'll touch on deciding the focus of your playtest and the appropriate research methods; we're going to start with recruiting playtesters, for reasons I'll explain in a second; we'll talk about analysing the raw data; and we'll talk about turning your conclusions into actions. Because we've only got an hour together today, we won't get to touch on all those other topics, which are equally important for playtesting. So first of all, I want to talk about: how do you find high-quality playtesters?
Steve Bromley 8:13
During those interviews that I had with game designers and developers, I asked them: how do you currently find your playtesters? What was very common was the idea of: we just use whatever's convenient. We might ask our friends and family, because they're around. Some teams have links with other game developers, either game development communities or university game development courses, and they would ask them to playtest their games and give them feedback. Others would ask their community: they would have some existing fans who enjoyed and played their game, and ask them to playtest it.
Steve Bromley 8:51
Unfortunately, as user researchers we know there are some challenges with using these types of users. Often, they're very different from the type of people who are going to buy your game. In the end, they know more about the game than a typical player would. If they're a game developer, they're coming at it with their game development experience and expertise. If they're your friends, they've heard you talk about the game, and they know what you think the vision of the game is. If they're your existing community, they've been following the development of the game, and they know lots of things that a regular player wouldn't know. Because of this, their opinions are different from a typical player's, their behaviour is different, and it all wraps up into feedback that is going to be different from the feedback of a genuine player who will play your game. And that's going to be a challenge. It means that when we playtest, although we're hearing some feedback and learning some things, we're not confident this actually represents the feedback and the behaviour and the views of the real people who are going to buy our game. And some of the stories I heard from game developers show some of the problems with that. For example, just using your existing
Steve Bromley 10:00
community to balance the difficulty of your game leads to games that are much too hard and intimidating for new players. That's a problem, and something we want to talk about how to fix today. Which brings me to my first fix: the idea of creating a pool of playtesters that you can call upon whenever you need to run a playtest. In the user research community, we call this a panel: a group of participants who are ready to take part in playtests when you need them. Although it is some upfront work, it's the type of thing where, once you put in half a day's upfront work to prepare and do this, it pays off throughout the game development process. Every time you need to run a playtest, you've got a group of people ready, and it's not that much effort to refresh it or build upon it. So it is one of those tasks that creates enormous value as you run more and more playtests down the road. The steps we're going to talk about for finding those playtesters are: first, defining our target players; working out where we're going to keep them once we've got them; creating an incentive, a reason why they want to take part in your test; finding where those players are and getting hold of them; and then working out how to convince them to take part. So, first of all, the idea of defining your target player. We talked about why game developers or your friends are not suitable, so we need to define who a suitable playtester is. What we know about our playtesters, the people who are going to buy our game, is that they probably also buy and play similar games to ours. And that's enough for us to start to find them. We can think: someone who's going to play my game, what other games do I think they're going to play? Start to list them out. We can think about where they're going to be buying games, what their process for buying games is. If you're making a game that's advertised on Steam, your target players will be people who buy games from Steam. Whereas if your game is on itch.io, for example, what you know about your players is that they're the type of person who looks for games on itch.io. From this, you can start to define your players by saying: I know what games they play, and I know where they buy games from. And in a minute, we'll be using that information to work out where to find them. What we recommend, though, is not basing that definition on demographics. A trap I see teams falling into is saying: well, I think my player is about 18 years old, or 18 to 25, I think they're male, I think they're based in the United States. That might well be true, but it isn't helpful for tracking down our players and finding them, and it can also quickly lead to stereotyping. I think these days the actual range of people who play games is a lot broader than people imagine. So avoiding demographics is a great way to make sure we are targeting our recruitment on criteria that are actually important, rather than falling into those stereotypes.
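To make that behaviour-based definition concrete, here's a minimal screener sketch in Python. It isn't from the talk; the game titles, store names, and survey field names are hypothetical, and in practice the questions would live in your sign-up survey tool.

```python
# Hypothetical screener: qualify playtesters by behaviour (games played,
# where they buy), not demographics. Titles and fields are examples only.
TARGET_GAMES = {"Hollow Knight", "Celeste", "Dead Cells"}  # games similar to ours
TARGET_STORES = {"Steam", "itch.io"}                        # where our game sells

def qualifies(answers: dict) -> bool:
    """answers comes from a short sign-up survey."""
    plays_similar = len(TARGET_GAMES & set(answers.get("recent_games", []))) >= 1
    buys_where_we_sell = answers.get("main_store") in TARGET_STORES
    not_a_dev = not answers.get("works_in_games", False)  # avoid game-dev bias
    return plays_similar and buys_where_we_sell and not_a_dev

candidate = {"recent_games": ["Celeste", "FIFA 23"], "main_store": "Steam",
             "works_in_games": False}
print(qualifies(candidate))  # True -> add them to the playtest mailing list
```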
Steve Bromley 13:11
Okay, so we've thought about who our players are and what games they play. The next thing to do to create this pool of playtesters is find a place to keep them once we've got hold of them. My recommendation is creating a playtest mailing list. There are tools like MailChimp or ConvertKit, which are free until you hit 1,000 users, I think, and mailing lists are a very reliable way of keeping hold of players, unlike pushing them to Twitter or your other social media accounts, or even Discord. These days, people are in so many Discords and follow so many accounts, and with things like Twitter or Facebook or Instagram you're beholden to the algorithm deciding whether it's going to tell them about your playtest or not. The most reliable way to get hold of people is by email. So the next step: create a blank mailing list where we're going to put these players once we have them. So we have defined who our players are (we know they're people who play similar games to ours), and we've decided where we're going to keep them (we've made a blank mailing list that currently has no one on it). Next, we need to think about what we can offer them to make them take part in playtests. This depends on your resources and what you have available. Obviously, if you are a big studio somewhere like PlayStation, you have money, and you can give playtesters money to take part in your playtests. But for a lot of teams that isn't feasible; we're all tight on money and can't give away money for playtests all day. So instead we need to be more creative about thinking: what do we have that has value to our players that we can offer them? One thing you could look at is recognition, whether that's a special role on Discord, credits in the game, or naming characters after them in the game. There are ways you can explore giving them recognition for what they're doing, which is valuable to some. Something I've seen be very successful is partnering with other studios similar to yours and offering keys for games. What you can do with your partner studios is agree to share keys with playtesters, and then when someone signs up to take part in a playtest, you can offer them a choice: you can have a key for any one of these five or so games for taking part in our playtest. What's quite nice about that is that because you're offering a range of keys, you avoid biasing your group based on what you're offering; you're offering a variety of games and attracting a variety of players. This bit takes some work, though. If you do have money, it's a great place to spend money, but without money you have to be creative in thinking about what you can offer players to take part in your playtest. If you don't offer anything, the problem is you're only going to get people who would do this for free, and again, that's a huge bias: you're only going to get your friends, or people who like you personally, to take part. Next, we need to find these players, and this is often the most difficult step. As we talked about earlier, we know something about these players: we know they play similar games currently. What we want to do is find their communities, their watering holes. Where are these players hanging out and talking about games currently?
Sometimes that might be obvious: if a competitor game has a subreddit, you might look on Reddit for that game, and you've found a whole bunch of people who might play your game. But if you've got no idea, Google often turns up great places to look. If you type in the name of the competitor game and then just some added keywords, for example 'help', 'forum', 'question', 'community', it turns up all these obscure internet forums where people are talking about your competitor's game. And because we know people who play your competitor's game might also play your game, these are the types of people who might take part in your playtest. I really like 'help' as a thing to search for, because people asking for help with a competitor game are likely not the extremely hardcore expert players who play a huge amount; people asking for help are more likely to be typical players. And that's really valuable, finding those typical core players for your playtests.
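As a quick illustration of that search trick, here's a tiny sketch that builds the "competitor game plus keyword" queries Steve describes. The game names are made up; you'd paste each query into a search engine by hand.

```python
# Hypothetical helper: generate "competitor game + keyword" search queries
# for finding the watering holes where typical players hang out.
KEYWORDS = ["help", "forum", "question", "community"]

def watering_hole_queries(competitor_games):
    return [f'"{game}" {kw}' for game in competitor_games for kw in KEYWORDS]

for query in watering_hole_queries(["Slay the Spire", "Monster Train"]):
    print(query)  # paste each into a search engine to surface fan forums
```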
Steve Bromley 17:44
Some other creative ways to think about your community, or where to find them, are to think about the USP, the unique selling point of your game, what's special about your game, and then think about whether that reveals any non-game communities you might want to approach. So it's not just competitor games you might want to look at. If you make a music game, there are places where people interested in music hang out; they might be interested in your game, so let's go find them there. If you've got user-generated content, there are communities for user-generated-content people. If your game has dinosaurs, you might find a community of people who care about dinosaurs. All of these are great places to look. And of course, there are also generic places to look. There are game development and playtesting forums to start off with, like Reddit's r/playtesting, or Reddit's Destroy My Game (I think it's the DestroyMyGame subreddit), where people offer playtesting. They're slightly risky, because they're usually frequented only by other game devs, which, as we talked about, means you're only getting feedback from other game devs, and game devs aren't like your real players. There are, however, other subreddits that aren't only game devs, like generic work-for-hire communities, or those community ones, where you might be able to find appropriate players. Or you can look at offline places: arcades, coffee shops, university campuses, places where typical players hang out. If you do go broader with this playtesting recruitment, you want to make sure they are your right type of player. The good thing about the communities for competitor games is you're already pretty sure they're the right type of player. But if you're going really broad, like a generic game dev community or a work-for-hire community, or you're going around coffee shops, you want to check that they're the right type of player by asking them some questions before they take part. This all adds up to the last thing. We've worked out where they are and who they are; how do we convince them to take part? Again, this is an iterative thing we need to work on. We start by writing a post, a recruitment description, saying why you'd want to take part in this playtest, highlighting the USP of your game and what is interesting and compelling, and the incentive, if you're able to offer a code for another game or recognition such as credits. With all of that, you want to create an offer that says: please join my mailing list to take part in playtests in the future, and sign them up to that playtest mailing list. This is an iterative thing: on the first go, you can look at people's responses, see what people are saying about it and whether they're rejecting your offer, and then iterate when you join the next community to share it with that group. What you get at the end of this is a mailing list full of people who are ready to take part in playtests. And then when you're coming to run any individual playtest, it gets a lot easier. You can use calendar tools (Calendly is one) where you just open up slots for playtests, and you can email your mailing list to say: we've opened up five slots for playtests, come and join us. And they book their own slots using those tools.
Or if you're doing some sort of remote method, where you just want to run a survey or just want them to do something in their own time, you can mail that mailing list directly. Ultimately, recruitment is one of the more challenging steps of playtesting. If you're a studio that has a huge amount of money, this is the type of thing you might want to outsource, and there are participant recruitment companies who handle all of this. But a lot of the teams I work with don't have the money to outsource it, which is why this process of setting up a panel is, I think, really important. And the reason it's really important is that if you fail to do this, if it doesn't work, all the other steps of playtesting are affected. If you're not finding the right participants, you're going to get irrelevant feedback, things that don't represent what your real players think, and it's going to be a lot more challenging to get any value from your playtesting process.
Steve Bromley 21:53
So, I appreciate I went into quite some detail about finding the right type of players, which I think is really important. But that wasn't the only challenge I heard from the developers I was interviewing about the playtesting process. The second challenge is working out the right time to playtest. In the chats I had, there were so many reasons why people put off playtesting. They might think the game's not ready yet: I need to wait until I've got a vertical slice. They might assume that players can understand it, it's fine. They might think: I should wait until the final graphics are in, and there's no point playtesting until then. But playtesting, as we talked about, is an iterative process. The longer we wait, the team's focus moves on and they stop paying attention to the things they've just built; it'll be harder for us to make changes, because some of the decisions we've made have bedded in. And it also just reduces the number of iterations we do on the game. Rather than being on that red line where we're getting better and better each month, we're stuck on that blue line where the game stays the same as it currently is, and we're not doing the amount of iteration we could. Ultimately, this leads to underwhelming launches. You're going to find out about those problems whether you playtest or not; the decision you have to make is: do I want to find those problems before launch, when I've got time to deal with them, or when I read them in my Metacritic reviews or Steam reviews after the game launches? So, because of this challenge, the second thing I want to talk about is how to identify the most important things to test early enough that we can do something about them. The model I talk to game teams about for handling this is to recognise that every decision they're making in the game design process generates a hypothesis. Whether you're deciding what the game should look like or what the game should do, you have some assumptions about what players will understand, what players will be able to do, and how players will experience a feature. The first step, then, is to look at the decisions the team is currently making and regularly reflect on: what are we working on? And from that, generate some hypotheses about what we expect players to do, or what we expect players to understand. This can look a bit like a hypothesis log, saying: we think players will do something because of this game design feature. For example: we think players will know what their health is, because the UI cues are sufficient to indicate when they're going to die. We think players will notice and hit the weak spot, because we made it flash red. We think players will understand the story, and understand that this character is lying, because they'll remember what they learnt from the previous scene. From that, you can generate a list of hypotheses about your game. And this is a thing you can rank. You can ask your team, or ask yourself, some questions. Is this hypothesis a core part of the experience: do we need this to work for the game to be successful? Is it a thing we're able to make changes to, or has the team moved on so we can't do anything about it? Is it risky: is it a thing we've done before and know everything about, so there's no risk here, or is it a thing we're uncertain about, something we've never done before, where we don't know if it works or not?
And from that, you start to prioritise your hypotheses. You get to work out which things we definitely need to playtest, and which things we can put on the backlog and get to if we have loads of time, but it doesn't matter if we don't get there. I know this is a production community, so you're experts on how to manage a backlog; whether you're managing your backlog in Jira or Trello, or however you approach backlog management, once you've got that hypothesis log you can take it into your product management tool and add it to your backlog. One of the most common delays I hear from these teams is: we've got to wait until it's ready, we can't test just this feature in isolation. And I think that's not the model that most successful teams use. Recognise that we can test one of those single hypotheses in the most guerrilla and easy way possible. We can mock up all the stuff that isn't ready; if it's not relevant to that hypothesis, we can ignore it. We can step in, if we're moderating these studies and we're with the players live, and explain the stuff the tutorial would explain. It's okay that there are bugs, because we can explain how to overcome the bugs; we're not looking for bugs.
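Here's a minimal sketch of the hypothesis log described above. The scoring rule (core plus changeable plus risky) is one illustrative way to rank, not a prescribed formula, and the field names are assumptions; the same structure works fine as columns in a spreadsheet or Jira fields.

```python
# A minimal hypothesis log: each game design decision becomes a testable
# statement, ranked by how core, changeable, and risky it is.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str      # "Players will notice the weak spot because it flashes red"
    core: bool          # must this work for the game to succeed?
    changeable: bool    # can the team still act on the result?
    risky: bool         # is this unproven for us?

    @property
    def priority(self) -> int:
        # True counts as 1, so priority ranges 0..3
        return sum([self.core, self.changeable, self.risky])

log = [
    Hypothesis("Players will know their health from the UI cues", True, True, True),
    Hypothesis("Players will hit the weak spot because it flashes red", True, True, False),
    Hypothesis("Players will reuse a menu pattern we've shipped before", False, True, False),
]

for h in sorted(log, key=lambda h: h.priority, reverse=True):
    print(h.priority, h.statement)  # top of the list = playtest first
```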
Steve Bromley 26:34
When we're focusing on a specific hypothesis, we should be free to mock up all the things that aren't ready, ignore all of that, and make sure we don't let it stop us from testing our hypothesis. Again, the issue I see is teams believing they have to wait until they have a vertical slice, or wait until the game's far enough in development, and in real life that's often much too late for effective playtests. The ethos I want to promote with the teams I work with is: even the smallest test is better than nothing. Just getting two or three people doing very informal tasks, like putting them in front of it and seeing if they understand it or can use it, flags: is there a problem here? And yes, that's not a perfect research study; yes, there's a whole lot more you can do. But just by doing these small activities regularly, once a month let's say, it flags where the problems are, and you can go and plan a bigger study when you need it, because you've worked out which parts of the game need it and where it's not important. Ultimately, by going through this process of recognising our hypotheses, listing them, and ranking them by risk, we end up with a list of the most important things to playtest, and we'll recognise as early as possible when there are critical things we need to be playtesting. So, the third thing I'm going to talk about today is making the most of your players' time. People believe, fairly I guess, that playtesting can be difficult. That process I talked about of going and finding playtesters feels like a lot of hassle; actually running a playtest and creating a build feels like a big deal, which is true. And because of that, it's important for us to make the most of the time we do have with players. When I was interviewing the developers about how they approached playtesting and its challenges, I asked: okay, how do you currently run your playtests? How do you currently collect your data? The most common methods I heard were: well, I send out a survey, they fill out the survey, and I read the survey to see what was good about the game and what's bad about the game. Or: we have a Discord channel for playtests, people just put their comments in the Discord channel, and I read through the comments every couple of weeks to see what people are saying about our game. Again, I can see why teams do this. It's convenient, because it doesn't take a huge amount of effort from us to send out a survey or read a Discord channel. But we're missing a huge amount of valuable data by relying on just those methods. Both of those methods require players to recognise the problems they're having and be able to describe them. That's what we call self-report, where people are self-reporting the problems they have. When people are self-reporting, we're missing a lot of really valuable data about what players are actually doing in the game. How did they approach the challenges they encountered? Did they miss a feature or a mechanic entirely? Players won't know; they won't be able to tell you on a survey 'I didn't know this existed', but you, watching a session, will see that. Players often don't realise they've misunderstood something, and again, from Discord feedback or from surveys, you're not going to pick up what they haven't understood. And these methods are reported at the end.
So you're only getting their reflection at the end of the playtest session; you're not seeing their experience throughout. Did they know beforehand that this twist in the story was coming? Why did it take them a whole bunch of attempts to complete this puzzle? You miss that by only asking questions at the end. So another challenge we have is how to pick the right playtest method: how to identify what we should actually be doing with our players when we have them. And this comes back to the decision-making process we talked about earlier. Every time a gameplay decision is made, we have a hypothesis, and to answer a hypothesis we need to match it with the right method. I'll talk about some methods in a second, but making sure that it's the hypothesis that's informing the method is the important part here, rather than just starting by thinking: I'm going to run a survey, or I'm going to read some Discord chat.
Steve Bromley 31:08
Different methods have different specialties. Some of them are good for learning what players do, some are good for learning why players do that, and some are good for players' opinions, what they like or don't like. Some of the most common methods you might want to explore include: live observation, which is watching players as they play and observing the problems they have; pre-recorded observation, where players send you a video of their playthrough and you can rewatch it to see what they thought as they played; surveys, as we talked about, where players mark the game at the end, give ratings, and write what they thought was good and what they thought was bad; interviews, where you're asking questions live to players (what did you think? why did you think that?), the types of things that allow you to probe really deep into the detail of what players are telling you; or analytics, where the game itself is capturing player behaviour and telling you what players are doing. There's a whole bunch of methods there, and each of them would take a much longer session than this to go into in more depth. But I think teams get a huge amount of value just by watching a few people play. Although teams often default to 'we're going to run a big survey, we need loads of players', even just watching two or three players play your game, asking them questions as they play to see what their understanding is and what's going through their head, and explaining the behaviour you're seeing them do, gives a much richer form of information and is much more actionable for a team than a survey that 1,000 people have filled out, which is often very hard to translate into action. The setup to do this can also be really simple. Usability labs feel like a high-tech thing, but ultimately you can just do it with a laptop, running OBS, for example, to screen record. If you're lucky enough to be doing this live, where you're in the same room as a playtester, you can just plug in a second monitor to watch what they're doing. Or you can do all of this remotely: set up a Zoom call and watch them play over it, or do the same with Google Meet, which is free. Both of these also allow you to record the session, which is really helpful. Or if you're in person, again, you can just hand them a mobile phone or a device. As I mentioned, OBS and the Windows Game Bar both allow you to record gameplay and also them talking out loud. And then you get a huge amount of really actionable data you can use to inform your iteration in your game development. So ultimately, what I've tried to convey here is the importance of matching the method to your playtest goals. What do you want to learn, and what is your hypothesis? As a rough rule: if you want to measure something, like do players like this, is it easy or is it hard, those are measurement questions and you want to do a survey. If you want to understand what players are doing in your game, you want to watch them play, either live or via pre-recorded footage they've recorded themselves. And if you want to understand why they're doing that thing, you want to ask them; that's most commonly an interview, either after they've played, or asking them questions as they play, to understand why they're doing the things they're doing in the game. And all of these methods can be combined.
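That rough rule can be written down as a simple lookup. The three buckets and their mappings come from the talk; encoding them as a dict is just one illustrative way to keep the rule in front of you when planning a test.

```python
# A rough-rule lookup: match the kind of question your hypothesis asks
# to a playtest method, per the guidance above.
METHOD_FOR_QUESTION = {
    "measure": "survey (ratings: do players like it? is it easy or hard?)",
    "what":    "observation (watch live, or pre-recorded footage)",
    "why":     "interview (ask after play, or question them as they play)",
}

question_kind = "what"  # e.g. "what do players actually do at the boss door?"
print(METHOD_FOR_QUESTION[question_kind])
```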
What I've hoped to get across in this section is that although teams default to surveys or Discord chats because they're easy, you're leaving a huge amount of data on the table. You actually need to base your decisions on what your team needs to know and what your hypotheses are, rather than just taking what's easy. A couple more things I want to talk about that we heard from those interviews with game developers about playtesting. One of the challenges that came up a bunch was: there's a lot of data from playtests, and how do we take all that raw data, all those observations, and turn it into actions and decisions about what to do? And it is true, playtests generate a huge amount of data. Depending on your methods, you might have player behaviour, things you saw them do in the game; you might have telemetry or analytics, things the game has reported that players are doing; you might have quantitative survey ratings, where you've asked players to rate different bits of the game; you might have qualitative comments, where people have said things about the game or written a survey response about what they think; and you might have interview data, where you've asked them questions about the game. That can seem overwhelming; it's a huge amount of information to deal with. And so I want to talk about how you deal with that playtest data.
Steve Bromley 35:48
The first step is always to go back to: why are we running this playtest? What was the focus of our playtest, and what are we trying to learn? Again, I'll share the checklist at the end of the session, but that first step is usually: let's decide the focus of our playtest, what we want to learn from playtesting today. Once you've got your data, this is a great point to remind yourself of that, so you don't get distracted. From there, you want to treat different types of data differently. Some of that data is going to be things that players did, what I'm going to call behaviour data for the moment. Other things are going to be players' opinions, or players' comments about the game, and that's going to be opinion data: what players think about the game. We're going to talk about how to treat each of those differently. First of all, behaviour data. Behaviour data is great, and as user researchers a lot of our focus is on players' behaviour: what they do in the game, where they got lost, where they didn't see where they were meant to go, what they didn't understand. It's all stuff that players did or didn't do. Behaviour data is objective: we saw it happen, we know that player got lost. And because it's objective, a truth, it's the safest to take action on. Some examples of what that might look like: we might have observed that a player failed to find where they should be spending the in-game currency, because they didn't see the correct menu option. Or we observed that players wandered the wrong way in the level, because they didn't see the door. These are things that definitely happened, and things we can take action on. The steps to take action on them are reasonably simple. First, we need to understand why it happened. So look back at our notes, or our videos, or reflect on: what did we see that caused that issue to occur? And that's always something about the game: what is it about the game that caused that issue? Then we need to decide on the appropriate fix (we'll talk about fixes in the last section): okay, we know there's an issue, it objectively happened, what are we going to do about it? And then prioritise that fix; again, talking to the game production community today, you are the experts on prioritisation, on fixing usability issues versus other things. Behaviour data is really simple to action, because we know it's true. The trickier one, and the one where I see teams go wrong more often, is opinion data. That's when players have told you what they think about the game. They've said: this enemy is annoying because he's really difficult. Or: this game would be better if it had a different feature, for example a shotgun, like the angry Candy Crush player in this example. There are extra steps when you're dealing with opinion data, because we can't just go and do what players tell us we should do. Players aren't experts at game design; we, the game developers, are the experts on game design. Players don't have enough context to give good suggestions. They don't know what's feasible in the engine. They don't have expertise in design or UX or game development. They don't know what's feasible in the timelines we have, what we've tried before, or what iterations have occurred previously that led the team to the decision they made today.
Because players don't give good suggestions and their feedback is more risky, we need to take some extra steps. First, we need to understand what happened in that session that led to players giving that feedback. Again, that looks like remembering what you saw in the session, looking back at your notes, or watching the video again, to work out why that feedback occurred. Then you think about: is that the experience we want to make? Some games are meant to be hard. If you're making Bloodborne or Elden Ring, something like that, just because players say it's hard doesn't necessarily mean we need to make a change. We need to reflect on the designer's intent and what this game is trying to achieve before we decide whether this issue is even worth fixing, or whether it's just what we intended to make. That means we can then decide whether action is appropriate, and again, this community is expert at this: we can then work out how important it is to fix this issue. We'll talk about an approach to fixing issues in a second.
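Here's a small sketch of that behaviour-versus-opinion triage: behaviour observations are objective and can be actioned directly, while opinions first get checked against design intent. The data shape and field names are hypothetical, just one way to encode the flow described above.

```python
# Triage playtest findings: behaviour data is actioned directly; opinion
# data is first checked against the designer's intent.
def triage(item: dict) -> str:
    if item["kind"] == "behaviour":
        # e.g. "player never found the upgrade menu" - objectively happened
        return "understand the cause -> choose a fix -> prioritise"
    # opinion, e.g. "this boss is too hard"
    if item.get("matches_design_intent"):
        return "intended experience -> no change needed"
    return "ignore the suggestion, find the underlying problem, design your own fix"

print(triage({"kind": "behaviour", "note": "wandered wrong way, missed the door"}))
print(triage({"kind": "opinion", "note": "too hard", "matches_design_intent": True}))
```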
Steve Bromley 40:24
Ultimately, the advice I have for game developers in this space is: don't forget that you're the expert. It can be really easy to go down a rabbit hole where, because players are telling you to make changes, you start making the changes players raise. That's often a mistake, because players don't have your expertise or your context, and they won't be able to come up with good suggestions. They can just tell you there's a problem here. You then want to ignore their suggestion, go back to the root of that problem, understand why a problem potentially exists, and use your design and development expertise to actually make the change. So, the last thing I want to talk about today is deciding what to do next. One of the challenges is: okay, we've learned about this problem with our game, how do we decide whether we're going to do something about it? The first thing to think about is: what was the impact of this issue on the player? From the playtest, you'll have some observations or comments that tell you what the impact on the player experience was. Was it that players got lost but quickly found where they were meant to go, so there wasn't a long-term problem? Okay, that's not a huge priority to fix. Or was it that players got lost and, because of that, never finished the level? That's obviously much more significant. And although I don't cover it in the session today, user researchers have a series of criteria we go through to assess whether an issue is significant, which I can share if someone asks on Twitter afterwards. From that, you can work out your most significant issues, and then decide how to fix them. I know everyone usually groans when we talk about ideation and workshops and collaboration, but there are a lot of benefits to facilitated ideation workshops, where you can bring in all these disciplines to work out a complete fix. What that might look like is going through individual thinking, where everyone applies their individual expertise to how to fix this issue, some sort of group reflection, and then deciding as a group what the ultimate solution is. Regardless, you want to work out as a team how to fix that issue. I personally, as a user researcher, often like working with teams to run that type of ideation and help them get to the right solution for their game. Obviously, that's only possible for larger teams, where there's more than one person; if you're a solo developer, you have to ideate all by yourself. Prioritisation: again, I appreciate everyone on this call is an expert on prioritisation, but some of the things as researchers we might recommend thinking about are: we've thought about the impact on the player, so we've got a rating of whether it's a low-impact or high-impact issue, and we compare that to effort, how much effort it would be to fix. Plotting that can quickly reveal the most important fixes, and those can be taken into your backlog to take forwards.
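A minimal version of that impact-versus-effort plot is just a sorted list: high impact and low effort floats to the top. The 1-to-5 scales and the example issues here are made up for illustration.

```python
# Rank playtest issues by impact-to-effort ratio, a simple stand-in for
# the impact/effort plot: take the top entries into the backlog first.
issues = [
    {"issue": "Players never find the crafting menu", "impact": 5, "effort": 2},
    {"issue": "Tutorial text has typos",              "impact": 2, "effort": 1},
    {"issue": "Boss arena needs full re-layout",      "impact": 4, "effort": 5},
]

for i in sorted(issues, key=lambda i: i["impact"] / i["effort"], reverse=True):
    print(f'{i["impact"] / i["effort"]:.1f}  {i["issue"]}')
```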
Steve Bromley 43:36
But ultimately, I hope that some of what we've talked about helps overcome some of the issues we see with playtesting. Iteration, as we flagged at the beginning, is key to the game development process and is the way you get to high-quality games: the more iterations you can do, the better. Playtesting is an ingredient for iteration, some fuel or inspiration for what you might want to do. And because of that, I think the more playtests we can run, even if they're those lo-fi, guerrilla, mocked-up playtests rather than a big 1,000-player survey, the more iterations we get and the closer we get to a higher-quality game. The last thing I want to leave you with is a few tools, things that can help. One of the messages I hope I've got across today is that playtesting doesn't have to be a big deal. One of the barriers we hear from game teams is that playtesting just seems like too much effort to be worth doing, and I appreciate that. Game development is a very time-pressured environment; it's hard to prioritise things. I hope some of the techniques we've talked about at least get you a step towards playtesting being a lower-effort activity, easier to do. The vague shape of the steps I think teams should be thinking about is: think about what you want to learn from your playtests; make pragmatic decisions about how to gather that data (it doesn't always have to be a big playtest, it can just be: let's put it in front of three people and see what happens); interpret that data correctly and prioritise it, so you focus on the most important things; and then make a sensible decision about what to do about the issues you've seen. Some of the teams I work with recognise that, yes, it's not the full research process you'd get with a user researcher, but at a push you can fit a lot of this into just a single day. And hopefully it's not a huge commitment to try and do a single day of playtesting every month or every two months, to increase the number of iterations you go through and help make games better. Sometimes playtesting can be more complicated. There are things you might want to do, like retention studies looking at where people drop off over months of play, or massive multi-seat playtests where you're getting lots of people to play at the same time together, where you might want to call in specialists. And that's cool. As I mentioned at the beginning, I wrote a book about how to do this as a specialist job, How to be a Games User Researcher, and there's also a very active games user research community of consultants and agencies who can help out. I think there's a link to them on the gamesuserresearch.com website, but also, if you're ever looking for a consultant, I can put you in touch with people if you're interested. Why I'm talking about this today is because, as I flagged at the beginning, playtesting doesn't have to be a big deal, but I recognise there are a huge number of barriers for normal game developers who aren't doing user research full time that make it hard. And some of the things that can help are tools and templates. The Playtest Kit that I launched earlier this year aims to do that: here are the tools and templates to get you started, so that playtesting isn't a big problem. The Playtest Kit is, hopefully, a repeatable playtest process that just lets you get started with this.
I'm not gonna pitch it too much more, but lots of nice people have said nice things about it: hey, it's like having a consultant, it makes game development easier, it's practical, it's useful, it's a thing for game developers, not for user research specialists. If it's a thing you're interested in, the website is playtestkit.com. I mentioned that checklist; there's a place you can sign up on there to get that playtest checklist. And if you decide you're interested in this, thank you for listening for the last hour, you can get a discount with the code 'game production', which is just for this community. But regardless of whether you're interested in the Playtest Kit, I am really enthusiastic about playtesting. I love talking about it, I love the challenges of it, and I love helping people get better at playtesting. I'm really active on Twitter if you want to follow along and talk about playtesting, or just drop me an email if you want to know more about playtesting. But I'm just gonna end on saying thank you. Thank you for taking the time to listen today and hear a bit about playtesting and some of the best practice. I think we have a little bit of time for questions. So thank you very much for your time today, I really appreciate it.
Juney Dijkstra 48:19
Thank you so much, Steve. It's kind of fascinating how hearing you share all of these learnings out loud adds another layer of depth. Of course, I've read the material, and I've been doing playtests for a long time, and yet listening to you talk about it still triggers new thoughts in my head, new bits on top of reading the theory. And that does lead me to a full-disclosure note: I was one of the people humbly invited to review the Playtest Kit prior to launch, and I have to say I was quite genuine when I wrote that I wish I had this kit when I first started designing interactive experiences a very, very long time ago. So I would strongly recommend it myself. In the meantime, we've got questions coming in in the event Q&A channel. For everyone listening, please add your questions there and, of course, vote for questions posted by others to increase the chance of having them answered. We may not be able to get around to everything, but we'll take a stab at it. First up, just super practically: Steve, will the slides be available publicly afterwards?
Steve Bromley 49:30
Yes, yeah. I don't know the best way to share them with the community, but no problem sharing the slides. I'll follow up with you after, Juney, and then you can share them if you're comfortable doing so.
Juney Dijkstra 49:40
Yeah, absolutely, we'll make sure to share that together with the recording. All right, in that case, moving on to questions posted by community members, and I see them all trickling in now. How much information should we divulge to playtesters before they do the playtest?
Steve Bromley 49:58
That's a great question. The answer to any sort of user research or UX question is: it depends. I think the root of it is to start by thinking about what we're trying to learn from this playtest, what our objectives are. And then from that: what do we need players to have understood beforehand? For example, if they're skipping the tutorial, we need to cover the things the tutorial would have taught them. But also: what do we need to not reveal to players? If we're interested in seeing whether they understand how to craft an item, we need to be really careful with how we ask players to do that, so we don't reveal where the item is, how to craft it, or which menu crafting is under. So, a nice generic answer to the question, I guess: you want to divulge the minimum possible, to avoid revealing information the player wouldn't naturally have, but also make sure you're covering off the stuff players wouldn't otherwise know in real life. What would they have got from a tutorial? What would they have seen out in the game? As long as you're not trying to learn about the tutorial itself, that type of thing should be fine.
Juney Dijkstra 51:17
So, we have a few questions that I can try to group together a little bit. You've mentioned that a lot of folks you spoke to report that finding playtesters is pretty challenging. But what we might also wonder at that point is: are there pros and cons to asking the same testers to test different iterations? Or would it be beneficial to have new players each time, especially in a scenario with, for example, a game that's a one-time experience?
Steve Bromley 51:54
Yes, and I try really hard not to answer 'it depends' to every question. The nice thing about the panel is you do keep those players: if you've signed them up to take part in playtesting, to be contacted about playtests, you can email them again and get them to take part in your playtests again. But it depends on what you want to learn from your playtests. If your objectives are around long-term play and opinions and those types of things, yes, you may get those same people back. But there are some things you just can't learn from bringing the same player back. If you're interested in 'do players understand this?' or 'will they learn it from the tutorial?', you can only use that player once, because after they've gone through your tutorial, even if they had some issues, the next time you show it to them they won't be a genuine player: they'll have all that prior information from the last time they playtested. So again, I guess it's reflecting on: what do we want to learn from this study? Do we need new players to answer that, or is it okay to have them come back? Usually, if you're looking at players' understanding of what they're doing, or teaching players how to do things, you will want new players. Otherwise, it can sometimes be okay to use the same players again, which is very useful when we've got that panel of playtesters ready to go.
Juney Dijkstra 53:26
And on time, Steve, do you mind running a couple of minutes over for a few more questions? Yeah, I'm good too. Okay, great. So then Maxim asks: with regards to GDPR regulations, are there any serious concerns to keep in mind when establishing panels and, for example, collecting and processing data from players?
Steve Bromley 53:45
That's a really good point. What I quite like about MailChimp and ConvertKit, the ones I mentioned, is that they do handle a lot of that for you. Because they are mailing list providers, they have to deal with GDPR and personal data handling: all the things about users being able to withdraw their consent, and users understanding why they're on that list. They do a lot of the heavy lifting to make sure that's covered and looked after. I guess it's always useful to be aware of GDPR; I don't know if everyone's aware, but GDPR is European legislation about keeping people's personal data, and it's always useful to be aware of those things, because there are risks in keeping data that you don't need or aren't going to do anything with. But the nice thing about using an established mailing list provider is they've already thought about this and handle it for you, compared to if you just kept your own Excel spreadsheet of playtesters, where you're much more likely to fall foul of keeping the data in an inappropriate way, keeping inappropriate information, or not allowing people to revoke access to their data. So, yeah, the mailing list providers help with that.
Juney Dijkstra 54:56
Great. Radu, from Architect Labs, asks: how do you make sure that NDAs are enforced and respected when going for a play test?
Steve Bromley 55:09
Oh, that's a great question. And not to make it too much of an advert, but on gamesuserresearch.com there's a section of free lessons, and just this week we did one about preventing leaks and keeping games secret. So there are a couple of aspects to this. Obviously, it starts with getting people to sign an NDA, and also, if you are doing live play tests, reiterating that with them: making sure you talk through "this is what it means, and this is why it's important", and remembering at the end of the session to reiterate that and say again, "you signed the NDA, remember that". What are some other techniques to think about? There's a link in that free lesson to a talk by Bob Tilford, who's now at Rockstar Games. He talks about the value of building rapport: being friendly with your play testers helps prevent them from breaking NDAs or doing things in secret. He also covers a whole bunch of concrete things you can do: you can watermark your builds, you can avoid sending a build directly by streaming it through something like Parsec to reduce their access to builds, or you can bring people to play test live with you and take away their phones so they can't take pictures. Ultimately, it has to be admitted that leaks may happen, and that is a risk that, as user researchers, we need to be very aware of. Because of that, we need to be careful about picking the right methods, the most secure methods, and doing things like making sure NDAs are well understood and our players understand why it's important to us to enforce them. If you're interested in more information, that free email lesson on keeping games secret is on the gamesuserresearch.com website and talks more about this topic.
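As a rough illustration of the watermarking technique mentioned above (one possible approach, not anything the talk prescribes), here is a minimal sketch of deriving a per-tester, per-build code that could be rendered unobtrusively on screen, so leaked footage can be traced back to its source. The secret key and IDs are hypothetical placeholders.

```python
import hmac
import hashlib

# Sketch of per-tester build watermarking: derive a short code unique
# to each tester + build pair and render it subtly in the build, so a
# leaked screenshot or video can be traced to whoever received it.
SECRET_KEY = b"replace-with-a-real-secret"  # hypothetical placeholder

def watermark_code(tester_id: str, build_id: str) -> str:
    """Derive a short, hard-to-forge code for this tester and build."""
    msg = f"{tester_id}:{build_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:8]

# Store the mapping when handing out builds...
issued = {watermark_code("tester-042", "build-1.3"): "tester-042"}

# ...and if footage leaks, read the code off the video and look it up.
print(issued.get(watermark_code("tester-042", "build-1.3")))  # tester-042
```

Using an HMAC rather than a plain counter means a tester can't guess or forge someone else's code, which is why the deterrent holds up even if testers know the watermark exists.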
Juney Dijkstra 57:02
And if I can add a little bit to that from personal experience: what I've found is that a lot of playtesters are not aware that leaking can actually damage the rest of your game development process, your sales, and the continuity of your business. Often playtesters are very passionate and eager to engage and talk about your product, because they care about you as a developer and because they care about your game. And that means that if they don't realise it can hurt you, that's also something that helps to point out to them, because that way you also make clear why the NDA is important. And instead, you could channel all of that passion and energy they feel about talking about your game into, for example, a private Discord server where you have gathered your playtesters, or doing something with that mailing list, to make sure that they have an outlet for where to put that energy. I don't know, maybe you totally disagree with that, but that's what I've done personally.
Steve Bromley 58:05
I think that's a fantastic point and really good experience. Thank you for sharing.
Juney Dijkstra 58:10
Then Nat asks: how do you feel about using the "how might we" framework for ideating solutions once you identify issues?
Steve Bromley 58:18
I'm a huge fan of "how might we"s and use them. I guess those types of ideation sessions need to be actively facilitated, and for a lot of people who are further away from design disciplines, they might not immediately be familiar with "how might we"s or see how they should be used, so you'll need to introduce it and give some examples. But it is my preferred way of prompting. Just for the wider group, since not everyone might be familiar: one of the ways that you might address fixing problems in the game is setting the team a prompt like "how might we help players realise where they're meant to go?". That kind of thinking helps encourage recognising that there's not just one potential solution; there are a hundred different ways you could address that issue, and working through a process to reflect on and refine those ideas is, I think, really valuable. So yeah, big fan of "how might we"s.
Juney Dijkstra 59:23
Yeah, I wanted to sneak in a question of my own in between the ones from Discord. Something that I've struggled with a little bit myself is that I've occasionally not been sure how much time it was worth spending in total on any given play test. So when does the return on investment start diminishing, both in time and money? You're investing a lot of people-hours into this, and that's not cheap; those are hours that they can't spend developing. So, any advice on assessing how much time you should be spending?
Steve Bromley 1:00:00
I think that's a good point. So at the beginning of the talk, we talked about ranking the risk of your hypotheses, and I guess that allows you to identify: this is really important to the success of the game, so it's worth a lot of time, versus here's a whole bunch of low-priority hypotheses, it's not that important if they don't work, so we shouldn't give that much time to it. I don't have a definitive answer to this, but I think that process of ranking, before you start, how important it is that we play test this thing helps. And then you could do a similar thing when you've got the results on the other side: you could look at the severity of the issues and then decide, okay, there's a whole bunch of critical issues here and we need to put a bunch of attention into this, versus we've only got some low-priority issues, these are nice-to-haves, but when stacked up against all the other production tasks we need to do, this isn't the biggest deal. I guess it is going through that process of ranking and prioritising, which this community will do so well.
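To make that ranking step concrete, here is a minimal sketch of scoring playtest findings by severity and by how many players hit them, then sorting so the team's limited fixing time goes to the top of the list. The findings, the scale, and the scoring function are invented for illustration.

```python
# Sketch of prioritising playtest findings: severity times reach.
# All findings and numbers below are invented for illustration.

findings = [
    {"issue": "Players miss the crafting menu", "severity": 3, "players_hit": 7},
    {"issue": "Tutorial text typo",             "severity": 1, "players_hit": 9},
    {"issue": "Boss fight feels unfair",        "severity": 2, "players_hit": 4},
]

def priority(f: dict) -> int:
    # Simple product of severity (1 = minor, 3 = blocks progress)
    # and the number of players who encountered the issue.
    return f["severity"] * f["players_hit"]

# Highest-priority issues first; everything below the cut-off can be
# weighed against other production tasks, as described above.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):>3}  {f['issue']}")
```

The exact formula matters less than having one agreed scheme, so that "should we spend more hours on this?" becomes a comparison of scores rather than a debate each time.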
Juney Dijkstra 1:01:15
Rulon asks: how do you find the process differs across platforms? So for example, desktop versus mobile? That's a very broad question, but just some highlights, maybe.
Steve Bromley 1:01:25
Yeah, I guess the main difference on different platforms is the methods we have available to us. Another difference is the type of research objectives. If you're working on a mobile game, the context of play, where people are playing, is much more important than it is on console; on console, you can assume someone's sat in front of their TV and it's got their full attention. For mobile games, I'd want to understand more about where they play, how that fits into their life, and how long they have to play. But the real implication, I guess, is on the method. Some of the methods we talked about, like live observation, are really easy to do if you bring people to you, but for a lot of these platforms they're really difficult to do remotely. It's not impossible, Parsec exists, but getting someone to play test a console game from their home is a technical challenge beyond what we've talked about today. Similarly for mobile, there are some technical implications: how do we get the build to them? How do we make sure they're recording on their mobile and can capture what happens? So I think it does influence your method choice. But ultimately, the process we talked about, of understanding your objectives, deciding the right method based on the constraints you have, gathering data, and dealing with that data, is the same regardless of your platform or what you're testing.
Juney Dijkstra 1:02:57
Would you say there are similar kinds of differences when testing different genres? Are there highlights there too?
Steve Bromley 1:03:03
That's a good question too. I think the difference between genres is often about what teams want to know about their game. The most significant difference, I guess, is that if you're working on a mobile game, especially a free-to-play mobile game, you're tremendously interested in retention, keeping people for 30 days, for example. That means the type of study you want to run and how you want to approach it is completely different from other genres where, thinking about a boxed console game again, the type of thing people want to know is usually: do players understand where they're meant to go, what they're meant to do, and some of the mechanics in the game? So I think genre influences your research objectives and what you hope to learn from those play tests.
Juney Dijkstra 1:03:55
Yeah, it's an interesting topic, though. And then another question: is it okay to do an A/B play test, or would that just be confusing?
Steve Bromley 1:04:02
Sorry, could you repeat the question? What type of playtest?
Juney Dijkstra 1:04:06
An A/B test. So play testing, but then with multiple different versions?
Steve Bromley 1:04:12
Yeah, fantastic. That wasn't one of the methods I talked about, but it definitely is a play test method that exists: half the players see one version, half the players see the other version. A couple of thoughts on that. I guess it's very relevant for mobile games in particular, because they're closest to traditional tech development. It does often produce challenges, because with an A/B test what you're trying to learn is measured quantitatively: do players spend longer in the game on this version versus that version? Do players rate the game higher on this version versus the other version? Those are quantitative questions, and you need to see a lot of players before you can get statistically significant answers. But again, sometimes that's the right method for the type of objectives you have. One challenge, I think, with A/B testing is that when you're running it, you've pre-decided on two different solutions and you think, okay, it's either going to be this one or that one. Sometimes that can be a bit limiting, because it's a method where you don't see players play for real and you're not asking players questions, so you sometimes lack the ability to get outside inspiration: we thought the problem was between these two versions, but actually, when we listen to players, they're talking about something completely different, and our major issues are somewhere else entirely. So I think it can lead to too much focus. But combined with other playtest methods to mitigate that, it is a really important tool to have in your toolkit.
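To illustrate why A/B playtests need a lot of players before differences become statistically significant, here is a minimal sketch of a two-sided two-proportion z-test on an invented day-1 retention metric. All counts are hypothetical.

```python
import math

# Sketch of a two-sided two-proportion z-test: did versions A and B
# produce genuinely different retention, or could the gap be noise?
def ab_significance(kept_a: int, n_a: int, kept_b: int, n_b: int) -> float:
    """Return the two-sided p-value for 'A and B retain differently'."""
    p_a, p_b = kept_a / n_a, kept_b / n_b
    pooled = (kept_a + kept_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail

# 12 of 20 players retained on A vs 15 of 20 on B looks different,
# but with so few players the test can't tell the versions apart:
print(ab_significance(12, 20, 15, 20))      # ~0.31, nowhere near 0.05

# The same retention rates with 200 players per group are significant:
print(ab_significance(120, 200, 150, 200))  # ~0.001
```

This is the practical upshot of the answer above: with a handful of playtesters an A/B split rarely settles anything, which is why it suits large mobile audiences better than small lab sessions.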
Juney Dijkstra 1:06:00
This question was a little bit longer, but a few of its parts were already answered earlier in the talk, so I would recommend the person who asked it listens back to the start. There was one particular question in there, though, that I'm also super interested in myself: do you feel that cultural differences play a role in the approach that playtesters take?
Steve Bromley 1:06:24
I think that's a really important topic, and definitely, yes. An example of this: during my time at PlayStation, our team was based in London and we tested primarily with European players. There were equivalent teams in San Francisco and in Japan. I don't know a huge amount about Japanese culture, but the approaches their research team had to take were significantly different from ours. I've heard, again I don't know how true it is, that it's culturally a lot more difficult to give direct negative feedback in Japan, for example, than it would be in Europe or in America. Because of that, some of the methods we would use, like asking people "what's the worst thing about the game?", just wouldn't be productive. So yes, I think cultural differences do exist with players, and also with players' willingness to give feedback and how they approach feedback. Again, that's probably an important thing to recruit on. If your game is relevant to a variety of cultural backgrounds, making sure you're recruiting players from those backgrounds, and then handling that feedback in a way appropriate to each culture, is probably an important part of the process. That was a long answer, but I think the answer is yes: cultural differences do exist, and we need to be aware of them when designing and running play tests.
Juney Dijkstra 1:07:58
I honestly wouldn't expect otherwise. I mean, it matters so much within game development; it would be strange if it didn't matter in the play tests that we run. We're close to wrapping up, so I'm going to be a little picky with the last questions. Is it okay to automate or generate logs during play tests to find more information about the process that players follow, or do you feel that's too specific?
Steve Bromley 1:08:24
Oh, no, I think that's great. People have probably seen talks from Naughty Dog about how they've approached that, or talks from Bungie about how they've approached that. Again, the important thing is to start with: what do I want to learn from this play test? Sometimes that type of analytics or telemetry, where the game is measuring player behaviour, is the right way to answer that. But we need to start by reflecting on: what's our hypothesis? What data do we need to see whether it's working or not working? And then use that to decide: should we be observing players? Should we be sending them a survey? Should the game be tracking their behaviour? And then make that decision based on the hypothesis. But yes, I think it's a really important method. There's a good book, I can't remember the name of it now, that came out last year from Oxford University Press. I think it's called Game Data and Analytics, something like that, which is a great resource for learning more if it's a thing people are interested in.
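As a small illustration of that hypothesis-driven approach to telemetry (an invented example, not from the talk or the book mentioned), here is a minimal sketch that logs only the events needed to answer one question: do players open the crafting menu within five minutes? The event names, file path, and hypothesis are all hypothetical.

```python
import json
import time

# Sketch of hypothesis-driven playtest telemetry: log only the events
# needed to answer "do players open the crafting menu within 5 minutes?"
LOG_PATH = "playtest_events.jsonl"  # hypothetical path

def log_event(player_id: str, event: str, **data) -> None:
    """Append one timestamped event per line (JSON Lines format)."""
    record = {"t": time.time(), "player": player_id, "event": event, **data}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# Instrument just the moments the hypothesis cares about:
log_event("p01", "session_start")
log_event("p01", "crafting_menu_opened", seconds_in=212)

# Analysis afterwards: which players opened the menu within 5 minutes?
with open(LOG_PATH) as f:
    events = [json.loads(line) for line in f]
opened = {e["player"] for e in events
          if e["event"] == "crafting_menu_opened"
          and e.get("seconds_in", 1e9) <= 300}
print(opened)
```

Starting from the hypothesis keeps the instrumentation small and the analysis obvious, which is the point made above: the data you collect should be decided by what you need to learn, not the other way around.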
Juney Dijkstra 1:09:27
I think I was muted while I was typing; that was not the intention. All right, so wrapping this up with a final question. You've of course shared your Twitter and your email address, but a question came in asking if you have a LinkedIn account, and if so, whether you would be okay with being reached out to via LinkedIn as well?
Steve Bromley 1:09:46
Oh, yes, of course. Any way that you can find me, do connect. I'm on LinkedIn; I can't recall what my LinkedIn ID is, but it's the same picture as on this call. Follow me on Twitter, send me emails with questions about playtesting. I'll talk about playtesting endlessly. I send out that email playtesting lesson every month, so do sign up for those. Yeah, do keep in touch. I love talking about playtesting.
Juney Dijkstra 1:10:20
All right. And with that, we are wrapping up. Thank you so much, Steve; that was incredibly insightful. I would of course also like to thank our supporters, and again our event producer Angelo. Without all of you, this wouldn't have been possible. As mentioned before, this was recorded and we're going to publish the recording on the game production podcast as soon as we're able, and we will of course include a link to the slides. Thank you again, everyone.
Steve Bromley 1:10:45
Thank you very much.