
  • If it's like my family, I definitely have no subjects.

  • Hey Daniel, hey Eric, welcome back here.

  • Thanks so much.

  • Yeah, it's great to be here.

  • Hi Virginia.

  • Hello.

  • All right, let's kick off.

  • I wanted to start out with some reminders.

  • First, we have a book club coming up on Inspired in four weeks on August 7th.

  • I just re-read it myself.

  • It's a good read.

  • It's highly aligned with how I think about product management, and it does a good job of explaining why some of the things I also believe to be important actually matter.

  • So it's nice to have another voice explaining all of that.

  • So please do read that.

  • I think I'm going to update the new hire onboarding doc and ask all new hires to read this as well so that everybody in the team is on the same page with respect to this book.

  • Let's see.

  • Reminder B, remember there's this interview spreadsheet, CS and sales have populated that with a number of customer contacts for meetings.

  • Please do follow up on that.

  • I want to ensure goodwill with that team and follow up promptly with meetings with these customers so that that team can see that we're taking advantage of it.

  • Third reminder, we've got a little engagement survey.

  • I'm going to run this once a month in Q3, just to take a pulse given all the change going on.

  • Please do take a minute to fill it out.

  • It's five quick questions and then one free form where you can share whatever feedback you have.

  • Fabian didn't receive it.

  • I'm pretty sure I went through my emails.

  • Maybe it's on my end, but I'm happy to fill it out.

  • You didn't get it.

  • I need to get it to you.

  • All right.

  • I will.

  • I'll ask Jessica to resend that to you.

  • Anybody else in the same condition where you did not receive it?

  • I don't recall, but is there a way to put the link to the survey in the agenda?

  • Well, it's personal.

  • It's tied back to your user ID so we can track which team you're on and that kind of thing.

  • I do believe it's anonymous, but nevertheless, everyone has their own custom ID.

  • So I'll ask Jessica to send it to Fabian and Karina.

  • Anybody else?

  • I hadn't seen it, Scott, but I searched my email real quick and it looks like that's the title of the email.

  • So if you just search for that in your Gmail, you should be able to find it if you got it.

  • Pulse survey.

  • Okay.

  • If anybody else didn't get it, please ping me.

  • All right.

  • Next reminder, we have a goal of at least three customer interviews per PM.

  • There's an OKR issue out there.

  • If you haven't updated it lately, please do so.

  • And remember, we have three weeks left in Q2 to hit our goal.

  • So please do invest the time to get those set up and get at least three done if you haven't already.

  • Next one, category maturity page.

  • Last week, we talked about this.

  • Josh did a great job of creating some new views, one of which is sort of this flow chart showing how mature we're going to be at a given point in time, which raised questions about whether we were forecasting that accurately.

  • If you haven't already, please go in and either confirm that it's accurate or update it.

  • Thanks to Kenny for creating that issue.

  • Somebody added the direction maturity page to the agenda.

  • You want to talk about that?

  • That was just me, just as you referenced it, so I was adding the link there.

  • That's all.

  • Folks, if you haven't seen the updates there for the charts, just check it out.

  • So it's a good way to get a sense for it. It's hard when it's in tabular form, but when it's charted, it's much easier to see whether it's achievable or not based on some of the trends.

  • And there's also, if you scroll down, stage level trends as well.

  • So you can see how your stage in particular is trending or set to be trending.

  • Great.

  • Thank you, Josh.

  • All right.

  • Some team updates.

  • We hired a couple more PMs.

  • We got a good rhythm going on hiring.

  • We hired Gabe Weaver.

  • He originally came through the growth funnel, but we have a really strong candidate for that fourth slot.

  • So we're going to target Gabe for a third Manage PM role.

  • The charter of that team is to be defined, but bottom line, we're going to have a third group in the Manage area and Gabe will lead that.

  • And then Dov Hershkowitz, we just hired him as the PM for APM and monitoring.

  • He's got a great background in monitoring and has most recently been at Elastic.

  • So thank you to everyone who's been involved in the hiring loop.

  • I know it's taking a lot of energy from everybody, but I think our hiring process continues to pick up speed.

  • 2B: I worked with Christy and David Sakamoto to change some language around customer results.

  • Just wanted to make sure you all saw that.

  • So there's the MR. Hey, Scott.

  • On that one, the diff highlights what is new content, I believe, and there's one section that is great.

  • I can totally understand why we would add that about prioritize ruthlessly.

  • But then the rest is, I guess, a bunch of formatting changes.

  • And I don't know if there's new content in any of the dogfooding.

  • I guess the TLDR is the addition of that prioritize ruthlessly piece, or is there some other point we were trying to make in this change?

  • Oh, it's been a little while.

  • I think there were a number of changes, but before the handbook basically read that internal feedback is worth 10 times more than external feedback.

  • And I understand why we want internal feedback because of dogfooding and using our own product.

  • It's a great channel for feedback, but I think it was sending the message that customers weren't nearly as important as internal opinion.

  • And both Christy and I want to move off of that position.

  • Like we should be customer first and treat our own teams as a customer.

  • But I don't want people to interpret that our own internal opinion is worth 10 times more than a customer's opinion, if that makes sense.

  • So it was mostly language, wherever that showed up in the handbook.

  • Gotcha.

  • Okay.

  • The one comment that I had on this is that some of the text seems like we should focus on core competencies as opposed to new scope, as in we should focus first on what we're best at.

  • So anyways, that's one thought I had on this.

  • I don't remember that being the point of it.

  • Maybe it reads that way.

  • I don't know.

  • Feel free to continue to suggest tweaks.

  • The point was, let's prioritize and do what matters most first.

  • Just it's kind of what I've been preaching the whole time.

  • Like let's, in your area, wherever that is, do what matters first.

  • Don't try to do it all at once.

  • We're going to have to work our way through.

  • That was the point.

  • Yeah.

  • And I don't know if this is a follow-up issue in the way you described it.

  • It doesn't seem controversial, but I will say there was a big discussion and a recent initiative from Sid and other leaders that we should heavily prioritize dogfooding because there are parts, there are teams within the company that were not utilizing our features.

  • And we wanted to make sure that the product team was responsive to requests from them.

  • It's a little bit different than saying it's about our internal opinion.

  • Like we had always said we should validate it.

  • So that clarification is good that we want to make sure it's about us saying this is in line with where we want to take the product and where we're hearing customers.

  • But if an internal customer wants it, we should, the original thinking was that we should emphasize it.

  • I just want to check whether the intent was just to clarify that same position, or whether we're saying we should actually pull back from the push for more dogfooding.

  • No, I would, please don't conflate the two.

  • We very much still want to dogfood.

  • I think the point is when you're thinking of customers for your thing, think of our internal teams early, like you can get great feedback from them.

  • They have an incentive to work with you.

  • There's very little risk in rolling out things early to them.

  • So treat them like a customer and think of our internal teams early as you're rolling something out.

  • That's still very much the message.

  • But let's not over rotate on internal feedback or internal opinion.

  • Let's still seek external feedback too, because that's just one customer of many. Great.

  • Make sense?

  • Yeah, it does.

  • All right. 2C, customer discovery training coming soon.

  • Sarah O'Donnell and her team are going to do a bunch of sort of quick videos on a variety of customer discovery topics.

  • So super excited for that.

  • They should start dropping any day now, I think starting this week.

  • And so we'll release those to you as they come out.

  • We'll embed them in the how we work description on our team page as well.

  • All right.

  • Number three, 12.2, kickoff feedback.

  • Josh, thanks for leading the charge.

  • I thought you did a good job of emceeing and sort of adding color commentary in between.

  • I thought the screenshots definitely helped.

  • There were a bunch that did not have them.

  • I was wondering why.

  • Is it just because we're not there yet on many of these?

  • Yeah? Yeah.

  • I mean, some of the commentary, I don't know if Nicole added that, but yeah, many of the issues were saying we're going to do UX, front end, and back end in the same iteration.

  • So it hasn't started.

  • And for some, I can think of a number where there just aren't appropriate screenshots, or at least there weren't screenshots or mock-ups created in advance, because front end was going to work on it without a mock-up.

  • Okay.

  • I'd love to get to where we're a bit ahead so that we'll have more of these earlier and hopefully the customer discovery flow will get us further ahead on that.

  • In my case, some of the features also just have no UX component, as in no UI component that could be screen-shotted.

  • Understood.

  • Yeah.

  • I don't expect everyone.

  • I mean, use your judgment.

  • If it doesn't need it, fine.

  • But where we do need design, it'd be great to get at least a month ahead so as we roll into dev, we have that to offer them.

  • Scott, just a quick question to you.

  • How do you feel about presenting Balsamiq or super lo-fi mock-ups on the kickoff call?

  • Fine with that.

  • Okay.

  • Because that could be an option, too, for PMs that are waiting for UX to work in the same sprint.

  • And I know that Plan's done a pretty good job, at least in the past, of kind of running ahead of UX and saying, like, hey, this is kind of what I think I want this to look like before spinning UX cycles on making a more hi-fi mock-up.

  • So just to- If you think it does a better job of describing it than the issue itself, then use it.

  • I think in some cases, like a picture can be worth a thousand words.

  • I mean, no matter how many words you throw at something, it's like, you know, for example, one of the things I reported on for the release kickoff meeting was expanding the Epic view in the roadmap.

  • And like, those are basically just a bunch of buzzwords put together that you're like, okay, what does that mean?

  • Expand Epic.

  • And I'm just- I literally thought on that one for like 20 minutes saying, how do I make this issue title like more descriptive for customer value?

  • And it just came down to like, that is the functionality we're adding.

  • What does that mean?

  • Oh, here's the screenshot.

  • You can see that we're going to add a dropdown.

  • You can see the issues and child epics that are attached to that epic.

  • And in that case, I was like, I'm so thankful I have a screenshot, even though that one is actually not a hi-fi mock-up.

  • It's more lo-fi.

  • It was a little bit pieced together.

  • So yeah, I think like in general, there's a lot more value if we can show something like that.

  • So product managers, you can consider that.

  • You should feel free, you know, you're empowered to take a tool that you're comfortable with.

  • Even if it's just Google Slides, and make something that gets you at least part of the way there in terms of what you want the experience to look like.

  • Yep.

  • Perfect. 3C, I thought the talk track shifted.

  • It was definitely more problem focused.

  • I noticed a number of speakers really trying to zero in on that, which is perfect.

  • Some of them could have been more problem focused, I thought.

  • So just keep considering that as you, you know, it's important to be able to pitch these things in ways that people that aren't close to it can understand.

  • And so just think about that.

  • How do I explain this to someone who's cold, who doesn't know a darn thing about this?

  • Why should they care?

  • Getting that crystal clear in your thinking is going to be important no matter what.

  • So it's time well spent.

  • Hey, Scott, this is Karina, just to add to that, if you don't mind.

  • I think this has always been a challenge in product, even before I joined GitLab: for many people, it's how to get there on some of this terminology when those of us presenting have deep technical backgrounds.

  • So my thought would be, is there a way that you can start sharing, you know, or applauding good examples of this so that the product team can start to kind of ruminate on this and develop that skill if we're not there yet?

  • Yeah, I thought Luca's were very well framed up.

  • Those two popped out at me as, yeah, that's the problem we're trying to solve.

  • Check those out.

  • I'll look through for some other examples.

  • Thank you for the suggestion.

  • All right. 3D, we went long.

  • We just have a ton of speakers, which I love that lots of people get a chance to speak.

  • So I'm good with that.

  • But we're going to have to limit the number of items, probably.

  • So it looks like there's some other ideas in here, perhaps.

  • Themes.

  • Yeah, I mean, if there are some that relate to each other, you could tell a story, hey, we're trying to improve this and then A, B and C tie back to it.

  • I think it's OK to be pretty brief in your description as long as you're hitting what it is.

  • And if somebody is really interested, they can dive deep.

  • Thematic is a good idea.

  • Recorded video, if you really want to go deep, maybe it's technically complex.

  • That's a great idea.

  • And then you can just cover the customer value at a high level and leave the detail to the video.

  • Watch statistics.

  • I think Josh looked this up last time.

  • I think he said there were a thousand.

  • Oh, there we go.

  • Kenny's putting them in.

  • So somewhere between 500 and a thousand.

  • To kind of add to the time point, just as feedback: I was timing myself this time, and I had two features listed and I hit three minutes and 14 seconds.

  • Obviously, I can shorten that.

  • So when we talk about, you know, I think somebody mentioned doing two or cutting it down.

  • It's interesting that I landed there with the two that I chose.

  • Yeah, that feels good.

  • I mean, I don't know if that's about average, but with how many speakers we've had, it will probably have to be a couple of minutes, max, per person.

  • I mean, Eric pointed this out in the next line.

  • I do think we are due for a rethink of how we're doing the kickoff, because next month we're going to have 25 people trying to give content.

  • And even at two minutes each, you're already over.

  • So, yeah, maybe we expand it.

  • I will give a shout out for Jason.

  • I know he's on paternity leave, but he created a video.

  • I think the original intent of the kickoff was actually just that, as a company, we had a retrospective immediately followed by a kickoff.

  • And we just decided to post that on YouTube.

  • We now post a whole bunch of content on YouTube.

  • So, maybe just having what you would normally do for your grooming or kickoff within your individual group posted to YouTube, and us having a specific channel for people who want to follow it.

  • Anyway, we should discuss it in an issue and come up with something, I do think, prior to next release kickoff.

  • Just to evaluate alternatives to the format?

  • Yeah, I mean, even if we said every person has one minute, I feel like we're doing a disservice, because we're now highlighting much less just because we feel like we have a time constraint and need to keep it to one synchronous 30-minute block, when there isn't really a need to do that.

  • Okay.

  • Yeah, plus one to revamping it.

  • I think we're trying to, it's like, it's got so many jobs right now that we're not doing a good job at any particular one of them.

  • But I think the most important customers for it are internal.

  • And it's just about communicating internally, because, like, people attend that thing, man.

  • We had like 50 people on the Zoom call alone, not even counting YouTube, and people were asking about what happened to the YouTube link and things like that.

  • So it's well attended internally, I think, for alignment.

  • Let alone the, you know, marketing value of, sort of, a release.

  • I mean, for external customers, it kind of feels like you're better off having a webinar or live stream on the release day.

  • Right.

  • Yeah, maybe the externally focused one would be more about what we just shipped.

  • There was a webinar that used to happen called release radar.

  • I think I participated in two or three of them back to back, and they were pretty poorly attended, in my experience.

  • And I think they actually got ended by the product marketing team for that reason.

  • I'm sure someone from that team could actually give feedback.

  • But I think one thing about the time limit is it's really hard to motivate problems in a short amount of time, particularly when they're very technical. As product categories grow in maturity and sophistication, the problems we're solving become more and more specific.

  • And so motivating those specific reasons, why we're going after this specific tiny piece of a very mature category, is hard to do in 30 seconds in a way that lands.

  • And so if we want to do that better, that's going to put more and more pressure on communicating a reasonable number of items, I think.

  • Okay.

  • Thank you all for the feedback.

  • I like the idea of creating an issue and perhaps tweaking the format before next month.

  • I also like the idea of asking internal and external constituents what they like or don't like about the format.

  • Yeah, just one final thought on that.

  • Like, I love that it's a half an hour.

  • I'd almost rather pick particular categories over lengthening the time, just because I have a feeling that if you want people to watch it consistently, it's going to have to stay in that block, but that's just me.

  • So if, you know, other customers are, you know, saying they would like the larger block, then that's the right way to go.

  • So that's, that's where I'd love to get feedback in some fashion to say, okay, you know, here's how we should change it.

  • But we clearly have grown breadth-wise.

  • We've gone so much broader that it's going to be hard to cover all the topics in it.

  • Okay.

  • Thanks, Kenny, for starting the issue.

  • James over to you for number four.

  • Yeah, I just thought I'd share this. I think many on this call haven't heard Mark Cunspat speak about product discovery sprints, but he advocated for this quite a number of times previously, from his experience running these at a prior company.

  • So the idea is kind of different to, I guess, a UX discovery sprint, which I think Fabian linked one of the books about, where it's really focused on UX iteration and research.

  • The product discovery sprint is more focused on actually building something, iterating on something that's built, and trying to get to some sort of MVC really quickly by making the process more synchronous.

  • So the Source Code group is going to try and do that around file-by-file diff navigation to solve performance and usability problems in 12.3. And I thought it'd be interesting to share because internally we've been wrestling with how to make this work well in an async/remote environment.

  • So we're looking at trying to confine the participants to a specific time zone, so that we can all be available with a significant amount of overlap.

  • But that's also difficult because it automatically excludes 50% of the team who are geographically remote from any of their peers, and we only have one UX designer, who's only available in the European time zone.

  • So some interesting challenges there.

  • If it goes well, we're going to try and replicate it a release or two later on a different problem that is also really complicated and hard to make progress on quickly.

  • But I'll share any findings we have.

  • And if anyone's interested in discussing that with me more, put a meeting on my calendar or drop me a message.

  • This is great, James.

  • By the way, I think the UX team is going to run one.

  • Well, let me just say we have the option to run one with Google Ventures, who's one of our investors, and that Sprint book that Fabian linked to was written by a guy from GV.

  • They did hundreds of these things for their clients.

  • They know what they're doing.

  • So if we get a chance to do one with them, we should. We're going to have to figure out how to do it within our basic model, though.

  • So whatever you learn from yours, James, please feed that back. Super interesting topic.

  • I think if we could get good at this asynchronously, that would be a breakthrough.

  • Yeah, I think one other interesting challenge is that the sprint terminology is kind of challenging, and it's not sustainable to be doing design sprints or discovery sprints on a daily basis, whether we're in person or not.

  • It's not scalable to actually sprint all the time.

  • So choosing the right tasks, choosing the right time is I think one of the other challenges.

  • I agree.

  • Yeah, you don't want to do this for everything because, well, if you follow it to the letter, it takes a whole week and you're totally dedicated to it, which is amazing for focus's sake, but you can't get anything else done.

  • So depending on how we structure this, it would need to be done for things that are really big unknowns, where dedicating a big chunk of time like that is worth it, and not everything clears that bar.

  • And I think it's also most relevant for stages that are in the very beginning. One of the biggest examples from Google Ventures was, obviously, that solving clinical trials for the world is a super complex problem.

  • So they just figured out, what is the thing that we can do so that we can start getting there?

  • And I think those are the problems that a sprint is for. We used it pretty successfully at my last company around pricing and packaging, and ran a bunch of interviews with customers on that, so I've seen it work.

  • All right.

  • Okay, Christopher.

  • Number five.

  • Yeah, just want to call out that over the past month we've had a significant number of incidents, and that affected at least one customer's revenue potential.

  • And because of that, you know, we've had some focus from an exec leadership perspective.

  • So I encourage everybody to look at that document and kind of look through it. In particular, there are a couple of things from an engineering perspective to make you aware of. One is that we started an infrastructure-to-development board.

  • On that board we're going to start matching issues up and trying to make sure that those get prioritized highly where appropriate, particularly for anything that affects performance around these issues.

  • The other issue that I put in there, which is listed specifically, is around prioritizing performance and availability work.

  • So one of the significant features of this particular recent outage last week was that the Redis server apparently can't handle the load anymore.

  • And we started digging into it.

  • We found a bunch of stuff that we hadn't checked.

  • Like, for instance, as an example, our JUnit test reports were basically getting cached, and there was no limit on the number of unit test results that could actually be cached.

  • So we're getting these blocks of several megabytes of data that had to basically be transferred around in Redis. That's really what's affecting its performance overall from a caching service perspective.
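For illustration, here is a minimal sketch of the kind of guardrail being described: capping how large any single cached payload (for example, a serialized JUnit test report) can be before it is written to a shared Redis-style cache. The function name, the 1 MB threshold, and the dict-backed stand-in for Redis are all hypothetical, not GitLab's actual implementation.

```python
import json

# Hypothetical cap on any single cached payload, so one feature (e.g. a huge
# JUnit test report) can't push multi-megabyte blobs through the shared cache.
MAX_CACHED_BYTES = 1 * 1024 * 1024  # 1 MB; illustrative value only


def cache_test_report(cache: dict, key: str, report: dict) -> bool:
    """Serialize a test report and cache it only if it fits under the cap.

    `cache` is a plain dict standing in for a Redis-style key/value store.
    Returns True if the report was cached, False if it was skipped.
    """
    payload = json.dumps(report).encode("utf-8")
    if len(payload) > MAX_CACHED_BYTES:
        # Skip caching (or truncate/summarize) rather than shipping megabytes
        # of data back and forth through the caching service on every read.
        return False
    cache[key] = payload
    return True


# Usage: an oversized report is rejected instead of being cached.
store: dict = {}
big_report = {"cases": [{"name": f"test_{i}", "status": "passed"} for i in range(100_000)]}
print(cache_test_report(store, "pipeline:123:junit", big_report))  # False for this payload
```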

  • So consequently, Scott, I sent that to you.

  • I hope that's okay.

  • It feels like we need your help in regards to how we best make sure that we get this kind of thing going systematically. And I just wanted to make sure that everybody was aware and kind of open it up for discussion.

  • In case there are any questions or any early feedback on it from that perspective.

  • I added some comments to it, Christopher.

  • Okay, I haven't had a chance to look.

  • I apologize about that.

  • Can I ask, and maybe Mac, this is a question for you.

  • Do we categorize performance issues as bugs?

  • We do have a performance label, but they should be under bugs.

  • Okay.

  • This is an example where oftentimes the way we would treat performance is reactionary.

  • This is trying to think about it more in a proactive way.

  • So like as an example.

  • I'll give a horrible example, but I worked at Amazon on tags.

  • Originally, when Amazon created tags, they were expecting customers to just label, you know, certain instances.

  • And that was it.

  • And it turns out that customers started using like 20 or 30 or 50 tags, and they're like, what the heck's going on, and they realized tags were being used to basically share environment information.

  • So they could put the same drop of code on two different VMs and they could behave differently based on the tag.

  • Which was a totally novel way for customers to use it.

  • So then they had to basically limit the number of tags customers could use, because it wasn't scaling with the system effectively. So this is kind of another example where I think we've got to start thinking in terms of, you know, when we create something new, a new feature or piece of functionality, what's the cost associated with that?

  • Right?

  • Because it does cost something internally, and I'm not asking product managers to necessarily think in terms of the exact bytes, but I am starting to think in terms of what the expectations are. Because, as an example, if we went back and looked at JUnit tests and reporting, if we said unlimited, that's a tough engineering call, right?

  • Particularly since, I guess, it's free right now for customers, is my understanding.

  • We also don't have a limit on the number of repos mirroring.

  • We don't have a limit on that, and that seems dangerous.

  • Yeah, so I guess I would comment, you know, I think the product team is expected to prioritize all things and to understand them deeply, whether they're a security issue or a performance concern. I think what you're highlighting is how to be proactive.

  • I don't know if the product team would immediately know the impact of a proposed change, but maybe that's an opportunity for our dot-com infrastructure or SRE stable counterparts to be involved in vetting and looking at issues early in the pipeline.

  • Yeah, or let's say we're implementing a feature like mirroring from scratch. The first question we should be asking is how many mirrors a customer is expected to be able to support, and what we want to start charging for if they get above a certain limit. And right now we don't.

  • And you could argue that scaling is just as much a reason for customers to start paying us as feature sets. That's kind of the argument I would be making, because those things cost money, whether we like to admit it or not.

  • Yeah, Christopher, I would agree with you on what you're trying to shape up and call out here, in the sense of, you know, going through pages, for example, the performance of getting those pages loaded is not great, and I don't know if we set out originally to track some of those performance things.

  • But I think that performance, and to your point, Kenny, I think performance should be incorporated somewhere as we move forward, and it's something we should be thinking about for scalability across the board.

  • Because it's just as important as bringing forth that really cool thing to them: that the really cool thing works, and people will stay there to use it.

  • I think, just as a side note, we have something in the product handbook that I read a couple of days ago on performance, something like fast applications are always, you know, more usable, and I think that's definitely important.

  • And I also think that gitlab.com is massive. I think we have four million users, and, for example, for Geo, I know that only by actually interacting with the infrastructure are we getting feedback on some of the performance bottlenecks that we are just not seeing otherwise, right?

  • And so I think that's actually also really valuable, and in that regard, maybe also, like, again, you know, dogfooding these things helps, and I think with the combination of CD we may hit a lot of those things.

  • Yeah, and the dogfooding thing on that front is a little confusing to me.

  • I met with Maren to talk about that, and, you know, there's sort of this mentality of looking at dot-com first, or leading with dot-com, for scalability, and it's not really clear to me where we're going, or how we approach making sure that scalability for dot-com stays intact, whether we're starting with dot-com or starting somewhere else.

  • From a dogfooding perspective, I'm pretty sure the handbook says, or at least the guidelines used to be, that new features were meant to be available on gitlab.com and self-managed at the same time, and that there used to be a production readiness checklist that I think the engineering team was responsible for. I know that when we launched Geo, there was a production readiness process that we had to go through, and certainly with Gitaly we can see these things on the Source Code front. We're regularly considering scale, like moving terabytes of data from the database into object storage, and considering all these sorts of things. Performance is very much a feature and should be considered that.

  • And I think, particularly in categories where adoption is still growing and in early stages of maturity, performance, understandably, is less of a concern because there's lower usage.

  • So solving scale at an enormous level doesn't necessarily make sense commercially when usage is small, so there is a bit of a juggling act here, because we don't want to build a product for billions of users if there are only, I don't know, 20,000 users experimenting with our newest feature.

  • So there's an iterative approach that needs to be taken. But I would agree that, particularly coming from a team that's digging out a lot of technical debt and solving a lot of performance problems all the time, we've probably historically not been very good at picking the right moment to pay off technical debt and address performance problems before they become fires. So, yeah.

  • To that point, just real quick, James.

  • Sorry, Scott.

  • I think some things are obvious, like when we look at our progressive delivery strategy. If you look at something like feature flags, that's something I would imagine is going to be a key feature that we're going to bring forward, so I feel like that should be a gimme, whether adoption has struck yet or not.

  • But the second thing that's not clear to me, again, is that when I was interviewing Maren about dogfooding, I noticed Maren saying things like, they didn't come to us first.

  • And so this is not scalable, or this is not usable for us internally. And so the approach and process moving forward to dogfood in the right spots is not clear to me, or what the best practice has been, or if anybody's cracked that.

  • Yeah, I can give a concrete example, because I did a call with Maren a few months back around confidential merge requests. We knew that customers wanted to resolve them, we knew that we wanted to do that, and we're trying to get rid of dev.gitlab.org. So I had a video call with him and a bunch of async conversations, like, I've got these ideas for what the first iteration looks like, and then we did a few calls, worked through them, and worked out which were the things that needed to happen.

  • And so we're shipping the first iteration of that in 12.1. But we coordinated with them, and I spoke with Maren quite a lot to make sure whatever we were building was useful and would solve the security problems that they had, as well as our own ones. So, yeah, I agree.

  • It needs to be proactive. We're not going to ship something that's useful, or that the infrastructure team is going to want to opt into, unless we've had a conversation with them in advance.

  • All right, 30 seconds.

  • Can I add one like last tiny point?

  • It's sometimes really important for customers as well that we're running it on gitlab.com before they adopt it.

  • So one example is we built SSL/TLS support in Gitaly.

  • But it's not turned on on gitlab.com. And so the customer that we built it for isn't using it, because they're waiting for our production team to turn it on, because they want to see, before they turn it on for their enormous instance, have we actually proven it at the world's largest GitLab instance scale. So I think that's one important reason why we always need to make sure that features are on and are getting used on gitlab.com.

  • Just, again—

  • Sorry.

  • I got a couple things.

  • I think we definitely need to have a stronger definition of done as part of progressive delivery, right? And so part of the definition of done is that it needs to run at scale on gitlab.com successfully, and not blow up the cost model or degrade performance.

  • And if it does, it's just going to be reverted, frankly, and that should be the bar for getting features across the line.

  • That doesn't mean for new features that have low usage, where obviously the impact is quite small, but it still needs to be within reason.

  • I totally agree that you don't want to overbuild on the first iteration, planning for millions of users, because that just doesn't make any sense. But, yeah, I think that's one aspect. I think the other aspect, on your comment, Christopher, on pricing, and we can maybe have a follow-up here on a handbook update, is that it's interesting that customers will absorb the cost of compute on self-managed. So for them, if they want to have a ridiculous number of, you know, mirrors, then it's fine, because they're paying for their use case. It's all on their dime.

  • And so maybe a way to think about this is to have some level of control, so you can set it, if you want to, at the instance level, to have some way to control that behavior for when we're covering the cost of those things.

  • Um in a manner of behavior for when we're covering the cost of those things Um, but uh, but yeah anyways Thank you all great topic Christopher.

  • Uh, Please pile on that issue with thoughts on how to how to handle this.

  • I like your suggestion on definition of done josh All right, karina six and seven Yes.

  • So I submitted an MR for the product handbook yesterday, and we're going through this process of getting more self-organized in the Release area, and with our engineering and UX partners.

  • One of the things that we recognize, and it's documented in the issue below in number seven, is that our delivery percentage has not been great, which you've heard me talk about.

  • But the team has been on a ramp, and we need to self-organize around some method. What we found in the last prioritization for release scope is that we have a lot of oversized issues and features that honestly need a beat, or a release, to go through user research, maybe look at the code if they've never reviewed that piece of code before, or make some recommendations on the best way to solve it. So I put some thinking around that dual-track mindset, dual-track agile, kind of launching off of what User Experience has recently updated for dual-track agile. So feedback on that. And then the second piece is that the experiment we're running is leveraging a semi-dual-track agile approach just to organize our conversation: how we open issues for areas where we need a discovery beat, versus presenting an issue that is actually ready for delivery.

  • One thing that was interesting, Scott.

  • We were talking about the kickoff call and having some images and more to share. That's definitely where I think we'd like to be with Release: getting ahead of that curve and really having some concrete understanding and prototypes of what we're trying to present and deliver. But when we looked at going through that process, you know, this is really for complex things or heavy lifting, because it is about a 20-to-30-day lead time to commit. So we have some targets to improve.

  • You know, our hypothesis on leveraging this, you can follow it there if you have input. They kind of tie together, but I'd love input on the handbook piece.

  • Thank you, Karina, for creating these and sharing them. I think you're on the right track. In parallel, I've been working with Christopher and Eric and Christy to outline a high-level description of our software development life cycle, which will have two tracks, so there's sort of competing content there, or maybe they could be merged.

  • So thank you for doing this. I may slow-roll it a little bit to make sure that we have one way of describing the flow we'd like to go through, but thank you very much for getting it kicked off. Any questions for Karina? If not, Josh, over to you.

  • Yeah, just a quick announcement: I just went through and renamed the promise label to planning priority. The general meaning is largely the same.

  • Although we shouldn't be promising features.

  • And so this is just a way to flag it, and that way it's a reminder for PMs that this issue had some important conditional dependencies, so just be aware of it.

  • So you can feel free to use it. I did note in the label text that it should only be applied by product managers, and in particular the responsible product manager for that section, so it shouldn't get applied by CAMs or anyone else.

  • Awesome, I like that terminology a lot better.

  • Thank you, Josh.

  • All right, five minutes to spare. Anything else?

  • If not, have a great Tuesday. Adios.

  • Thanks.
