Friday, 20 January, 2017 UTC


Summary

Katie Gengler @katiegengler | GitHub | Code All Day
Show Notes:
  • 01:23 - Testing
  • 06:20 - ember-try
  • 14:11 - Add-ons; Ember Observer
  • 17:43 - Scoring and Rating Add-ons
  • 25:25 - Contribution and Funding
  • 27:41 - Code Search
  • 30:59 - Data Visualization
  • 32:27 - Change in the Ember Ecosystem Since Last EmberConf?
  • 34:35 - Code All Day
  • 35:39 - What’s Next?
Resources:
  • ember-qunit
  • liquid-fire
  • capybara
  • Selenium
  • appraisal
  • ember-cli
  • Bower
Transcript:
CHARLES: Hello everybody and welcome to The Frontside Podcast Episode 54. I am your host, Charles Lowell, and with me is Alex Ford. Today, we're going to be interviewing Katie Gengler. I remember very distinctly the first time that I met Katie: it was actually at the same dinner where, I think, I met Godfrey, at EmberConf in 2014. That was just a fantastic conversation that was had around the table, and I did not realize how important the people I was meeting were going to be in my life over the next couple of years.
But Katie has gone on to do things like identify a hole in the Ember add-on ecosystem, so she created Ember Observer. There was a huge piece missing from testing in a framework that spans multiple years and multiple versions: being able to make sure that your tests, especially for add-on authors, run against multiple versions. So she created and maintains ember-try.
She's a part of the Ember CLI core team. She's a principal at Code All Day, which is a software consultancy, and she's just an all-around fantastic woman. Thank you, Katie, for coming on to the show and talking with us.
KATIE: Thanks for having me.
CHARLES: One of the things I wanted to start the conversation with is something that's always struck me about you: there are a lot of people who, when it comes to testing, talk the talk, but you have always struck me as someone who walks the walk. Not just in terms of making sure that your apps have tests in them, or that your add-ons have tests in them, but talking to people about testing patterns and stepping up when there are huge pieces of the ecosystem missing, like ember-try. I remember this as something that I struggled with.
I was running up against this problem and all of a sudden, here comes ember-try, and you've been such a huge part of that. I want to know more about your walk with testing and how it permeates so much of what you do, because I think it's very important for people to hear that.
KATIE: I got really lucky right out of college. My first job was at a place with the kind of team people think of as mythical: XP-focused developers. So the first thing I was told is that everything is test-first, everything is test-driven. I was primarily doing Ruby on Rails at the time, but also JavaScript. At the beginning, we didn't have a way to test JavaScript, and there were a lot of missteps in the way of testing JavaScript until we came around to QUnit. I was using QUnit long before Ember even came along. It's been kind of ingrained throughout my whole career.
Michelle as well. Michelle is my partner in Code All Day. We're both very test-focused. I think that's what drew us to start a company together and to work together. Every project we're on, we try to write encompassing tests: test-drive everything, whether it's a project to upgrade or a project to fix. We try to write tests as a framework for everything that we're doing, so we know whether we're doing something right or not.
When it comes to ember-try, that wasn't entirely my own idea. That was something Robert Jackson and Edward Faulkner were looking for. I remembered the appraisal gem from Ruby; I had really enjoyed using it with [inaudible] gems that I had written for Rails, so I wanted it to exist for Ember, and I just kind of took it upon myself to do it. It was extracted from Liquid Fire; there were some scripts that would sort of test multiple versions, but it was rough. It wasn't as easy as it is today.
CHARLES: Yeah, it does speak to a certain philosophy, because if you come to a problem and it's difficult to test, you often reach a crossroads where you say, "You know what? I have a choice to make here. I can either give up and not write a test, or try and test some subset of it," or, "I can write the thing that will let me write the test."
It seems like you fall more into that second category. What would you say to people who are either new to this idea or new in their careers, and they butt up against this problem of not knowing when to give up and when to write the thing that lets them write the test?
KATIE: I almost never don't write the test. So if your suspicions are true, I will write something to be able to write the test. But there are times that I'm [inaudible] and sometimes I'm just like, "This is not going to be tested. This is not going to happen." Finding that line is pretty hard, but crossing it should be extremely rare. It's usually not where people think it is when they come to me; I'll work with a client and they're telling me, "No, it's too hard to write the tests."
A lot of times, it's not only how you write the test, setting up the test and learning how to write a test; it's the code you're trying to test that could be the problem. If you have very complicated, very side-effect-driven code, it's very hard to write the easier sort of tests, which might be Ember acceptance tests. Those are really on a level of integration, because you do have a little bit of knowledge of what's going on, and you have to work within the framework of what the Ember test wants you to do, which is that async is all completed by the time you want to make your assertions in the test.
That may mean looking at different tools, going back to something like Capybara or Selenium, and getting some sort of test around what you're doing so that you can replace the code that makes it hard to test at a lower level to begin with. I think a lot of people are just missing the framework for knowing what to do when their code is intractable; it's not necessarily the testing in the guides that's the problem. I think most people could go through a tutorial and write tests for a little to-do MVC app perfectly fine. But that's easy when you [inaudible] side of the equation, so if you're already struggling with code and you're not quite sure, even in Ember, it can seem very, very hard to write tests for that.
I think that's true with Rails as well. I think people who begin in Rails don't understand what they're going to be testing, especially if they have an existing app they're trying to add tests to. But fortunately, Rails, not long ago, got it into everybody's heads that your tests go along with what you're doing; it's just an ingrained part of the Rails community. Hopefully, that will become how it is with Ember. But a lot of people are slowly bringing their apps to Ember, so they really have a lot of JavaScript and they don't necessarily know what to do, or the JavaScript they've written has always been written with jQuery and a little bit of [inaudible]. They don't understand how to test that.
ALEX: How does ember-try help with that? Actually, I want to roll back and talk about what ember-try is and how it fits into testing. You mentioned the appraisal gem, which I'm not familiar with; I haven't done much Rails or Ruby in my life. But can we talk about what ember-try is?
KATIE: Sure. ember-try, at its base, lets you run different scenarios with your tests. At some point, I would've said it lets you run different scenarios of dependencies for your tests, primarily changing your Ember version, and that's pretty much what add-ons do. But a lot of people are using it for scenarios that are completely outside of dependencies: different environment variables, different browsers.
It's nice having one place for all these scenarios. If you just put them in your travis.yml, your CI configuration, you wouldn't as easily be able to run them locally. But with ember-try, you can do that locally. I found that it's kind of expanded beyond my intentions, beyond dependencies.
Primarily, it lets you run the tests in your application with different configurations. I could see running it with different feature flags; that would be something interesting to do, if that's something you use. Primarily, it just lets you try different versions. The appraisal gem lets you run tests with different gem sets, so you have a different Gemfile for each scenario you might have. That was definitely dependency-focused.
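[For readers who haven't seen one, here is a minimal sketch of what such a configuration looks like. The scenario names, version numbers, and environment variable below are illustrative, not from the episode, though the shape follows ember-try's bower-era config format.]

```js
// config/ember-try.js: a minimal sketch; names and versions are illustrative.
module.exports = {
  scenarios: [
    {
      // Pin an older Ember release via bower (the 2016-era setup).
      name: 'ember-2.4',
      bower: {
        dependencies: { 'ember': '~2.4.0' }
      }
    },
    {
      // Track the latest release channel instead of a fixed version.
      name: 'ember-release',
      bower: {
        dependencies: { 'ember': 'components/ember#release' },
        resolutions: { 'ember': 'release' }
      }
    },
    {
      // Scenarios can vary more than dependencies, e.g. environment variables.
      name: 'with-feature-flag',
      env: { SOME_FEATURE_FLAG: 'true' }
    }
  ]
};
```

[With a config like this, `ember try:one ember-2.4` runs the test suite against a single scenario, and `ember try:each` walks through all of them.]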
CHARLES: That sounds really cool. It almost sounds like you could even get into some sort of generative testing, where you're not specifying the scenarios upfront but have some mechanism to generate them, so you can surface bugs that would only occur outside of what you're explicitly testing for: randomly choosing different versions, environment variables, feature flags, dependencies and stuff like that. Had you thought of that?
KATIE: Randomization [inaudible], but ember-try really does have a kind of generative way of working now, and we're heading toward that. If you want to, especially for add-ons, you can specify this versionCompatibility keyword in your package.json and give it a semver string for Ember, and it will generate the scenarios for you and test all those versions.
These semver strings are pretty powerful, so you can name the specific versions you want, or you can give a range of versions and it will take the latest patch release of each, within reason, so you don't go too crazy and test every single one for that add-on. But I can definitely see something random being really cool: some testing thing that just tries to put random input into all of the inputs on a page. I've really been meaning to try that out. Sounds like [inaudible].
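[The versionCompatibility shorthand Katie mentions lives under the ember-addon key of the add-on's package.json. A sketch, in which the package name and the exact range are made up for illustration:]

```js
// package.json (excerpt); the name and the semver range are hypothetical.
{
  "name": "ember-example-addon",
  "ember-addon": {
    "versionCompatibility": {
      "ember": ">=2.4.0 <2.9.0"
    }
  }
}
```

[From a declaration like this, ember-try auto-generates one scenario per matching Ember release line and tests each of them, as described above.]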
CHARLES: Yeah, just to try and break it. I remember a world before ember-try, and I can't speak highly enough about it and about how many bugs it has caught in the add-ons that we maintain, because you're always working on the latest, hottest, greatest version of Ember and you're not thinking about what happens two point releases back.
There might be some subtle bug, not a deal breaker, that surfaces and breaks your tests, and the coverage has just gotten so much better. In fact, I think it's brilliant that it's now bundled with Ember CLI when you are building an add-on. You get it for free. It's one of those things where it's hard to imagine what it was like before, even though we lived it.
KATIE: And it was less than a year ago.
[Laughter]
KATIE: ember-try existed for a while before it was bundled with Ember CLI; it's only been bundled since last EmberConf or so.
CHARLES: Yeah, but it's absolutely a critical piece of the infrastructure now.
KATIE: I'm glad it caught bugs for you. I don't think I've actually caught a bug with it.
CHARLES: Really?
KATIE: Yeah, but I don't do a lot of Ember-y add-ons. I do a lot of Ember CLI-ish add-ons, and it can't change versions of Ember CLI. Not yet; we're working on that. I get some weird npm errors when I've tried it, but I haven't dug into it much yet.
CHARLES: I don't want to dig too much into the mechanics, but even when I first heard about it, I was like, "How does that even work?" Just replacing all the dependencies and having a separate node_modules directory and bower_components, and I'm like, "Man, there are so many moving parts."
It was one of those things that was so ambitious, I didn't even think it was possible. I didn't even think about writing it myself or whatever. It's one of those, "Wow, okay. It can be done."
ALEX: This exists now.
CHARLES: Yeah.
ALEX: Add-on authors are accountable now for making their add-ons work with versions a few point releases back, like you said, but it makes it so easy that the accountability is hardly accountability, thanks to ember-try. It's really amazing.
KATIE: What I'm laughing about is that what it actually does is not very sophisticated or crazy at all. For instance, for bower, it moves your existing bower_components directory to a placeholder, changes the bower.json, runs install, and then after the scenario it puts everything back.
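[In code, the swap-and-restore dance Katie describes looks roughly like the following. This is a simplified sketch of the idea, not ember-try's actual source; the two helper functions are invented for illustration.]

```js
// A simplified sketch of the approach, not ember-try's actual source.
const fs = require('fs-extra');

async function runScenario(scenario, runTests) {
  // 1. Stash the real dependency state in placeholders.
  await fs.move('bower_components', '.bower_components.ember-try');
  await fs.copy('bower.json', '.bower.json.ember-try');

  try {
    // 2. Write the scenario's dependencies and install them.
    await writeScenarioBowerJson(scenario); // hypothetical helper
    await runCommand('bower install');      // hypothetical helper

    // 3. Run the test suite against the swapped-in versions.
    return await runTests();
  } finally {
    // 4. Put everything back, even if the tests failed. A hard process
    //    kill can still skip this step, which is what the cleanup command
    //    (ember try:reset) exists to recover from, as discussed below.
    await fs.remove('bower_components');
    await fs.move('.bower_components.ember-try', 'bower_components');
    await fs.move('.bower.json.ember-try', 'bower.json', { overwrite: true });
  }
}
```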
CHARLES: But I don't know, it sounds so hard. It's intimidating. You've got all this state and you've got to make sure you put it all back. What do you do if something hits an error and aborts midway? I'm sure you had to think about and deal with all that stuff at some point.
ALEX: I kill my tests all the time in ember-try and I'm like, "Oops, I forgot I shouldn't do that with this."
KATIE: Yeah, it doesn't recover so well. It's pretty hard to do things on process exit in Node correctly, at least, and I don't think I've gotten it quite right. But there is a cleanup command. Unfortunately, there's the way it interacts with Ember CLI's dependency checker: when you run an Ember command, Ember checks to make sure all your dependencies are installed. If you still have the different bower.json and install hasn't been run, you have to run install before you can run the cleanup command, which is kind of a drag.
CHARLES: I have one final question about ember-try. Have you given any thought to how this might be extracted and made more generally applicable to the greater JavaScript ecosystem? I see this as an area where Ember certainly was a trailblazer, and some of these ideas came from Rails and other places. This could be more generally applicable, so had you given thought to extracting it?
KATIE: Yeah, we've thought about it some [inaudible] since we first did it, because we realized very early on that it doesn't deeply depend on Ember CLI; it only uses it as a command-line arguments parser, which doesn't seem too important. But there are some assumptions we get to make. With it being an Ember app, we know how an Ember CLI project is structured. Some of those assumptions I wouldn't really know for the greater node community, and some of those assumptions might not be possible at all, because they don't have the standards we have with Ember CLI-generated projects. There are generally certain things that are in place in Ember [inaudible] works for [inaudible], so there's no part of it that couldn't be extracted.
But I worry about some assumptions, like node modules always being in the directory that they're in, because you can have a linked node_modules above it. Ember CLI usually doesn't support that, but other places obviously have to. I realize that it could definitely happen, but I'm not so sure that I'd want to personally support it, because it's a bit of a time commitment.
CHARLES: Right. Maybe if someone from the outside wants to step in voluntarily, you might work with them, but you're not going to personally champion that cause.
KATIE: Definitely. I think it would be really cool, and I do think it will end up having its own [inaudible] parser eventually, just to be able to do things like different Ember CLI versions. As long as it's not part of Ember CLI, I think that would be less confusing, though. In theory, that can be done within Ember CLI still, but I'm not clear on that. I've had people talk to me about it and I haven't fully processed it yet.
CHARLES: Right. Alex, you mentioned something earlier that I had not thought about: technologies like ember-try keep the add-on community accountable and keep it healthy by making sure that add-ons work across a multiplicity of Ember versions, and work in conjunction with other add-ons that might have version ranges.
Katie, you've been a critical part of that effort. But there's something else that you've been a critical part of, that you built from the ground up, and that is Ember Observer. It is a different way of keeping add-ons accountable, but I think perhaps an even more valuable one, more of a social engineering way. Maybe we can talk about Ember Observer a little bit: what it is, and what gave you the insight that this is something that needed to be built, so that you stepped forward and built it?
KATIE: I'm definitely going to refer back again to the Rails community. I'm a big fan of Ruby Toolbox. Whenever I needed a gem, I would go there and try to see what was available in that category. There are various ratings they have on there: something like the popularity, the number of GitHub stars and the last time it was updated.
You can see a lot of the inspiration for Ember Observer in there. But maybe I should step back and explain Ember Observer. Ember Observer is a listing of all of the add-ons for the Ember community. Anything that has the ember-addon keyword will show up there. We pull it all from npm, and it shows you all that kind of information: when it was last updated, the number of GitHub commits, the number of stars, the number of contributors. We put all of that information and a manual review together to put a score on each add-on. You can look at it, and we've categorized them as well.
If you look at a category, say the category for doing models, you would see all of the different model add-ons and be able to look at them side by side, compare them and decide which one to use. Or if you're thinking of building something, you can go in there and be like, "This already exists. Maybe I should just contribute to an existing thing."
What gave me the idea for it is that I was looking at Ember Addons, which just shows you the most recently published add-ons for the ember-addon keyword every day, and every time I clicked on these add-ons I'd go, "They do the same thing," and it just seemed like such a waste in [inaudible]. People were creating the same things. Then I'd be clicking into one and be like, "Why did I bother clicking into this? It doesn't have anything. It's just an empty add-on," where someone was pushing an add-on just to try things out. So I thought it'd be nice if something filtered that out, and I happened to have some time, so I got started and dragged my husband, Phil, into it.
He's also an Ember and Rails developer, so that's pretty convenient, and my friend, Lew. Now Michelle works on it a little bit as well. That's what drove us to build it, and it's been pretty cool. I like looking at all the add-ons when they come up anyway. I feel like it's not any actual work for me; it's quicker than my email each day to look at the new add-ons.
ALEX: How many new add-ons are published every day on average?
KATIE: On average, it's probably four to six, maybe, but it varies widely. If you get a holiday, you'll get like 20 add-ons, because people have time off; if everybody's just feeling the grind, it's two. And you'll notice that the add-ons cluster, too: similar ones tend to come out in the same kind of week.
ALEX: You mentioned that an add-on gets a score. Can you explain that score and how you rate add-ons?
KATIE: Sure. The score is mostly driven by details about the add-ons. There are a number of factors that go into it, and it's out of 10 points. Five of them come from purely mechanical things: whether or not there have been more than two Git commits in the last three months, whether or not there's been a release in the last three months, whether or not they're in the top 10% of npm downloads for add-ons, and whether they're in the top 10% of GitHub stars for add-ons.
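[As a rough illustration, the mechanical half of that rubric could be computed along these lines. This is a sketch of the criteria as Katie lists them, not Ember Observer's actual code, and the field names are invented.]

```js
// A sketch of the mechanical scoring criteria as described in the episode;
// not Ember Observer's implementation. Field names are invented.
function mechanicalScore(addon, allAddons) {
  // Is this add-on in the top 10% of all add-ons for a given metric?
  function inTopTenPercent(metric) {
    const sorted = allAddons.map(metric).sort((a, b) => b - a);
    const cutoff = sorted[Math.floor(sorted.length * 0.1)];
    return metric(addon) >= cutoff;
  }

  let points = 0;
  if (addon.commitsInLastThreeMonths > 2) points += 1;      // recent commits
  if (addon.releaseInLastThreeMonths) points += 1;          // recent release
  if (inTopTenPercent((a) => a.npmDownloads)) points += 1;  // download popularity
  if (inTopTenPercent((a) => a.githubStars)) points += 1;   // star popularity
  // The remaining mechanical point, plus manual-review points such as
  // "are there meaningful tests", make up the rest of the 10.
  return points;
}
```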
ALEX: I know that I'm a very competitive person, and it applies to software too, not just sports or other types of competition. I remember a moment as an add-on author when my add-on had a 9 out of 10. I was just about to push some code and cut a release, and even before that happened, it went to a ten.
The amount of satisfaction I got was kind of ridiculous. But I like it. I like the scoring system, not just for myself but also for helping me discover add-ons and pick the ones that might be right for me. I'll check out basically any add-on that might fit the description; as long as there's a readme, I'll go check it out. But the score still helps, along with the categorization.
CHARLES: Correct me if I'm wrong, but I believe you can achieve an 8 out of 10 without it being a popularity contest. There are certain concrete steps you can take, like making sure that you have tests and you have a readme; that's a thing of substance. I don't remember what all the criteria are, but you can get a high score without getting into how many stars you have or whether you're in the top 10%. I think that's awesome.
But it does mean that if I see an add-on with a five or something like that, they're not taking those concrete steps, or it might not be as well-maintained. You know, that's definitely something to take into account. I'm curious whether there are any tweaks you've thought of making to the system. And this gets to the second part of that: what things had you considered and thrown out of hand as maybe not good ways to rate add-ons?
KATIE: We haven't thought about everything. I don't particularly like the popularity aspect of it, but it did feel necessary to include it in some way. The stars, in theory, are representative of interest, not so much popularity, though it probably gets at popularity as well, and downloads are, in theory, for popularity.
But the problem with downloads, and I've found this happening more and more frequently, is that if large companies start publishing their own add-ons and they have a lot of developers, they go through the roof on those downloads. They're getting those points just from their own developers, and I have no way of knowing if anybody else is using the add-on.
CHARLES: Probably their continuous integration with containers, right? If it's running on Travis or Circle, it's just sitting there spinning, pumping up the download numbers.
KATIE: Yeah, and that frustrates me quite a bit. But I haven't found anything else that's really representative of popularity. Unfortunately, with npm you can get the download counts, but you can't tell where they came from. There's no way to do that.
I simply would like to see the things that are popular rated higher than they currently are. Like you said, it's eight points you can get without popularity coming into it. But you do need collaboration with at least one other person to get those eight points: you can get seven by yourself, and then you need to have another contributor. If you only have one contributor, you don't get that point, because it's trying to be representative of a sort of bus factor. It's not truly that, though; you can have just one commit from somebody else to get it.
CHARLES: Of all the pieces, I think that's totally fair.
KATIE: Yeah, there are definitely a few other things I have in mind to bring into the metrics, but we're not quite there yet. I need to entirely refactor how the score is given, so it's not exactly out of 10. The idea is to have some questions and some points that are relevant only to certain categories of add-ons. Whether or not an add-on is testing against different versions of Ember might matter for an Ember add-on but not for an Ember CLI one; whether or not they have a recent release might not matter if it's kind of a one-off, like a Sass plugin or something more on the build-tool side that doesn't change very often.
CHARLES: Yeah, we've had that happen a couple of times, where we've got a component that just wraps a type of input. Until the HTML spec changes or a major API change happens in Ember, there's no need to change it. I can definitely see that. How do you mark it as something that changes infrequently? Is it just that the add-on author says it's done? Do you give them a bigger window or something like that?
KATIE: There are probably some categories that fall under that. For the input one, I think if it's doing something with Ember components, it probably wants to be upgrading Ember CLI at least every three months; in that case, it's probably fair to require an update within that period of time. But for some of the things that are closer to Broccoli than they are to Ember add-ons, it makes sense not to have that requirement. Maybe that's not the [inaudible] exact example of the kind of questions that [inaudible].
ALEX: An Ember add-on was the first time that I gave back to the open source community. It was my first open source project, and Ember Observer really helped me along the way by showing what the open source best practices are. I thought that was really cool. Now it sounds like, with some of the point totals, you're leaning toward Ember best practices to help Ember add-on authors along that way. I think that's really awesome and very, very useful. I would not like to see what the Ember add-on ecosystem would look like without Katie. It would be a very different place.
KATIE: Thanks. I'm glad it's had some help on that and that it's affecting add-on authors. I actually didn't originally think about that when I was first building it; I was really hoping to help the consuming of add-ons. But it really has driven people to find existing add-ons instead of building new ones, because they contribute to existing ones. It also drives them through the score because, as you said, people get very competitive. I really didn't realize what kind of drive the score would create, because to me, I'd be like, "Somebody else is scoring me? How dare you?" [Laughter]
KATIE: I have had people say that to me: "How dare you score me? How dare you score my add-ons?" Well, it's mostly computerized. Even in the review that is manual, the only thing that has any sort of leeway is "are there meaningful tests?" That's really the only thing, when I go through an add-on, that leaves room for the judgment of the person doing the review. Whether there's a readme: we have kind of a rubric, so that counts if you have anything in there other than the default Ember CLI readme. Whether or not there's a build is based entirely on whether or not there's a CI badge in the readme; that's for the reviewer to go look for [inaudible]. We hope to automate that so I don't have to keep looking for those. A lot of add-ons turn out to have builds when they don't have any meaningful tests, they just have tests, so that's kind of confusing.
CHARLES: What do you do in that situation? You actually manually review it, so that add-on would not get the point for tests?
KATIE: No, they don't get the point for tests, and if they don't get the point for tests, I put 'N/A' for whether or not they have a build, so it doesn't apply. A build doesn't mean anything to me if they don't have their own meaningful tests.
CHARLES: Right, that makes sense. A lot of it is automated, but it still sounds like it consumes some of your time, some of Phil's time, some of Michelle's time. I guess my question is: do you accept donations, or is there a way that people can contribute? Because I see this as part of the critical infrastructure of the community at this point. There might be some people out there who think, "Maybe I could help in some way." Is there a way that people can help? If so, I'd love to hear about it.
KATIE: We don't have any sort of donation or anything like that. I mean, we should. We consider it primarily just part of our open source work, part of our contribution to the community, because we also make a great deal of use of the community. Fortunately, it's not very expensive to run; it's only a $20-a-month VPS. Other than time, it's not really consuming very many resources. That may change over time: the number of hits is increasing, and we're doing some more resource-intensive things like Code Search, and we're running ember-try scenarios for the top 100 add-ons to generate compatibility tables. That hasn't been the most reliable.
Think about it: you're trying to do npm installs times 100 add-ons, times every day, times the different Ember dependency settings, so it's been very much a game of whack-a-mole. But for now, it's not bad. We probably should think about some sort of donation, though. Maybe something that writes out the exact numerical cost of running something like Ember Observer.
The API is getting about 130,000 hits a month, but that's the API, so that's some number of requests per person. [inaudible] tells me something like 12,000 visitors each month.
CHARLES: Does Ember Observer have an API? Are there any third-party apps that you know about that people have built on top of Ember Observer?
KATIE: None that have been made public. I know a couple of private companies seem to be hitting the API, but it's not a public API; it's really not public yet. I'm literally in the process of switching over to JSON API, and at some point I'll make some portion of that -- a public API -- but it's pretty hard to support that and at the same time change Ember Observer pretty frequently and do any kind of migrations we need to do. Ember Addons does pull the scores from us, from an API endpoint.
CHARLES: I actually wasn't aware of that. I remember the announcement of Code Search, but how do you see the usage of that? What's the primary use case when you would use Code Search on Ember Observer?
KATIE: I think the primary use case is if you're looking for how to use a feature. If you're creating an add-on and you want to know how to use certain hooks, like [inaudible] or something like that, you can do a Code Search for it and see what other add-ons are doing. It's only searching Ember add-ons that have their repository set, so you'll only find Ember results, which is nicer compared to searching GitHub.
Then another use case I find is more by the core team, to see who is using which APIs and whether or not they can deprecate something or change something, or whether something has become widely used. They're pretty excited about that possibility, since there's never really been a way to search like this before.
ALEX: That is brilliant.
CHARLES: Yeah, that's fantastic. The other question that I had was about running ember-try scenarios on the top 100 add-ons, which is something that you're doing now. Are you actually reflecting that in the Ember Observer interface? Is that information experimental, or is it reflected all the way through, so that if I go to Ember Observer today, I'll see information based on those computations?
KATIE: It's right in the Ember Observer interface. It's only for the top 100 add-ons currently, but hopefully we'll expand that to all add-ons. It's been there for a few months, but maybe it's not easy to notice. On a top-100 add-on it's on the right sidebar: there's a list of the scenarios we ran it with and whether or not each passed, right on the add-on information page. The top 100 are linked to right on the main page of Ember Observer, so you can see them up front.
CHARLES: How do you get that information back to the author of that top add-on?
KATIE: We haven't actually done that. It's just on Ember Observer. It's more meant for consumers, to be able to see that this add-on is compatible with all these versions. We're not using their scenarios; we're using our own scenarios, saying Ember from this version to this version, unless they have specified that versionCompatibility thing, and then we'll use those auto-generated scenarios.
This might be harder for add-ons that have complex scenarios, where they need something else to vary along with the Ember version, like Ember Data. Or maybe they're using Liquid Fire, and Liquid Fire has these three different versions, each used with a different Ember version. For those, we'll just mark that we're unable to test them. But hopefully this still provides some useful information for some add-ons.
For a lot of add-ons, their build won't run unless they commit. In this case, it's running every night, so when a new Ember version is released, we'll see if it fails. On the other side of it, we have a dashboard where we can see which add-ons failed, and maybe see if a new commit to something broke a bunch of add-ons, a commit to something like Ember or Ember CLI, one of the main things.
CHARLES: I know that, certainly, right after we finish this podcast, I'm going to go run it and check up on all the add-ons that we maintain and make sure everything is copacetic. If you guys see me take off my headphones and dash out the door, you know where I'm going.
KATIE: Got you.
ALEX: I just have a further comment: I'm excited for the public API of Ember Observer, because I've been thinking a lot about data visualization lately. I think it would be a really cool tool to visualize deprecated APIs, like one of those bubble charts where the area covered represents how much a deprecated API is used. I'm doing a bad job of explaining this: just taking the most-used deprecated API methods and visualizing them. I think it would be really interesting to see.
CHARLES: Right, seeing how they're spread across the add-ons.
ALEX: Yeah, or just all add-ons in general.
KATIE: I am most nervous about a public API for Code Search, though, because it's a little bit resource-intensive, so I'm freaking out a little bit about the potential of a public API for it. But the Ember Observer client is open source, so if you want to add anything to the app, that part I'd consider public.
Adding to that, I really do want to figure out some way to have a performance budget for when people add to the client, because sometimes I'll get people who want to add features and I'm like, "That's just going to slow all of it down. It's going to be a problem for all of Ember Observer, it's going to make everything slower, and it's already a little slow." But fortunately, with JSON API, I have kind of a beta version running and it's going to be much faster, thank God. I probably shouldn't have said that either.
CHARLES: Definitely, we want to get that donation bin set up before the API goes public. Okay, let's turn to the internet now and answer some of the questions that got tweeted in. We've got a question from Jonathan Jackson, and he wanted to ask you, "Where have you seen the most change in the Ember ecosystem since last EmberConf?", which was March of 2016.
KATIE: There are definitely fewer add-ons being published, but the add-ons that are being published are, let's say, more grown-up things. We've got... I don't know if engines was before or after March, I have no idea, time is one of those things... but engines, and then people doing things related to FastBoot. Things are coming from more collaborative efforts, I think.
This is just my gut feeling; I have no data on this. It's a gut feeling from looking at add-ons. And then there are a lot of add-ons coming out that are specific to a particular company. I think that's maybe, I hope, representative of more companies getting into Ember, and hopefully they'll make things more generic and share them back.
The other problem is with popularity, like I talked about before, where a big company got itself into the top 100 list, probably with just its own employees; they only appeared over the summer. I tried a few different ways to mutate the algorithm to try to get them out of there, but there was no solution. There are much fewer novel things. Very rarely do I look at a new add-on and go, "Oh, that's great," but when I do, it's something very exciting.
CHARLES: Right, so there's a level of maturity that we're starting to see. And I actually think there's something in the story, too, of larger companies with big, big code bases that have lots of fan-out in their dependency trees that just weren't there before.
KATIE: Definitely. I don't think some of the large companies were there before, but I think some of the largest companies are probably keeping most of their add-ons private, so there's kind of a mid-range of company that's big enough to donate things, or willing to put things out as open source. A few of these companies have a lot of add-ons now, and a lot of them are very similar to things that already existed, so you're going to be like, "I don't know why I'd use this," but they obviously made changes for some reason.
CHARLES: The other thing that I want to talk to you about, before we wrap up, is that you are a partner in Code All Day. What kind of business is that? What is it you guys do? What's it like running your company while, at the same time, you're managing these large pieces of the Ember ecosystem?
KATIE: Code All Day is very small; it's just me and Michelle. It's a consulting company. We kind of partnered together after we left a startup and decided to do consulting together. We primarily do Ember projects, also some Rails, and we try to work together, and we love test-driven things.
It's pretty loose-knit. It's easy to run since it's just a partnership; we don't have any employees. Ember Observer does take up a lot of our time, and we did have an idea that it might help us get clients. I suppose it helps our credibility, but it hasn't really been great for leads so much. Fortunately, that hasn't been a big problem for us. We really enjoy how we spend our time, we enjoy the flexibility that consulting gives us, and that flexibility is what keeps these things running.
CHARLES: All right-y. Well, are there any kind of skunkworks, stealth, secret things you've got brewing in the lab, crazy ideas that you might be ready to give us a sneak preview about for inquiring minds that may want to know?
KATIE: Some of them are really [inaudible], which is redoing Ember Observer with JSON API. Currently it's using ActiveModel serializers, which is kind of a custom API for Rails, and [inaudible] fortunately, it's an API now. We're moving to something called JSONAPI::Resources, so that will make the performance of Ember Observer much better, and that's pretty much my primary focus at the moment.
I don't really have any big skunkworks, exciting projects. I have far-off ideas that will hopefully materialize into some sort of skunkworks project.
CHARLES: All right. Well, fantastic. I want to say thank you, Katie, for coming on the show. You are kind of a hero of mine. I think a lot of people come to our community and ask, "Where's the value in being a member of this community, in terms of the things that I can take out of it? What does it provide for me?" And you demonstrate, on a day-to-day basis, asking what you can do for your community rather than what your community can do for you, to paraphrase JFK.
I think you live that every day, so I look up to you very much in that. Thank you for being such a [inaudible] of the community which I'm a part of, and thank you for coming on the show.
KATIE: I'm very happy to have been here, and thank you. I use a lot of you guys' add-ons, and really the community has given so much to me, which is why I want to participate in it. It's a really great group of people.
CHARLES: Yep, all right-y. Well, bye everybody.
ALEX: Bye.