Where Will Ruby Go Now? Talking with Tenderlove at RailsConf

Last week at RailsConf in Kansas City, Terence Lee and Richard Schneeman of Heroku’s Ruby Task Force sat down with the legendary Aaron Patterson (AKA tenderlove).

Aaron has been working hard to make Ruby three times faster — a goal that Matz has dubbed Ruby 3x3. Along the way, Aaron has discovered that Ruby may face a hard decision. On one side, Ruby can continue to be the productive, general-purpose scripting language it is today. On the other, Ruby is increasingly used to run long-lived server processes in Rails applications, which pushes it toward being more performant, more strongly typed, and heavier on memory. Ruby can't prioritize both.

To find out where Aaron thinks Ruby’s going, you can read the abridged transcript below the fold — but to hear all about his new job at GitHub, Ruby performance, mechanical keyboards, grumpy cats, and more, you should listen to the whole recording right here.


Richard Schneeman: The stuff you've been working on — we've seen you tweeting a lot about Ruby performance — how's that coming along?

Aaron Patterson: Good, although we just stepped out of Koichi's talk, and he basically ruined my talk [laughs]. But I've been working on different stuff. Like loading precompiled bytecode, and looking at faster method caches or different types of method caches. The other thing I've been thinking about — I'll cover this in my talk tomorrow — is improving memory efficiency, that type of stuff. That's mostly what I've been poking at.

Though in Koichi's talk, [he said] loading precompiled code isn't helping that much, but I got different numbers than he did, so I'm hopeful, I guess? The thing is though, when Koichi puts out numbers, I watch and I'm like, 'he's probably right!' So yeah, I'll be talking about that stuff tomorrow.
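For context on what "loading precompiled bytecode" can look like in practice, here is a minimal sketch using the RubyVM::InstructionSequence binary serialization API that shipped in Ruby 2.3. The cache directory, helper name, and staleness check are illustrative details, not something from the interview; the point is simply that a saved binary lets later boots skip the parse-and-compile step.

```ruby
# Rough sketch of bytecode caching: compile a Ruby file once, save the
# serialized instruction sequence, and deserialize it on later boots.
# Cache path and helper name are made up for illustration.
require "digest"
require "fileutils"

def load_with_bytecode_cache(source_path, cache_dir: "tmp/iseq-cache")
  FileUtils.mkdir_p(cache_dir)
  cache_path = File.join(cache_dir, Digest::SHA256.hexdigest(source_path) + ".bin")

  iseq =
    if File.exist?(cache_path) && File.mtime(cache_path) >= File.mtime(source_path)
      # Cache is fresh: load the saved bytecode instead of recompiling.
      RubyVM::InstructionSequence.load_from_binary(File.binread(cache_path))
    else
      compiled = RubyVM::InstructionSequence.compile_file(source_path)
      File.binwrite(cache_path, compiled.to_binary) # save for the next boot
      compiled
    end

  iseq.eval
end

load_with_bytecode_cache("app/models/user.rb")
```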

Richard: So you just threw out a bunch of performance optimization stuff. I'm kinda curious, where do you get inspiration from? Do you just pick up and say, 'I'm gonna try this technique'? Or do you look at what other languages are doing, or are there resources or some VM implementer's handbook for performance that you're skimming off of?

Aaron: No, typically, I just read about stuff like what JRuby — to be honest, it's mostly me just talking to the JRuby folks. 'So, hey, what do you folks do to improve performance on your VM?' And they tell me that stuff, and I'm like, 'hm, I wonder if MRI does that? Oh no they don't? Let's try that!' So it's mostly just stealing. That's basically where I get my inspiration from — all the hard work that everyone else has done.

Terence Lee: Well I feel like all the great software, a lot of the great software eventually does that, right? I remember Yehuda was talking about how Ember will eventually steal all the great ideas from everyone else, and then it will be great again.

Aaron: I think it's interesting though — one thing, you know, I don't know, Koichi keeps talking about these incremental speedups and stuff, and he's like, 'well, it's not that much.' And I understand it's not that much, but cumulatively, we combine all these not-that-much optimizations together and eventually we'll be, you know, 3x faster. So I don't think he should discount that type of work.

Richard: Absolutely, Koichi has done some fantastic work.

Terence: So with regards to Ruby 3x3 and performance, I know last year you were working on a JIT [just-in-time compiler]. Is it still legit?

Aaron: No it's not. It's not [laughs]. It's too hard. It's way too hard. It's super hard. And there's concerns about, like, how much memory consumption, stuff like that. I haven't been working on that lately. Basically, I got a little bit going, and then I was like, you know what? This is hard. So I quit. [laughs]

Richard: Would any of the ahead-of-time compilation work help the JIT at all? Or is it really tangential?

Aaron: It's tangential. I mean you can do some stuff, like if you do ahead-of-time compilation and you, let's say, run your application and take statistics on the stuff that you execute, maybe you can take that bytecode and perform optimizations on it afterwards.

But that's not just-in-time. We did some analysis, and we can take this bytecode and actually improve its performance based on the stats we took from running the website or whatever. But that's not just-in-time, it means you have to run your slow code for a while to figure out what to do and modify the bytecode later. That's one technique you can do, but I don't think that technique is nearly as popular as just doing a JIT. Because with a JIT, you're essentially doing that, but you're doing it then, while you're running.
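As a rough illustration of the "run your slow code and take statistics" step Aaron describes, the sketch below shows only the profiling half: counting which Ruby methods actually get called during a workload. The workload, hotness threshold, and output file are invented stand-ins, and the later bytecode-rewriting pass is not shown.

```ruby
# Sketch of the offline profiling step: count Ruby method calls while a
# workload runs, then dump the hottest ones so a later pass could decide
# which bytecode to optimize. Workload, threshold, and file name are invented.
require "json"

call_counts = Hash.new(0)

trace = TracePoint.new(:call) do |tp|
  call_counts["#{tp.defined_class}##{tp.method_id}"] += 1
end

# Stand-in workload; in Aaron's description this would be the running website.
def render_page
  5.times { format("hello %s", "world") }
end

trace.enable
1_000.times { render_page }
trace.disable

# Keep only the call sites hot enough to be worth optimizing later.
hot_methods = call_counts.select { |_name, count| count >= 1_000 }
File.write("hot-methods.json", JSON.pretty_generate(hot_methods))
puts hot_methods
```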

Richard: Doing it live.

Aaron: Yes, doing it live. I mean I think that we'll have, I hope that we'll get a JIT, it would be nice.

Terence: Do you think a JIT is required to hit the 3x3 performance goal?

Aaron: Yeah, definitely. I absolutely believe we'll need a JIT for the 3x3.

Every time I've talked to Koichi about doing a JIT, he's like, 'no, dude, it'll take too much memory,' is basically the argument. So I know that memory requirements are important for people who run on platforms like Heroku, so having a low-memory solution is important. On the other hand, I keep thinking, well, it may take more memory but we could have a flag for it, or something like that. You could opt in, like ruby --fast, and have it go–

Terence: The JVM has client and server modes too.

Aaron: Yeah, exactly. You can flip that with flags and stuff. So I don't really see how memory usage would be that much of an argument. Though, when you look at the stuff that Chris Seaton has done on Truffle Ruby, so Truffle + JRuby — that thing is super fast, but you'll notice in presentations that they never talk about memory consumption. They never talk about it.

Terence: He also doesn't talk about the warm-up time.

Aaron: Nope, they don't talk about the warm-up time. Nope. Not at all. It was funny, I read his PhD thesis, which was really awesome, it was a good paper, I recommend people read this thesis. But it's very specific: 'we do not care about' — there is a line — 'we do not care about memory.' [laughs]

Richard: I mean memory's cheap, right? Just go to that website, downloadmoreram.com! [laughs]

Terence: Presumably, if it initially took a lot of memory, there's work and optimizations that can be done to ratchet that down over time too, right?

Aaron: Sure. I guess… you can't please everybody. If you make it use more memory, people are gonna complain that it uses more memory. If you make it use less memory but it's slower, people complain that it's slow. Everybody's gonna complain.

Richard: I guess Ruby was originally a general-purpose scripting language, and I think that's some of the emphasis behind 'we don't want too much memory usage.' Do you ever think that — [Aaron pulls out a keyboard] — do you ever think that attitude will change from Matz and the Ruby core team to say, yes! We are more of a long-lived-process type of language? Or more like living with some of these tradeoffs, where maybe we run with this flag, but I guess then we kind of end up in Java land where there's like a billion flags?

Aaron: Well, I think we have to move in that direction. I mean, one thing I really like about Ruby, part of why I love Ruby so much, is that fast bootup. I can write a quick little script, run it, and it's like, I did my job, now go off and do something else.

But on the other hand, when you consider how people are making a living off of Ruby, how Ruby is feeding your family, it's typically with long-lived server processes. You're writing websites and stuff with processes that live for a long time. If we wanna continue to use Ruby as a way to make money, we're gonna have to support that stuff long-term.

I'm sure that the attitude will change, but I don't think it's gonna be an extreme direction. It'll be somewhere in the middle. We'll find some sort of compromises we can make.

Originally published: May 12, 2016
