Those who work with me know how I hate maintaining anything with Ruby inside. As I often state, everything written in Ruby is a load of crap, except maybe Redmine, which is not that good either. After another night spent trying to kick some sense into GitLab I thought — why is that? Does Ruby promote bad programming?
Well, yes, it does. Not directly, but by not enforcing any good practices. Ruby is like Lego for programmers — it's fun to play with and rather good for prototyping, but when it comes to production you'd better give it up. If only every programmer thought: «Okay, let's play a little and see if it works, and if it does — let's sit down and do it properly»!
I suppose Ruby is just too safe and permissive. When you learn Python, you quickly grow tired of all the exceptions. You try to open a file — BAM! — IOError! You try to int() a string — BAM! — ValueError! Exceptions bubble up from every hole, and your application crashes to the command prompt all the time until you plug them all with try: ... except ... blocks. When you learn Perl, at some point you cry like a baby trying to read your own garbled code, and then you learn to write it clearly.
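A tiny illustration of what I mean by permissive (the hash is made up, but the behavior is plain Ruby): where Python's dict raises KeyError on a missing key, Ruby's Hash#[] just returns nil, so a simple typo sails through without a sound.

```ruby
# Python: config['prot'] -> KeyError, you fix the typo in ten seconds.
# Ruby: Hash#[] quietly returns nil for a missing key.
config = { "port" => 8080 }

port = config["prot"]   # typo -- Ruby hands back nil without a word
puts port.inspect       # prints "nil", still no error

# And nil happily converts, so even the crash may never come:
puts port.to_i          # prints "0" -- the app now listens on port 0
```

Nothing ever hurt here, which is exactly the problem.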
A learning programmer sometimes has to shoot himself in the foot to see why it is bad. Ruby is like a Nerf gun — well, you shot your foot all right, but it doesn't hurt at all. So why be afraid of it? Let it go already, let's try another cool pattern.
Your feet are supposed to hurt when you shoot them; that is what teaches you not to do it. It's hard to explain to a child why he shouldn't touch a steaming kettle — he won't understand until he tries. It's hard to learn to catch exceptions when they are not a big deal at all. You rarely cry when you learn programming in Ruby; the bad thing is that a system administrator will cry a lot when something goes wrong with a production install.
With Ruby on Rails they state it on their title page: «Web development that doesn't hurt», like it's a good thing. The result is that all too often an exception in a RoR application goes uncaught until it reaches some template! You see a 500 error in your browser — okay, let's look up the logs and see what is wrong. You open production.log to see only a quite obvious statement that NilClass does not have a sha method. You have to google the backtrace for a while to find out that this particular statement is the result of a failed external command somewhere deep in the guts of the application — a possibility that the programmer clearly neglected. Why? Because his feet did not hurt when shot with the ruby-railgun.
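Here is a sketch of how this kind of bug gets born (the method and paths are made up for illustration): a helper shells out, the command fails, and the failure is smuggled onward as nil for some template to trip over.

```ruby
# A helper that shells out and swallows failure by returning nil.
# Nobody checks $?.success?, so a failed command and an empty result
# look exactly the same.
def release_notes(path)
  out = `cat #{path} 2>/dev/null`   # command fails for a missing file...
  out.empty? ? nil : out            # ...and the failure becomes nil
end

notes = release_notes("/no/such/file")   # => nil, silently
# Pages later, deep in some template, the real cause is long gone:
# notes.upcase  # NoMethodError: undefined method 'upcase' for nil
```

The backtrace points at the template, not at the shell command that actually failed — which is precisely the production.log puzzle described above.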
So, the only indication of the problem is often not the message itself but the call trace. Funny, isn't it? You rarely see this in Python or Java because they teach you quite early that try/catch is a must. Even funnier, the backtrace changes between versions — line numbers, for example — so you cannot just google the whole log fragment to see if somebody has hit this particular error before. You have to strip it down first to get relevant results, and if you remove too much detail you won't find anything. As not everybody is a Google-Fu master, issue trackers are littered with duplicate bug reports: the same cause of error, but a different stack trace. Why litter up the issue tracker if you could just catch the exception when it happened? Rhetorical question.
Yes, they do a lot of testing. But since these failures don't raise exceptions, testing does not help much — the error is simply ignored until somebody trips over it in production.
And, to add some spice to the problem of uncaught exceptions, you can easily guess the most common way errors of this sort get fixed. Yes, they patch the individual templates — adding checks to prevent the 500 page instead of going all the way down the call stack and catching the bloody exception where it happens!
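For contrast, this is the fix I am arguing for (names and message are illustrative): check the command status and raise right where the failure happens, so the log points at the real cause instead of at an innocent template.

```ruby
# Fail loudly at the source: inspect $? after the external command
# and raise immediately, with the actual reason in the message.
def release_notes!(path)
  out = `cat #{path} 2>/dev/null`
  unless $?.success?
    raise "reading release notes failed for #{path} " \
          "(exit status #{$?.exitstatus})"
  end
  out
end

# release_notes!("/no/such/file")
# => RuntimeError: reading release notes failed for /no/such/file (exit status 1)
```

One extra check at the point of failure, and the log message names the file and the exit status instead of «NilClass does not have a sha method» three layers up.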
Another problem with Ruby applications is that Ruby programmers tend to use the most advanced tool available for every task.
So, when they need to split up their application to perform some task in the background, they usually reinvent the wheel in the most fantastic way. For example, instead of learning how inter-process communication is done, they take the Redis database and write a whole Sidekiq. What D-Bus? That's something GUI-related. What Beanstalk? Too many docs. Let's do it in Ruby, with all the fireworks and resource usage that entails. You don't have Redis on your server? Then install it. You don't have Ubuntu like we do? Then install it. You don't have enough memory? RAM is dirt cheap, just buy more. You get the picture, right?
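To show that «run this later, in another process» does not automatically require a Redis instance, here is a deliberately primitive background worker built from nothing but fork and two pipes — the plain old UNIX IPC. A sketch, not a job queue: no retries, no persistence, one worker.

```ruby
# Plain UNIX IPC: the parent sends jobs down one pipe, the forked
# worker processes them and reports results back through another.
jobs_r, jobs_w = IO.pipe        # parent -> worker
results_r, results_w = IO.pipe  # worker -> parent

worker = fork do
  jobs_w.close
  results_r.close
  while (job = jobs_r.gets)               # block until a job arrives
    results_w.puts "done: #{job.strip}"   # "process" it
  end
  results_w.close
end

jobs_r.close
results_w.close
jobs_w.puts "resize-image-42"             # enqueue two jobs
jobs_w.puts "send-email-17"
jobs_w.close                              # EOF tells the worker to quit
results = results_r.read.lines.map(&:strip)
Process.wait(worker)
```

Of course a real system needs more than this — but it's worth knowing that the operating system already provides the primitives before bolting a database onto the stack.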
Adding services to your system seems an easy task to Ruby programmers. Well, what's so complicated about apt-get install redis? It's hard to explain to them that every production service needs to be monitored and provisioned for; the usual answer is «Yeah, right, it's your job, so do it». They would happily use the Hubble telescope to hammer a nail, leaving the business of launching a thousand Hubbles to us sysadmins.
Having no grasp of the words «right tool for the job», Ruby programmers drag every tool they see fit into their projects. The problem is further complicated by the fact that most Ruby libraries, or gems, are written in exactly the same way — i.e., requiring tons of other libraries to work. 50 or 60 gems for a simple application? Easy: why write code if you can just use a library?
To help with sucking in all those libraries there is a great tool, Bundler. Yeah: Gemfiles everywhere, bundle install, then sit back and relax for 30 minutes while it does all the work.
Do a gem list -l afterwards to see the result. 50+ gems, okay. Many of them have several versions installed simultaneously — whether it's just an error in the Gemfile or a real incompatibility doesn't matter. To help you with the multi-version headache you now have bundle exec ...; let's hope it won't melt your brain when it doesn't work. 15 years ago we were concerned with DLL Hell, weren't we? Well, Ruby Gem Hell is a rather cozy and quiet place.
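To be fair, some of the multi-version mess is avoidable if the Gemfile pins versions deliberately instead of taking whatever resolves today. A hypothetical fragment (gem names and versions are only for illustration):

```ruby
# Gemfile -- Bundler's "~>" (pessimistic) constraints keep upgrades deliberate
source "https://rubygems.org"

gem "rails", "~> 4.2.0"   # >= 4.2.0 and < 4.3: patch updates only
gem "redis", "~> 3.2"     # >= 3.2 and < 4.0: minor updates allowed
```

Committing the resulting Gemfile.lock at least makes every server install the same set — it doesn't shrink the 50+ gems, but it keeps them from drifting.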
Another problem is that there is effectively no easy deployment process for Ruby applications. Many gems require «native» binaries to be built, so you have to keep a working development environment on your live server and build and install binaries all the time. Is that easy to do on 100+ servers at once? Well, I'd use rsync to keep /usr/lib/ruby/ and /usr/lib64/ruby/ in sync with a «master» server. I suppose it would work, but it's ugly, isn't it?
The worst thing is that tools like Bundler promote a zero-brain approach. There is no incentive to learn how things work if there is an ultimate tool that does it all for you. First you use Bundler to drag in and install 50+ libraries for your «Hello, World!» application, then you use apt-get install ... to fetch half of the multiverse repository onto your computer, and finally things suddenly become slow and very fragile for no apparent reason.
Well, if Ruby is as bad as I claim — why does it have so many users?
Okay, one reason is that a good prototype works just too well to be redesigned. Yes, Redmine already has many features, so why rewrite it in Python from scratch? Too much time and effort, and if you think about compatibility, you'd grind your teeth. Just take a look at the Redmine database and imagine rewriting every model. It's easier to accept the bugs and quirks than to write a better tool. And if you add plugins on top of that...
Another reason is that there are true masters of Ruby. Personally I think they didn't start with Ruby but brought a fair amount of knowledge with them. Yes, it is possible to use Ruby and RoR correctly, writing good and maintainable code. It won't be easy and will require a lot of computing power to run smoothly, but it is still possible.
And, finally, there are large Ruby users like GitHub or Gitorious. Because they are large, they don't have the «Hubble and the nail» problem — their nails are big enough for a Hubble. A frequent argument in favor of Ruby is «Look, GitHub uses it and they are okay so far». Well, just ask yourself whether you can spend as much on hardware as they do...
If you are not a qualified programmer, you'd better stay away from Ruby until you master the art of programming. You probably won't have much use for Ruby after that, but there won't be much harm in trying.
Computer programming is better learnt on platforms that let you experience the consequences of bad programming. Perl may be too painful here, as it can easily rip your leg off, but UNIX geeks love it for exactly that. PHP won't be a good choice either (actually, most of the Ruby problems are starting to apply to PHP as well). I'd suggest Python if I were a programmer, but as I am not one, you're on your own here.
I think Ruby is just too Japanese-polite: it doesn't tell you when you are wrong but tries to guess what you were trying to do. That may be fine for prototyping, but for actual live applications it's a liability rather than an advantage.
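To make the politeness concrete, a few cases where plain Ruby guesses instead of objecting (in Python each of these would either raise or at least print a visible None):

```ruby
# Ruby quietly converts nil instead of complaining:
puts nil.to_s.empty?    # prints "true" -- nil becomes ""
puts nil.to_a.length    # prints "0"    -- nil becomes []
puts "user: #{nil}"     # prints "user: " -- interpolation hides nil entirely
```

Each conversion is convenient exactly once — and misleading every time a nil that shouldn't exist slips through it.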