About three years ago I started using R (see The R Project for Statistical Computing). I won't lie and say it was trivial to get going, but coming to it from a die-hard developer perspective, with many languages and programming paradigms under my belt, some of which I dare say I'm quite skilled with, it didn't take me long at all to get productive.
I've completely retired Excel and similar tools for the parts of my job that require ad-hoc data analysis. R has become my go-to tool for any kind of quick data crunching, often on data sets large enough to make Excel & Co. puke. I use it for everything from quick assessments of customer data sets, prior to full analytics, all the way through to testing out new analytic approaches in the early stages of product/feature development.
The flexibility and programmability suit me as a developer far more than anything I could find in Excel. The native vector-processing model is particularly well suited to the kind of work I do. I've seen a few articles of late indicating that while R may be one of the more popular languages for data science, Python is coming up fast; some would even say it is taking over (see "The homogenization of scientific computing"). For now, though, I'm very happily in the R camp. There's plenty of great work going on in the wider R ecosystem: ramping up the scalability ('Big Data') capabilities, e.g. Revolution Analytics, and adoption and support in various forms by some of the big players, e.g. IBM SPSS & R and Oracle R for the Enterprise. Finally, you'll get a sense of the vibrant community by looking through the r-bloggers site. Oh, and of course, it's free.
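To give a flavour of what I mean by the vector-processing model: operations apply element-wise across whole vectors, so the loop-and-cell-formula style of Excel work collapses into one-liners. A quick sketch (the numbers here are made up for illustration):

```r
# Two parallel vectors of made-up figures
revenue <- c(1200, 950, 1800, 700)
cost    <- c(800, 600, 1100, 500)

# Element-wise arithmetic across the whole vector -- no explicit loop
margin <- (revenue - cost) / revenue
round(margin, 2)
# 0.33 0.37 0.39 0.29

# Logical indexing: filter in one expression
revenue[margin > 0.3]
# 1200  950 1800
```

Compare that with dragging a formula down a column and then building a filter on top of it; once this style clicks, going back feels painful.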
I'm very happy to have made the investment in learning R. If you haven't done so already, you should give it a look - before it becomes completely mainstream and boring, and you are forced to find another exotic language to stay away from the crowd :-)