Some great news from the BI Conference in Seattle: Microsoft has unveiled the new release of MS BI. The news looks great, and it is exactly what is missing right now. A bright future for SSAS.
A post from the MS SQL blog:
“Saw an amazing BI demo this morning at the BI Conference here in Seattle. Donald Farmer showed how over 20M rows of data can be modeled and analyzed in memory. To build a model today, a DBA needs to define dimensions and fact tables, get the relationships right, define calculations, deploy it to a server, build and manage it. After that, someone can connect to it and play with the data.
What Donald showed is how a user can do all that with an Excel add-in. He started with 20 million rows in memory on a no-frills PC and built a model from scratch. Even with 20 million rows, interactive ordering, filtering, windowing and pivoting were instantaneous. It is hard to overstate how much simpler it is to build a model with the actual data in front of you than to build one from abstractions and only see the result at the end. You might remember the days before WYSIWYG, when documents were built with formatting and font codes and you could not see how they would look until they hit the printer. This is doing the same thing for data analysis with non-trivial amounts of data – WYSIWA (what-you-see-is-what-you-analyze).
And – this bears repeating – it’s all in Excel.
The next thing was striking as well and will likely have just as much of an impact – being able to publish the model to SharePoint so that other people can access it from a URL. So normal people (and not just data geeks like me) will be able to start with vast amounts of data, build a model (without even realizing it!), analyze the data, and post the result (with all the data) to share with others.
This will ship in the next version of SQL Server, code-named Kilimanjaro, which is a focused release of new BI capabilities. It will be available sometime in the first half of calendar year 2010.”
And from Teo Lachev:
“While I need to get my hands on this Gemini thing to say something worthwhile, I really hope that existing UDM cubes could benefit from it as well, especially in terms of performance.
Today, folks are pushing SSAS to its limits. A dashboard page, for example, may need to execute many queries and crunch huge volumes of data to present trend graphs within seconds. It would be cool if Gemini let you cache subcubes in memory to speed up these scenarios. For example, if you need to implement a bunch of customer-related KPIs, it would be nice if you could tell Gemini to cache in memory, or materialize to disk, the data pre-aggregated at the customer level, along with which dimensions can be used to slice these KPIs.
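For context, plain MDX against today's UDM cubes already offers a related primitive: the CREATE CACHE statement, which pre-loads a subcube into Analysis Services memory. A minimal sketch of warming customer-level aggregates for such KPI dashboards might look like this (cube, dimension and measure names are assumptions, borrowed from the Adventure Works sample database):

```mdx
-- Pre-load a customer-level subcube into memory so that subsequent
-- dashboard/KPI queries can be answered without hitting storage.
-- All object names here are illustrative, not from the post.
CREATE CACHE FOR [Adventure Works] AS
(
    { [Customer].[Customer].[Customer].MEMBERS },
    { [Measures].[Sales Amount], [Measures].[Order Count] }
)
```

This is still a manual, per-query-pattern optimization; what Teo is hoping for is that Gemini would make this kind of in-memory acceleration automatic.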
What about giving business users the option to create ad-hoc cubes? I have to admit I have mixed feelings about this. The term “OLAP” alone is known to cause severe brain crunch in many users. I am a bit skeptical that an “off you go, start building your own cubes in Excel” philosophy will really fly. If you package a cool wizard that hides some of the dimensional model complexity, how would you verify that the results are indeed correct, so you don’t end up with as many versions of the truth as you have users? How would you teach end users MDX so they can create their own calculations? I am not sure how much of your time Gemini will really save if this is its major selling point. But then again, I may change my point of view as details unfold; life often proves me wrong.
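To make the MDX learning-curve point concrete: even a simple user-defined calculation today requires writing a calculated member by hand. A hedged example of the kind of MDX an end user would have to learn (measure, dimension and cube names are assumptions from the Adventure Works sample, not from the post):

```mdx
-- A basic profit-margin calculation as a calculated member.
-- This is the sort of MDX end users would need to master;
-- all object names below are illustrative only.
WITH MEMBER [Measures].[Profit Margin] AS
    ([Measures].[Sales Amount] - [Measures].[Total Product Cost])
        / [Measures].[Sales Amount],
    FORMAT_STRING = 'Percent'
SELECT { [Measures].[Profit Margin] } ON COLUMNS,
       [Customer].[Country].MEMBERS ON ROWS
FROM [Adventure Works]
```

Hiding this behind an Excel wizard is exactly the promise Gemini is making, and exactly where Teo's skepticism about correctness comes in.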
Meanwhile, long live MDM and Kilimanjaro, which is the code name for SQL Server.NEXT, scheduled for H1 2010!”
All sounds really, really good!