Top Five Features In Microsoft’s SQL 2012

Forrester Research has just positioned Microsoft as a leader in its "The Forrester Wave™: Self-Service Business Intelligence Platforms, Q2 2012" report, and SQL 2012 has a lot to do with this.  Forrester highlights Microsoft's approach to delivering self-service BI through the familiar tools of Excel and SharePoint. According to the report, "Microsoft has built on its broad, omnipresent, SQL Server-based BI functionality, which already includes Integration Services, Reporting Services, and Analysis Services. It now offers a very respectable PowerPivot and Power View in-memory data analysis and discovery engine. Think of this as Excel on steroids."

SQL Server 2012 itself brings quite a few new features that are worth knowing about, particularly from a development perspective.

#1 Power View

Power View is a great new ad-hoc reporting tool built for end users.  It requires SQL Server 2012 and SharePoint 2010, and it has one of the coolest UIs of any of the previous tools.  Power View gives end users an intuitive ad-hoc reporting tool they can use to easily create and interact with data from PowerPivot workbooks or tabular models deployed to SQL Server 2012 Analysis Services. It runs in a browser, using Silverlight, from within SharePoint Server 2010.  One of the coolest visualisations in Power View is the timeline: you can build a chart, place a timeline along the bottom, click the play button and watch the data change over time.

#2 SSIS Configure and Execute With T-SQL

SSIS has improved significantly in SQL Server 2012.  The greatest new feature is the ability to run and control your SSIS packages using T-SQL.  With the addition of the new SSIS catalogue and the project deployment model, your packages are now housed in a database, and there is an entire set of stored procedures and functions for SSIS administration.  This opens up a lot of possibilities: imagine writing complex stored procedures that call multiple SSIS packages, or using a cursor to execute a package for each row in a table. You can select data from a table and pass it to a package using parameters.  This is a big improvement on the command-line method of previous versions of SQL Server, and it gives developers and DBAs the ability to incorporate SSIS more deeply into their databases and their development.
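For example, once a project has been deployed to the catalogue, a package can be configured and started entirely from T-SQL using the catalogue's stored procedures. Below is a minimal sketch; the folder, project, package and parameter names are hypothetical:

  DECLARE @execution_id BIGINT;

  -- Create an execution instance for a package stored in the SSIS catalogue
  EXEC SSISDB.catalog.create_execution
      @folder_name     = N'ETL',
      @project_name    = N'SalesProject',
      @package_name    = N'LoadSales.dtsx',
      @use32bitruntime = 0,
      @reference_id    = NULL,
      @execution_id    = @execution_id OUTPUT;

  -- Pass a value from T-SQL into a package parameter
  EXEC SSISDB.catalog.set_execution_parameter_value
      @execution_id,
      @object_type     = 30,             -- 30 = package parameter
      @parameter_name  = N'RegionCode',  -- hypothetical parameter
      @parameter_value = N'APAC';

  -- Start the package
  EXEC SSISDB.catalog.start_execution @execution_id;

Because this is just T-SQL, the same pattern can sit inside a stored procedure or a cursor loop, passing a different parameter value for each row of a table.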

#3 SSIS Parameters & Environments

Parameters and environments allow developers to pass values into packages without using configuration files or tables, so they no longer have to manage a group of files or tables separate from their packages.  They can execute packages with T-SQL and pass in the parameter values in that T-SQL.  They can also save multiple sets of parameter values in the new environments.  An environment can be thought of as a parameter bucket that holds one set of parameter values; a package that needs a different set of values can pick them all up simply by referencing a different environment.  A good example of this would be a package that needs one set of values during the week and a different set on the weekend or at month end.  Because you execute the package with T-SQL, you can place logic in that T-SQL, such as a CASE expression, to select the proper environment, as sketched below.  You can also use environments for Development versus Production settings.
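Here is a minimal sketch of that weekday/weekend logic. The folder, project, package and environment names are hypothetical, and it assumes two environments (Weekday and Weekend) have already been created, referenced by the project and mapped to its parameters:

  -- Pick an environment name based on the day of the week
  -- (DATENAME output assumes an English session language)
  DECLARE @env SYSNAME =
      CASE WHEN DATENAME(WEEKDAY, GETDATE()) IN (N'Saturday', N'Sunday')
           THEN N'Weekend'
           ELSE N'Weekday'
      END;

  -- Look up the project's reference to that environment
  DECLARE @reference_id BIGINT =
     (SELECT er.reference_id
      FROM SSISDB.catalog.environment_references AS er
      JOIN SSISDB.catalog.projects AS p
        ON p.project_id = er.project_id
      WHERE p.name = N'SalesProject'
        AND er.environment_name = @env);

  -- Execute the package with that environment's parameter values
  DECLARE @execution_id BIGINT;
  EXEC SSISDB.catalog.create_execution
      @folder_name  = N'ETL',
      @project_name = N'SalesProject',
      @package_name = N'LoadSales.dtsx',
      @reference_id = @reference_id,
      @execution_id = @execution_id OUTPUT;
  EXEC SSISDB.catalog.start_execution @execution_id;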

#4 SSAS Tabular Models

Developers now have the ability to create tabular models in SQL Server Data Tools (SSDT, formerly BIDS).  Instead of having to use PowerPivot in Excel to develop a tabular model, developers can now use a tool they are more familiar with: Visual Studio.  This gives them the ability to create and deploy tabular models from SSDT, and once deployed, end users can connect to those models and start slicing and dicing data.  The diagram view makes it easy to visualise the data and build hierarchies.  Before SQL Server 2012, PowerPivot was the only way to create a tabular-like model in SQL Server.  The new model type puts developers in a familiar environment and lets them create models that users can easily consume.

#5 SSIS Undo/Redo

I know this seems like a small improvement, but SSIS developers know the undo/redo feature added to SSIS makes life so much easier.  Imagine writing a Word document without undo: oops, you accidentally deleted an entire paragraph, now write it again!  In SSIS, that was the norm until SQL Server 2012.  Now if you delete a task or make changes you need to reverse, CTRL+Z is here to save the day.

User Driven Business Intelligence: 7 Tips For Effective Data Visualisation

Sophisticated data models won't do us much good unless decision-makers are able to interpret, understand and act on the results appropriately. Here are seven principles from our team of consultants for designing analytic apps that lead to high user acceptance and results.

Understand What Users Will Do With The Results

Analytic interfaces should be driven by an understanding of what users will do with the results. Frame the discussion around role-based design, with sensory cues directing attention to only the most critical pieces of information.  Role-based interface design matters because analytic needs vary with the different types of decisions people make throughout the organisation. While scores of workers, from management to the contact centre, might benefit from access to analytics, different interfaces may be required to serve different job roles.

Remember, less is always more. Good analytics interfaces show the information most critical to the user, not every piece of information that might be available for analysis. Sensory cues direct attention: good interfaces exploit people's ability to perceive patterns based on position, size, shape, colour and movement. These properties highlight important features that might otherwise be lost in a table of numbers.

Let Users Lead

Start from user needs and work backward to design the interface that supports those needs, and ultimately to the analytics that will drive that interface. Even when users can't specify in advance what they really want, it's critical to involve them early and often as analytic interfaces are designed. Sometimes users may not be able to define what they want, but they'll know it when they see it. Users are even better gauges of bad interfaces: if enough users believe an interface is unsatisfactory, you're well advised to accept their judgment.

If you are contemplating giving users the ability to set analytics modelling parameters, determine whether they want to set those parameters and know how to do so, or at least give them sensible default values. Users can help identify early wins the designer may not have thought of, and might provide useful introductions to other potential users and their communities. A user who feels a sense of ownership in interface design can become an advocate for the technology who is respected by other users. Users of different abilities may point out accessibility considerations, such as how and when colour is used, so that colour-blind users get the same information from the intensity of the display.

A Picture is Worth a Thousand Numbers

Because of our human ability to understand relationships quickly based on size, position and other spatial attributes, the eye can take in at a glance what might otherwise require thousands of numbers to convey.

From Analytics to Action

An analytics interface may be visually appealing, but if it doesn't stimulate action, it won't be very effective. Good interfaces provide the context to let the user know when action might be required: analytic dashboards alert users to potential performance issues and provide actionable information. Good interfaces also provide context for interpreting results, suggest what the user might do next and offer mechanisms such as click-through to facilitate explanation and further analysis.

Apply Design Principles

Apply principles for good visual design. Displays of related information are horizontally and vertically aligned so the eye can see patterns across related variables (they do not have unintended alignments that suggest misleading or irrelevant comparisons). Colour serves to highlight exceptions, not to brighten up a dull dashboard. Analytic results are not presented to 10 decimal places when the user does not need such precision to make a decision. Good interfaces avoid 3-D effects or ornate gauge designs when simple numbers, charts and graphs will do.

Self-Service BI Helps Manage The TCO of Business Analytics

Aberdeen's April 2012 survey on agile Business Intelligence (BI) showed that the top driver of BI is managers demanding better tools to help them make sense of their data (53%).  While most companies have recognised the need for a more effective BI strategy, many still struggle to deliver the tools and technologies efficiently while mitigating the various costs associated with deployment, such as software licenses, hardware acquisition and related services.  Aberdeen's research shows that companies are spreading BI costs across more decision makers, creating an effective self-service environment and enjoying a faster and more meaningful ROI as a result.  So let's have a look at some of the key findings:

The Cost of Delivering BI: Best-in-class companies devote half of their total BI spend to services, compared with 23% for industry-average and 18% for laggard companies.  Top-performing companies place a high value on tailored analytical functionality as a means of improving the usability and adoption of BI.

Key Pressures Driving a Focus on TCO: The top pressure driving a focus on the TCO of BI is the increasing demand for decision support from the business (53%), followed by growing volumes of data (41%), business users needing to make decisions anywhere, anytime (35%) and BI projects taking too long or being too resource intensive (23%).

How Best-in-Class Companies Approach BI: Among these top performers, 92% of BI projects are delivered within budget, 87% of BI projects are delivered on time or early, and 64% of users leverage BI weekly or more frequently, compared to industry averages of 75%, 61% and 48% respectively.

There are two basic ways to reduce BI cost per user: spend less money on BI, or increase the number of users. Best-in-class companies do both, by enabling a self-service environment for BI that requires minimal IT intervention for day-to-day analytical activity. The research found that best-in-class companies have a standard plan in place for implementing BI projects, business users can tailor their own analysis without IT, BI support is decentralised and resides in the business units, and the time required for each stage of the project lifecycle is measured.

Listen to your users, deliver on their needs: One of the keys to TCO management with business intelligence projects is deploying the appropriate tools to the appropriate people; time and money are wasted when technology goes underutilised.  The research shows that best-in-class companies are over three times more likely to poll end users on their requirements for BI, and 75% of these companies provide users with the ability to customise their own reports and data views without having to rely on IT. With more relevant, well-tailored solutions to deliver, best-in-class companies are in a position to increase adoption more rapidly and create a self-service BI environment.

Measure, manage & improve the BI deployment process: Best in class companies are more than twice as likely to measure and document each stage of the project lifecycle in order to inform and improve future BI projects. This capability is a cornerstone of being able to deliver BI tools cost effectively.

Automate the creation & delivery of standard reports: This frees up IT resources for higher-value activities, such as ensuring data quality and the usability of the data infrastructure.

Consider investing in external BI services: Not only do best-in-class companies spend more on services as a percentage of total BI spend, but 90% of these top performers employ external implementation and maintenance services for BI.  This expertise can be a powerful way of increasing adoption and utilisation, creating a more self-sufficient user base and ultimately driving down the TCO of business intelligence.

Using QlikView’s Repository Panel

Some QlikView users may not be aware of QlikView's repository panel. The repository is an app-level entity that lists all the sheet objects, dimensions and expressions that have been used within the current app.

The repository panel is available only in the AJAX client.  A user can view the contents of the repository and reuse any existing chart or definition by dragging and dropping it onto a sheet.  A chart pulled from the repository panel exposes its properties, so the user can change its dimensions or measures; instant feedback gives the user immediate insight into what they have created.

How does the repository panel benefit users? A common reaction from IT and business process owners alike is that they may not trust their users to have the know-how to create QlikView objects. The repository panel helps allay these fears by providing an opportunity to train users to reuse what is already in their app.

Big Data… Big Deal?

There is a lot of hype in the world of reporting and analytics at the moment around Big Data. Is Big Data the silver bullet for all BI problems or is it just a passing fad?

Traditional definitions of Big Data refer to it in the context of Volume, Variety and Velocity. In this sense it is about more than just size; it is also about different types of information, such as structured and unstructured data, and the speed with which they are generated and change.  For example, retail giant Wal-Mart feeds more than 1 million transactions an hour into databases estimated at more than 2.5 petabytes, Facebook's estimated 750 million users create an average of 90 pieces of content each month, and an average of 294 billion emails are sent every day.

Despite this, Big Data is in some ways not a new concept. Organisations have always been adept at generating large amounts of data, and the explosion in variety and velocity through things such as social media and the incessant chatter of the "internet of things" threatens to be the proverbial straw that breaks the camel's back.

The real challenge with Big Data lies in creating meaning through insight, and it is here that technology enablement is at an inflection point. Advancements in analytical capabilities are now at a point where the creation of meaning from mass is achievable.  Economist Intelligence Unit research indicates, however, that most businesses have made slow progress in extracting value from big data, and some companies attempt to use traditional data management practices on big data, only to learn that the old rules no longer apply.

Technology is, however, only part of the solution, and a technology-centric approach will likely result in organisations being overwhelmed by irrelevant information.  Data does not make decisions, people do, and more data does not automatically result in better decisions.

Miller's Law suggests that the number of objects an average person can hold in working memory at a point in time is 7 plus or minus 2. While the ability of technology to store, process and sort data has grown exponentially over time, human cognitive ability has arguably not grown at the same rate. The challenge for Big Data is therefore one of relevance: how do you synthesise the masses of data into the 7 plus or minus 2 objects needed to support effective decision making?

Keeping this in mind, Big Data does offer tremendous opportunity to drive value. A recent study by McKinsey suggested US retailers could realise in excess of a 60% improvement in net margin, and consumers globally $600 billion in value, through the creative and innovative use of Big Data. McKinsey further indicated that Big Data will become a key basis for competition and growth.

Another global survey, conducted in June 2011 by the Economist Intelligence Unit among 586 senior executives, looked at the state of big data, the organisational characteristics of companies adept at extracting value from data, and the most challenging aspects of data management. The research found:

  • Extracting value from big data remains elusive for many organisations. For most companies today, data is abundant and readily available, but not well used. Nearly one in four respondents say the vast majority of their company's data goes untapped, and another 53% say they use only about half of their valuable data. Yet 73% say that data collection in their organisation has increased over the last year. These figures indicate that organisations are still learning how to manage Big Data.
  • Many companies struggle with the most basic aspects of data management, such as cleaning, verifying or reconciling data across the organisation. Nearly one-third of respondents admit their data governance practices are insufficient, and many struggle to deliver important data to the right people within an acceptable timeframe. Deep workforce skills are also needed to sift through, analyse and develop insights from Big Data: the lack of the right skills to manage data effectively is among the top two challenges cited by CIOs, and the most difficult process right now is reconciling disparate data sources.
  • Companies that are furthest along the data management competency continuum provide a useful model for how organisations will need to evolve if they are to extract and utilise valuable data-driven insights. Strategic data managers use data to first identify specific measurements and data points that align closely with corporate strategic goals. They select the most appropriate data to make decisions, and put a high percentage of the data they collect to use. They are also more likely to assign a C-level executive to manage data strategy, and they continue to explore emerging sources of data for potential value.

Of course, Big Data will be but one of the tools that companies use to inform decisions. But if companies are going to harness Big Data effectively, they need to start with the end in mind and keep a razor-sharp focus on outcomes. It also requires some unique people skills in terms of data management and the ability to harness data as a critical factor of production, on a par with land, labour and capital.

To understand how we tackle the issue of Big Data in the world of Microsoft go to the Inside Info website by clicking  here and download the Big Data Info Sheet.

Uncover Hidden Container Objects on QlikView Reports

QlikView is very much a dynamic tool that lends itself to ad-hoc queries by those using it.  However, there is still a need for well-formatted reports that can be distributed to viewers who don't have access to the document through the QlikView application.  A typical example of this is a finance balance sheet that may then be bound into a consolidated PDF file.  QlikView provides for this through the use of Reports.

A QlikView report allows a designer to lay out multiple QlikView objects, such as charts and text boxes, on a paper page that can then be printed.  The objects themselves are not stored in the report; the report stores virtual links to the objects in the document.

This is great for reflecting the dynamic requests of the user, but what if an object is hidden in the document? Or what if it is in an inactive tab of a container?  In both these cases, the object will also be hidden in the report.  Possibly not what you’d want, so how do you get around this?

At design time, copy the object from the container to a separate sheet and ensure it is visible.  Hiding the sheet still allows the object to appear on the report without end users seeing a redundant copy of it in the document.

What happens if you modify the original object in the container? If the copy outside the container is not a linked copy, it won't be modified. If, however, it is a linked copy, it will be modified, but it won't be visible on a report if the container hides the original object.  A bit of a trade-off, I know, but a way around this is to:

  1. Use a macro to unlink the two objects
  2. Print the report that refers to the unlinked version
  3. Relink the objects so any future modifications happen to both versions

Linked objects simply share the same ObjectID. Unlinking them only requires changing one of the ObjectIDs, and relinking them requires changing it back.  Note that the report must always refer to the ObjectID of the unlinked object to ensure it is never hidden.