Apr 30 15

A Universal Example

by Dave Davis

[Cross-posted from blog.davemdavis.net]

In my last post, What Does It Mean to Be Universal, I talked about Microsoft's new Universal Application Platform (the name may be changing to Universal Windows Platform), coming to Windows 10. This new application platform allows you to build one application and run it on many different devices. In that post, I told you how Microsoft was doing that. In this post, I want to walk through an example of what a single app running everywhere might look like.

Scenario


Let's say you were building an application for a manufacturing company. They want to deploy IoT sensors running Windows 10, which would gather telemetry data from the machines on the factory floor. This data would be transmitted to the cloud, where it would be analyzed and reported on through machine learning (that's out of scope for this post). The data can then be accessed on Windows PCs, tablets, and phones. They also want to take advantage of augmented reality technology to give their plant manager real-time access to the data while walking around the plant.

The Old

In the past, you would probably have created a solution that looked something like this: multiple projects, one for each head you want to support, plus some libraries for code you want to share across the heads. After compiling, you would end up with a separate application for each head. Although not too complex, this structure invites major code duplication, especially in the interface and in platform-specific code (where the APIs differ). There was less opportunity for reuse.

The New

Under the new system, you can create one project, compile it into one application, and have it run on all the devices. The APIs are pretty much the same. When they do differ for a device, Microsoft has a way for that code to still live in the same project (see my last post for a sample). There may be times when you want to separate out code, for instance code that can be reused in projects that are not part of the new platform; server code comes to mind. That should still be possible. I say "should" because we don't know for sure. We will have to see what comes out at Build next week, but hopefully we will have more clarity on this.
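
For context, here is a minimal sketch of the kind of adaptive code Microsoft has shown, using the ApiInformation runtime checks from the Windows 10 preview SDK. The hardware back button is the commonly cited phone-only example; it assumes a reference to the mobile extension SDK.

    using Windows.Foundation.Metadata;
    using Windows.Phone.UI.Input; // from the mobile extension SDK

    public static class BackButtonSetup
    {
        public static void WireUpIfPresent()
        {
            // One binary for every device: probe for the API at runtime
            // instead of compiling a separate phone-only project.
            if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
            {
                HardwareButtons.BackPressed += (s, e) => { e.Handled = true; };
            }
        }
    }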

Thoughts

So, one project or many projects? This universal platform is going to make us rethink how we architect our solutions going forward. Does it make sense to have everything in one project, or do you want to keep things separated? I can definitely see creating utility projects that get reused between different solutions, but does it make sense to have a separate DLL for Services or Models? You were previously able to replace those components just by swapping out DLLs, but now everything gets wrapped up into an .appx package that is signed. Swapping out DLLs would invalidate the package, so a new build is needed. I'm interested in hearing what people have to say about the new universal platform.

Apr 14 15

Layout Awareness in Windows 10 UAP

by Dave Davis

[Cross-posted from blog.davemdavis.net]

There have been lots of changes to Windows over the past few years. A while back, Microsoft radically changed the way Windows worked with Windows 8. The goal was to start converging the different operating systems into a single core. Windows 10 is the culmination of that convergence, which started out as "Three Screens and a Cloud" and now includes many more screens.


The road to convergence is a different story. Here, I want to talk about the user experience. While all this was going on, there was a revolution in the experiences that users expected. No longer are battleship-gray user interfaces acceptable, and neither are shrunk-down versions of those interfaces on mobile devices. As a developer, you need to design and build your applications for the form factors you are targeting. Here is where we pick up that journey on the Microsoft stack.

The Past

In Windows 8, there were two environments: a desktop environment and a tablet environment. With the tablet environment came a new runtime and API surface, WinRT. Those tablet apps, sometimes referred to as "Metro" apps, could be running in one of four states: Filled, Full, Snapped, and Portrait. There was a neat little enum, ApplicationViewState, that helped support these states. One of the mantras was good design: your app should look good and respond well/do the right thing in any of those states. To assist developers, the Visual Studio template for those projects included a LayoutAwarePage that assisted in handling the transition between states. I wrote a blog post about it: What is This 'LayoutAwarePage' You Speak Of.
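
From memory, the heart of that page was a virtual method that mapped the enum to named visual states (a sketch of the Windows 8-era pattern; the template fed the result to VisualStateManager.GoToState):

    using Windows.UI.ViewManagement;

    public partial class ItemsPage // derived from LayoutAwarePage in the templates
    {
        // Called whenever the view state changed; the returned name selected
        // a visual state defined in the page's XAML.
        protected string DetermineVisualState(ApplicationViewState viewState)
        {
            switch (viewState)
            {
                case ApplicationViewState.Snapped: return "Snapped";
                case ApplicationViewState.Filled: return "Filled";
                case ApplicationViewState.FullScreenPortrait: return "FullScreenPortrait";
                default: return "FullScreenLandscape";
            }
        }
    }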

Along comes Windows 8.1. That enum goes away, and so does your LayoutAwarePage. In 8.1, apps could be resized horizontally, independent of those defined states. There was another wrench thrown into the mix: the phone. With this release, WinRT apps could be developed for the phone, and the templates changed to include a new Universal App template. There was guidance from Microsoft that you should target both tablet and phone, but no tooling built into the framework to help. At Build 2014, there was a pretty good session on how to target screens of any size, From 4 to 40 inches: Developing Windows Applications across Multiple Form Factors. During this session, Peter Torr showed the science of viewing items on different-sized screens. He also showed a potential solution to the lack of built-in tooling. Since I wrote about the LayoutAwarePage in 8, I figured I had better write a post about where layout went in 8.1: What Happened to My LayoutAwarePage?.
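
The gist of that 8.1-era workaround, sketched from memory (the width thresholds and state names here are illustrative, not Microsoft's): handle SizeChanged yourself and pick a visual state from the window width.

    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Controls;

    public sealed partial class HubPage : Page
    {
        public HubPage()
        {
            InitializeComponent();
            SizeChanged += (s, e) =>
            {
                // Choose a visual state from the actual page width.
                string state = e.NewSize.Width < 500 ? "Narrow"
                             : e.NewSize.Width < 800 ? "Medium"
                             : "Wide";
                VisualStateManager.GoToState(this, state, useTransitions: true);
            };
        }
    }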

The Now

In Windows 10, Microsoft wants to target a larger array of devices, including some that don't even have screens. There is a plethora of platforms your apps can run on. The convergence allows you to write code that runs on all of these devices, but does that mean you have to write a separate UI for each of them?

The answer is no. Taking a page from web design, the new guidance is to build adaptive interfaces. Keep in mind that the experience should be tailored for each specific form factor. Recently, a preview of the SDK for Windows 10 was released along with a Microsoft Virtual Academy training course. Module 9 caught my attention because it talks about building adaptive UI, and it looks very similar to the solution that Peter Torr came up with for 8.1. In Windows 10, you can truly build one app (one app package) that will run on any Windows 10 device. There is some neat magic behind the scenes that allows this to happen, and if you are interested in that, you should watch the other modules in the course. To support writing a single XAML file that runs across devices, Microsoft has revamped the Visual State Manager to assist with building adaptive UI. I highly recommend that you take a look at that module if you are interested in the story.

I am sure that more details will come during Build 2015 (April 28-May 1), so keep an eye out for what's coming down the pike. Keep in mind that everything is in preview right now, so things may change between now and release, but this looks promising.

Mar 23 15

The Apple Watch – designed to be wanted

by James Horgan

There has been a lot of press recently about the Apple Watch, with divided opinions on whether it will be a success and with the biggest question, as yet, remaining unanswered: why do I need it? In reality, that is a question we have repeatedly asked ourselves about the majority of Apple products.

Let's remind ourselves of where design innovation started for Apple – the iMac in 1998.

[Image: the 1998 iMac advertisement]

This iMac served a few purposes:

  • Remind consumers Apple is back
  • Be incredibly distinct in a crowded marketplace of beige and grey computers (note how that is very specifically called out in the advertisement above)
  • Change user perceptions of a computer as a ‘work only’ device.

The goal of this iMac was strategic – shock a marketplace filled with dreary solutions with something fresh and forward-looking.

However, the only thing that had changed was the hardware – the surface. The desirability of a well-performing product outweighed the 'need' for a higher-priced computer. Similar conclusions could be drawn about the first iBook and PowerBook.

In 2000, Apple had another game changer in its new OS, OS X – a beautiful, truly graphical user interface that had some useful features attached but attracted users through the desirability of experiencing it rather than any 'need' for it. iTunes within OS X set the stage for the iPod in 2001.

[Image: the OS X interface]

The iPod matched 'I want it' with 'I need it': Apple's strategy was to drive down the cost per song by allowing users to store more on their device:

[Image: iPod price-per-song comparison]

And yes, that was the font Apple was using in 2001!

[Image: the original iPod]

This consumer rationale, coupled with an attractive and NEW way of interacting with a product, created a major sweet spot for Apple – form meets function to create the killer invention and transform an industry.

This happened once again in 2007 with the introduction of the iPhone to a market saturated with grey, business-like models. Apple created demand through sheer 'I NEED THAT' pitching of the product.

[Image: smartphone models of 2007]

Remember where the smartphone market was in 2007, before the iPhone – most people were not using one, most didn't think they needed one, and the existing products were squarely aimed at business users. Most folks even balked at the price and waited patiently until they could afford one. But the iPhone fundamentally changed the way we experience email, calling and, to some extent, texting. The first iPhone was also a lot more loudly designed than the current one – a way to shock awareness of it into an overstuffed market.

[Images: the original iPhone and the iPhone 6]

The iPad announcement in 2010 was crucial in its style – sitting in an armchair on stage, Steve Jobs browsed news, images and videos to show how the iPad is both a downtime and a work-effectiveness tool for the person on the go. By presenting the lifestyle of an executive enjoying a high-end product, the iPad sold in droves, mainly to an older market with disposable income hoping to emulate the Apple brand.

[Image: Steve Jobs presenting the iPad from an armchair]

So how does the Watch fit within this strategy? The presentation is a little trickier, with no real envisioning of how you would use it. There is no 'lifestyle' pitch associated with the Watch, making it harder for people to imagine 'needing' one on a day-to-day basis. The Watch needing to be tethered to a phone is also an issue, as is battery life. I can make calls, sure; go for a run, great; but I have to carry my phone to get accurate readings? That's an issue.

So why would anyone buy one? Several reasons:

[Image: the Apple Watch]

1. It is gorgeously designed. Not many folks will need one, but everyone WANTS to experience one. Watch envy will be the new paradigm. In the '50s and '60s, industrial design as a field of expertise was born out of the need to ensure consumers would buy more and more products. Because products lasted so long, industrial designers used a technique called 'built-in obsolescence' to ensure consumers would always buy the latest model. This is certainly true of Apple's product strategy – step 1: get people noticing; step 2: get them buying; step 3: keep them buying by updating the design and engineering for each iteration.

2. Those comments – why would I need this? It's too expensive? The battery life? Haven't these been the comments about every new product launch Apple has done?

3. The Glances – this is key. A lot of people have commented that they have not worn a watch in years, since they refer to their iPhone for the time. But now you don't have to take the phone out of your pocket anymore, or go unlock > weather app > look at temperature just to see what the weather is like, or to know the stock prices or news headlines. The Glances are paramount in returning user behavior to simple, natural gestures.

4. Choosing Christy Turlington, though a little dated as a reference, is a smart choice because she can project the lifestyle – the cross between luxury, fitness and family.

[Image: Christy Turlington in Apple Watch marketing]

One disconnect in the imagery is that of someone running through Africa with an expensive watch! Beautiful and expensive products are very much counter to the conscientiousness of the digitally savvy and 1%-averse millennial. This is why the lower-priced models will be a success but the Edition models will be a short fad – a gauche display of wealth is not a current consumer trend.

5. The digital crown could be perceived as unnecessary, but it’s a cool way to explore a new technology with an old metaphor. Think how quaint the iPod wheel looks now.

6. The haptic (touch) feedback is HUGE – bigger than you think. You can now communicate with a person through touch, remotely. Think about that. It's like tapping someone on the shoulder to say hi without being in the same room. It opens up a new paradigm in experiential design – imagine a watch that taps you when you need to speed up or slow down on your run, or a watch that helps the visually impaired navigate through a city.

7. The ability to know when to speed up or change direction, and to communicate with others using an almost Morse code-like technique, but in a highly personal way. I don't think that concept is fully formed yet, but the idea of a touch-feedback interface opens up a new area in user experience design.

8. Above all, its modular design allows Apple to span the whole watch market, from sports to the higher end, without alienating customer segments (though the Edition may cross over that line). Remember, this was essentially the Swatch strategy in the '80s.

[Image: Swatch watches from the 1980s]

Here's the thing: the iPhone essentially replaced the need for a traditional watch. Now the Watch is looking to reopen that long-forgotten market, and that's probably why it's tethered to your phone – Apple doesn't want the Watch to cannibalize the iPhone market, in the same way the iPhone 6, with its larger screen, is now taking market share from the iPad.

The one hurdle the Watch has is that it does not obviously eliminate or replace an activity to make our lives more efficient – the MacBook replaced clunky hardware, the iPod replaced carrying a CD player and needing to change CDs, the iPhone replaced photo albums, desktop email, and a host of other items. The iPad replaced print and arguably created the digital magazine market.

What the Watch could replace is the wallet. That certainly is a powerful and compelling NEED – eliminating the wallet, and the risk of losing it, is definitely a next-generation experience, and including biometrics and personal identification in the Watch is a natural next step in its evolution.

[Image: Apple Pay on the Apple Watch]

The other thing the Watch does replace is an ergonomic one – never having to take your phone out of your pocket for minor distractions. We will find out how user adoption of this new product informs further iterations of the Watch.

The Watch needs the iPhone to work, and that is a problem. If I still need my iPhone to go for a run, play music, make calls, scan a boarding pass or access a hotel room, then the Watch has yet to replace anything, and this could be the Achilles' heel in Apple's strategy. Expect that tether to be cut in future versions. But remember our original reaction to the iPhone – the battery was terrible, the 2G network was a joke, but our desire to try the product got us over those objections. It's that overriding desirability for the Watch that will see more beneficial generations of this product to come.

Mar 22 15

The Well-Tempered AngularJS Web Part

by Bob German
[Image: a page from Bach's Fugue in Ab]

Like notes on a piano, web parts (or any kind of web widgets) are combined in new and unexpected ways on a page. Yet often they don’t play well together. Seemingly every example of an Angular Web Part posted on the web assumes it’s the only thing using Angular on the page. A second instance of the web part, or another web part that uses Angular, and they will clash in unpredictable ways. And what if an Angular master page comes along, or Microsoft decides to use Angular in a future version of SharePoint? The result will be a cacophony of script errors.

This might not be a problem in a SharePoint App where each web part runs on its own page in an IFrame, but it can cause real dissonance if web parts are running directly on a web page. This can happen in a Content Editor or Script Editor web part using Remote Provisioning, or a Visual Web Part in a farm or sandboxed solution.

There’s an easy solution to all this, and that is to start writing “well-tempered” web parts. About a page of well-composed JavaScript can mean the difference between solutions that work if you’re lucky and solutions that just work. This article dives into the details and includes a complete code listing, along with musical accompaniment. Please check it out, or send your developers.

Thanks!

Mar 16 15

Branding for Non-Designers (part 1)

by John Soares
As part of a continuing series, we're going to take a look at the identity process – starting through the lens of our own re-branding, moving from this:

[Image: the previous BlueMetal mark, which sported a binary code we'll discuss in a later post]

to this:

[Image: the final mark in its vertical lockup]

We'll begin by talking about color. In identity work this normally comes in the middle of the process, but it plays a critical and, in this case, conceptually central role. For ourselves, we had arrived at an agreed-upon version of the logo lockup (we'll discuss that in a forthcoming post) in pure black:

[Image: the agreed-upon mark abstracted from color, as pure form]

Why work this way? To separate decisions of color from those of form. To be sure, these issues inform one another, but generally this mitigates risk by compartmentalizing choices in an often contentious process and promoting directed, clear focus. We are clearly going to integrate blue, but what blue? And in what combinations?
It's frequently helpful to survey the landscape, which can help identify major players (and competitors) who have brand equity in the space we intend to occupy. It also gives us a first hint of the range of qualities available from a single color:

[Image: competitors' marks in blue – from overly subtle to overly weighty – arranged from top-left to bottom-right in terms of color saturation and value]

Clearly there is a variety of tones and moods – the extremes lack weight and impact on one end and tend toward too heavy on the other. We also look to expressions of blue from a range of mediums:

[Image: Degas, Van Gogh, Rothko, Nintendo – again, the range of possibilities by way of medium]

One of the larger goals of identity work is evoking an elusive, emotive and aspirational response. Researching interior design, industrial design, fashion and the fine arts all helps us home in on specific moods through more abstract means. We also get a sense of the mutability of color by way of medium – the cobalt-colored glasswork of Dale Chihuly (bottom right) is a particular inspiration. The quality of his color is highly dependent on his materials, but its vibrance and intensity were an early signpost for the kind of essence we wanted to capture.
It is this research that leads us to Yves Klein.

[Image: Yves Klein, circa 1961]

Klein was a pioneer of the French New Realism movement, as well as a leader in performance art, minimalism and Pop art. He famously painted monochromes (works in a single hue) following World War II but, frustrated with the misunderstanding with which his work was received, moved to focus on a single primary color: blue.

Frustrated with problems of lightfastness and sustainable intensity, Klein struggled to find not simply a blue but the blue – one which contained the vibrance of the color idealized, and which could simultaneously maintain its quality over time and exposure. After years of research, he found his solution in 1956. A combination of a personally developed chemical binding solution and a brilliant ultramarine pigment resulted in "the most perfect expression of blue," a saturated, gently vibrant color whose effect on the eye was not unlike a double exposure. He sometimes referred to the effect as "a sensitized image," "poetic energy," or "pure energy." His Blue Epoch followed, with applications on canvas, furniture, sculpture and eventually live performance. His process resulted in a color unofficially patented as IKB, or International Klein Blue.

[Image: from "Anthropometries of the Blue Epoch," Paris – Klein painted models, who then acted as living brushes on canvas]

We use this as a starting point. It aligns with our brand notions of energy, dynamism and, most especially, velocity. Of course, the specificity of the color chemistry means that we approximate this color for reproduction on screen and on different paper stocks, but the important conceptual link is there, to combine with the storytelling of the mark itself.
From here we arrive at a single-color version of the mark, and then branch out to analogous colors in sequence (again reinforcing our mark's narrative of transformation). We experiment with placement of the central blue in relation to the secondary colors, as well as with variations on the color treatment of the logotype:

[Images: color/sequence variations – these were exploratory]

We can see how the presence of surrounding colors impacts the perception of the original tone. Additionally, using Klein blue on the far right pushes us into a place where the leftmost color becomes, by necessity, too light. In our case, the mark speaks to process and methodology – each color should have the resonance to stand alone, conveying presence and impact. We arrive ultimately at IKB in the center, with coordinating tones in the wordmark set against the opposing panes.

[Image: the finalized mark in its vertical lockup]

The result is dynamic, engaging and rich. Its simplicity belies its underlying story, but that story is one we can carry forward to our work and clients in a unified, coherent message.
Mar 16 15

Shirts Incoming!

by John Soares

[Image: the new shirts]

Feb 1 15

New Guidance from Microsoft for Packaging and Deploying SharePoint Solutions

by Bob German

Microsoft is cleaning house. Now that it has to maintain SharePoint for thousands of enterprises and millions of users in Office 365, Microsoft is working to clean up all the odd and messy bits of its flagship collaboration product. In a recent training course on Microsoft Virtual Academy, Microsoft urged developers to change the way they package and deploy their code in order to clean up a mess that has been building since 2003.

In this case the problem doesn’t really affect the customizations themselves (though most existing customizations are not cloud-ready); instead, the change is with the way custom solutions are installed into SharePoint and deployed in SharePoint sites. Instructors Vesa Juvonen and Steve Walker were careful to say they aren’t deprecating anything (at least not now) – but they admitted to some design shortcomings in SharePoint’s Feature framework and encouraged everyone to adopt a different approach.

The new approach eliminates a lot of problems that affect SharePoint upgrades and migration, and that can introduce quirky behavior and broken content if everything isn’t done perfectly. That’s the good news. The other news is that where the tools for the old approach are mature and familiar to SharePoint developers, there is virtually no tooling for the new one, just a collection of code samples at this point. So adopting the new model will be more costly until better tools are available.

This article will summarize the changes and analyze their impact on SharePoint developers and customers.

The Big Change

In technical terms, Microsoft is recommending that developers stop using SharePoint’s Feature framework and list, web, and site templates in their solutions. The Feature framework was added in SharePoint 2007, and allows site administrators to activate “Features” that provision content such as site columns, content types, lists, files, web part definitions, and all sorts of other things in SharePoint. List, web, and site templates are similar, except that a whole list or site is created. All of this is enabled by an arcane set of XML schemas called CAML, or Collaborative Application Markup Language. Now, instead of defining SharePoint content in CAML, Microsoft wants everyone to start creating content programmatically using a pattern called remote provisioning.

Let's face it, Features and Templates are flaky. Activate a feature and things "light up" in SharePoint; that's the cool part. However, when you deactivate a feature, the content it created might persist, go away, or just break. Versioning and updates are a black art. If an admin forgets to deactivate a feature before uninstalling the code that supported it (and it might have been activated in thousands of sites), the feature is "orphaned," resulting in errors and upgrade headaches. And perhaps you've noticed that if you create a site from a template and then change the template, the site doesn't pick up the change. Over time all these problems add up, and users just blame SharePoint.

Microsoft has seen the error of their ways and wants developers to stop using CAML-based deployment and instead use a pattern called "remote provisioning," in which a remote process is used to create SharePoint content ranging from sites to site columns. Actually, this pattern isn't new; it's been available as long as there have been remote APIs to create content; it's just that all the tooling and MSDN documentation pointed toward using Features and Templates instead. Here are some examples of remote provisioning (a minimal console-app sketch follows the list):

  • .NET code running in a Provider Hosted App using a client API (CSOM or REST) to create content in SharePoint. The Patterns and Practices team chose this for their large collection of samples.
  • .NET code running in a console application using a client API (CSOM or REST)
  • Client-side calls made from PowerShell (here is a Codeplex project that may help)
  • Client-side calls (REST or JSOM) made from JavaScript in a SharePoint Hosted App
  • The Mechanical Turk approach: a person manually creates content using a web browser
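
To make the console-app flavor concrete, here is a minimal CSOM sketch (assuming the CSOM assemblies are referenced and authentication is handled; the site URL, field XML, and list name are illustrative):

    using Microsoft.SharePoint.Client;

    class Provisioner
    {
        static void Main()
        {
            // Remote provisioning: create content over the client API
            // instead of packaging it in a CAML Feature.
            using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/demo"))
            {
                Web web = ctx.Web;

                // Provision a site column programmatically.
                web.Fields.AddFieldAsXml(
                    "<Field Type='Text' DisplayName='Machine ID' Name='MachineId' />",
                    true, AddFieldOptions.DefaultValue);

                // Provision a list the same way.
                web.Lists.Add(new ListCreationInformation
                {
                    Title = "Telemetry",
                    TemplateType = (int)ListTemplateType.GenericList
                });

                ctx.ExecuteQuery();
            }
        }
    }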

The remote provisioning advice has been coming from Microsoft since last summer, but the Virtual Academy training is by far the strongest in telling developers to stop using features and templates. The main focus of the course was on transitioning from full-trust “farm” solutions to cloud-ready approaches based on the “app model.” The instructors played fast and loose with the term “app model”, extending it to mean nearly any approach that runs code outside of SharePoint and avoids the Features and Template packaging. Developers would be well advised to watch the course in its entirety, and to dig into the Patterns and Practices wiki and Yammer group. The training includes many live demos and code walk-throughs on Remote Provisioning and the reasoning behind the changes.

Farm and Sandboxed Solution Roadmap Clarified

Existing SharePoint customers may be comforted that Microsoft reiterated its plans to continue to support farm solutions for the foreseeable future, but only on premises. The instructors offered detailed advice on developing farm solutions in order to avoid the problems with Features and Templates:

  • Provision content types and site columns in code rather than using Features. The big problem here is that when a content type or site column is created by a farm solution Feature, the definition is stored directly on web servers instead of in the content database. Thus if the feature is removed, or the content is connected to a farm that doesn't have exactly the same solution and feature installed, all lists and libraries using the content types and site columns will break.
  • Avoid list templates. This is awkward advice because Microsoft introduced a new list template designer in Visual Studio 2013; clearly this change in direction is a very recent one. The problem with list templates is that they depend on a file called schema.xml, which is stored on web servers; if the solution is removed, all lists based on the templates will stop working. Instead of using list templates, build out the list in code running in a feature receiver (see the sketch after this list) or use remote provisioning.
  • Avoid custom field types. This has been the advice for a while now; it's too bad, because custom field types were really cool (they allow you to create a new type of content in SharePoint).
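
For the feature-receiver route, the list-creation code might look something like this (a sketch using the server object model; the list and field names are illustrative):

    using System;
    using Microsoft.SharePoint;

    public class TelemetryListReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            // Build the list in code rather than shipping a schema.xml
            // that the content database will depend on forever.
            var web = properties.Feature.Parent as SPWeb;
            if (web != null && web.Lists.TryGetList("Telemetry") == null)
            {
                Guid listId = web.Lists.Add("Telemetry", "Machine telemetry",
                    SPListTemplateType.GenericList);
                SPList list = web.Lists[listId];
                list.Fields.Add("MachineId", SPFieldType.Text, false);
                list.Update();
            }
        }
    }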

What Microsoft is trying to do is remove problems in which SharePoint content is invalidated when it gets out of sync with a particular set of solutions installed on a SharePoint farm. These problems make it difficult or impossible to upgrade SharePoint, and lead to big challenges with disaster recovery, when content is restored to a new SharePoint farm from backup or in a DR replication scenario.

How many versions of SharePoint do you run?

When I speak at conferences I often ask audiences to raise their hands if they’re using more than one version of SharePoint; invariably the majority of hands go up. The reason is always the same: there is some kind of customization or ISV product that won’t survive the upgrade. The most conspicuous example of this was the Microsoft “Fab 40” web site templates for SharePoint 2007, some of which would not upgrade to SharePoint 2010; some customers still maintain a SharePoint 2007 farm just to run them. If Microsoft couldn’t get it right, what about the rest of us?

Maintaining more than one version of SharePoint is very expensive for enterprises; the extra SharePoint farms require extra hardware and a lot of extra maintenance work, much of it arcane knowledge of old and outdated technology. The worst part is that end users are constantly switching between versions making for an inconsistent user experience.

The vision is for SharePoint content to be self-contained and independent of custom and version-specific code that may be installed. Thus, a content database could be connected to a new SharePoint farm – even a new version of SharePoint – and it would just work. If Microsoft had figured this out ten years ago, you’d probably only be running one version of SharePoint right now.

These changes are a mea culpa from Microsoft; they’re admitting that it was too hard and they want to move to something better. But it’s painful for developers, who have spent years learning how to use Features and Templates, and who enjoy excellent tooling in Visual Studio. Switching to Remote Provisioning is a big step backward in productivity. Just remember that however painful it is to change the way we package and deploy our customizations, the goal is to ease a perennial pain with upgrading SharePoint.

The future of sandboxed solutions, however, is extremely doubtful at this point. You may recall that sandboxed solutions were officially deprecated in SharePoint 2013, but then Microsoft recanted and said that only the ability to run custom server code would be discontinued. In the class, one of the top recommendations was to avoid sandboxed solutions – not only the custom server code, but sandboxed solutions of any kind. The instructors pointed out problems with orphaned options that are left behind when sandboxed solution artifacts are retracted.

This is a little awkward, because Microsoft has been using sandboxed solutions in support of newly introduced features such as the Design Manager, a branding tool introduced in SharePoint 2013. Steve Walker took a hard line nonetheless,  and hinted that the sandbox would eventually be shut down once and for all. (Skip to 43:40 in the second video to hear it directly.)

Branding Guidance

During the Virtual Academy class, Microsoft provided quite a bit of branding guidance. With the exception of the new Office 365 themes, there wasn’t a lot new here, but the advice bears repeating because it once again relates to issues with SharePoint upgrades.

The traditional way to brand a SharePoint site is to change its master page, but master page changes generally do not survive SharePoint upgrades. This isn’t news; Microsoft changes the look and feel in every version of SharePoint, and master pages have needed a rewrite every time. (In many cases the old master page still works, but hides all the added functionality in the new version of SharePoint).

The problem is worse in Office 365, since new versions arrive more frequently. Microsoft has already changed the master page three times since 2013; if you had written a new master page, you would have missed the improved navigation and the Office 365 app launcher.

The advice is to use as light a touch as possible. Here are the options from lightest (and least flexible) to heaviest (the very flexible master page):

  1. Consider not branding your site. “You do not brand Outlook or Word, why do you need to do branding on collaboration sites?”
  2. Use Office 365 Themes. Changing the theme in one place will change it on every SharePoint site as well as in other Office 365 products such as Outlook Web Access and Delve. You can include a logo, URL for clicking the logo, background color, and colors for an Office 365 theme.
  3. SharePoint Themes. These affect only one SharePoint site, so they need to be changed in every site. This could be automated through a PowerShell script or custom code. You may find the SharePoint Color Palette Tool helpful in creating SharePoint themes.
  4. Alternate CSS. With this strategy, a developer builds a custom style sheet that is added to every page in SharePoint. Using this technique you can change colors and fonts, and move things around on the page. Microsoft began allowing the alternate CSS to be set using the client API (CSOM) in March 2014 online, and in the April 2014 CU for SharePoint 2013 (a short CSOM sketch appears after this list). The Patterns and Practices group is working with the SharePoint engineering team to lock down some consistent element IDs and classes that will not change across new versions of SharePoint, so an alternate CSS file won't break as SharePoint is upgraded.
  5. Custom Master Page. This allows major changes such as introducing responsive design or making the site "not look like SharePoint." However, there is an ongoing need to tweak or rewrite the master page as SharePoint upgrades occur. This is especially problematic when master pages are installed into individual site collections, which is the only option in Office 365. If the master page is in an on-premises farm solution, it can be updated centrally, but if it's placed into each site's content then every site collection needs to be updated when changes occur.
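
Setting the alternate CSS remotely is nearly a one-liner once you have a client context; a minimal sketch (the URLs are illustrative):

    using Microsoft.SharePoint.Client;

    static void SetAlternateCss()
    {
        using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/demo"))
        {
            // Point every page in the site at a custom style sheet
            // instead of swapping out the master page.
            ctx.Web.AlternateCssUrl = "/sites/demo/SiteAssets/contoso.css";
            ctx.Web.Update();
            ctx.ExecuteQuery();
        }
    }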

Microsoft was clear that custom master pages are still supported. They called them a “tax” however; the responsibility of keeping the master page in sync with SharePoint belongs to the customer, not to Microsoft. In Office 365 where changes are ongoing and master pages are distributed, this will be an ongoing maintenance cost for the customer.

Relationship to the SharePoint and Office 365 App Models

The instructors used the term "app model" to mean nearly any approach that runs code outside of SharePoint and avoids the Features and Template packaging; however, there is an important distinction between the App models and solutions that reside within a SharePoint site!

Apps run alongside sites – they're isolated because the code comes from a store and was written by who knows whom, so it's less trusted. All three app models (SharePoint Hosted Apps, Provider Hosted Apps, and Office 365 Apps) provide this isolation. The isolation can be limiting; it makes many scenarios, such as SharePoint branding, impossible to run in an App, and it means that App Parts run within IFrames, which bring their own set of challenges.

The twist here is that rather than running the customizations in an app, the app becomes an installer that places the customization into a SharePoint site where it can run unfettered. So rather than presenting an App part, an app could upload a .webpart file to the site’s web part gallery, and install JavaScript and other files that make the web part run directly within the SharePoint site.

This idea of using an app as an installer has actually been around for years; I used to call it "content injection," but now they call it "remote provisioning." You probably won't see this kind of app in the Office Store, as it requires too much permission; it's a pattern to use within an enterprise. Just keep in mind that you don't need to use an app to do the installation; it could be a PowerShell script, a console app running in an Azure web job or a Windows scheduled task, or really anything that runs remotely and provisions sites and content in SharePoint.
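
The web part step of such an installer might look roughly like this (a sketch; the .webpart file name and site path are illustrative, and the file is assumed to ship with the installer):

    using System.IO;
    using Microsoft.SharePoint.Client;

    static void InstallWebPart(ClientContext ctx)
    {
        // Upload a .webpart definition into the site collection's web part
        // gallery so the web part runs directly in the site, not in an IFrame.
        Folder gallery = ctx.Web.GetFolderByServerRelativeUrl("/sites/demo/_catalogs/wp");
        gallery.Files.Add(new FileCreationInformation
        {
            Url = "ContosoChart.webpart",
            Content = File.ReadAllBytes("ContosoChart.webpart"),
            Overwrite = true
        });
        ctx.ExecuteQuery();
    }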

Don’t Panic

No, SharePoint is not dead, Apps are not dead, and the Earth will continue to spin on its axis for the foreseeable future.

If you build or use custom SharePoint solutions, you don’t have to change anything right now. The top-line advice from Microsoft was to move gradually to the app model and remote provisioning. But you should pay attention because the existing way of deploying content and customizations is really problematic, especially when SharePoint upgrades occur.

It’s probably OK to continue to use the old methods on existing projects; they could be converted later and the tooling is likely to improve over the next couple of years. Right now remote provisioning requires extra development work compared with the Feature framework, mainly due to the excellent tools in Visual Studio for building Features. So there’s a tradeoff between doing the extra work now and waiting for better tools to arrive. In any case, you should be aware of the new approach and try to favor it in any new customization projects.

(cross-posted to Bob German’s Vantage Point Blog)

Jan 23 15

Virtual Reality vs. Augmented Reality vs. Holograms

by Dave Davis

[Cross-posted from blog.davemdavis.net]

On January 21, 2015, Microsoft announced that the science fiction of holograms has become science fact. They announced a new product, based on Windows 10, called HoloLens – the first self-contained, wearable computer that can create holograms. This announcement has generated a buzz. If you haven't seen the video Microsoft put out, take a minute, follow the link above and watch it. I'll wait… You're back. Were you blown away? I was. My mind immediately raced to the problems I could solve if this truly pans out. More on that in a bit.

Virtual Reality

“Virtual Reality (VR), sometimes referred to as immersive multimedia, is a computer-simulated environment that can simulate physical presence in places in the real world or imagined worlds. Virtual reality can recreate sensory experiences, which include virtual taste, sight, smell, sound, touch, etc.” Wikipedia

When Microsoft announced HoloLens, some people mistakenly called it "virtual reality." Although Microsoft showed immersive experiences, the fact that you can still see the world around you precludes it from being virtual reality. A prime example of virtual reality is the Oculus Rift.

Augmented Reality

“Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics or GPS data.” Wikipedia

HoloLens is really augmented reality plus much more; I will explain what I mean in a bit. Augmented reality is not really new. There are phone apps, such as Yelp, that use the phone's camera to display the world around you while superimposing restaurant information based on the direction the phone is pointed. There is also the translator app for Windows Phone that superimposes translated text over written text, allowing you to switch between languages.

Another recent example is Google Glass (though the program has been suspended). Google Glass is a pair of glasses that puts a heads-up display on the lens, providing information to the wearer. That information sits in a static location no matter which direction the user is facing.

Hologram

“Holography is a technique which enables three-dimensional images (holograms) to be made. It involves the use of a laser, interference, diffraction, light intensity recording and suitable illumination of the recording. The image changes as the position and orientation of the viewing system changes in exactly the same way as if the object were still present, thus making the image appear three-dimensional.” Wikipedia

The HoloLens can create realistic three-dimensional images and place those images in the world around you. So I would say that HoloLens is a combination of all three concepts. Although it may not be truly creating holograms, they seem real enough to the wearer.

HoloLens

To be clear, I have not had an opportunity to try HoloLens. Although I was not one of the chosen few who got to attend the event, the reactions from those who did get to try the canned demos were overwhelmingly positive. Until I get to try it, I can only rely on what they have said, and I am excited about the possibilities this opens up. I do have some questions as well.

First, is Microsoft targeting consumers, enterprises, or both? The thing that will really determine that is price. When the Xbox One first came out, it was $499 and adoption was slow. When Microsoft dropped the price to $350 this past holiday season, they sold like hotcakes. Granted, there is no direct competitor to HoloLens (as of yet), but if they price it too high, it may be just out of reach for the average consumer.

The next question I have is about the form factor itself. If this is intended to be worn for long periods of time, it needs to be comfortable. Google Glass was a pair of glasses, so it was easy to wear for long periods. HoloLens will have far more functionality than Google Glass, and all that functionality requires some pretty heavy computing power. Microsoft has packed all of that computing power into a self-contained device – or "donut," as my coworker likes to call it. Is v1 going to be too big or too bulky? And with all that computing power, what is the battery life going to be?

Finally, can Microsoft truly deliver on the experience they showed in the videos? That will be the true test of the device's success. Judging from the reaction of the reporters at the event, they are pretty close. Microsoft has gotten a lot of people excited with this announcement – a lot of people who had all but written them off. If they mess this up, they may drive those people away for good.

The Possibilities

A couple of years ago, Microsoft released a vision video that captured my imagination. It showed off a lot of "imagined" technologies – how technology will blend into the environment around you and become ingrained in everyday life. Most of the stuff they showed was not real, but with HoloLens and Surface Hub, some of those use cases are now possible.

I am excited at the possibilities this opens up. Microsoft has said that HoloLens apps are just Universal Apps with some added APIs. Hopefully, they will release an SDK during their Build conference.  If you weren’t able to get in or can’t attend, they usually make the sessions available online soon after. 

The video that Microsoft released shows all kinds of use cases for HoloLens. I have a few of my own and I am excited to see what others come up with.

Summary

In recent years, these press events have had very few surprises. Look, for instance, at the last Apple launch; nothing was announced that had not previously leaked. Microsoft did a great job keeping HoloLens a secret. There were rumors of an Xbox gaming helmet, but this is so much more. You can see pieces of these technologies in various Microsoft Research projects, and it is great to see them finally capitalizing on some of that research. Only time will tell whether HoloLens will be a success, but you have to admit that living in a time when holograms can be real is pretty cool.

Jan 14 15

Six SQL Server Resolutions for 2015

by Bill Lescher

As we embark on the 22nd year of everyone’s favorite RDBMS, I decided to create a tuple of SQL Server New Year’s resolutions.  Hopefully you can find some things in this list that ring true for you.

Test your database backups – When is the last time you successfully restored a production database backup file?  Ideally this is a regularly scheduled process.  Make sure your backups are good, and make sure you know what to do in the event of an emergency.  Do you have scripts ready to restore to a point in time if you had to?

Update your maintenance jobs – Are your databases being maintained properly?  If you haven’t looked under the hood of your database maintenance jobs lately, now is a good time to make sure your indexes, statistics and consistency checks are all squared away.  If you’re already using best-in-class scripts, like Ola Hallengren’s, double check that you have the latest version and are taking advantage of all the spectacular options available.

Implement a baseline – Do you know what your SQL Server looks like under normal conditions?  When someone complains that the system is "slow," can you tell whether something unusual is happening?  If not, it's time to start collecting some metrics.  Create a simple database and one SQL Agent job with a handful of steps to capture the basics:  CPU usage, memory usage, I/O, index usage, and top queries.  Keep an eye on the database size, and be sure to set up a purge process.
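
As a starting point, here is a sketch of the top-queries capture, shown as a small console app for illustration (the Baseline database and dbo.TopQueries table are hypothetical; in practice the same T-SQL would live directly in a SQL Agent job step):

    using System.Data.SqlClient;

    class BaselineCollector
    {
        // Snapshot the top queries by total CPU into a baseline table.
        const string Sql = @"
            INSERT INTO Baseline.dbo.TopQueries
                (capture_time, total_worker_time, execution_count, query_text)
            SELECT TOP (10)
                SYSDATETIME(),
                qs.total_worker_time,
                qs.execution_count,
                st.text
            FROM sys.dm_exec_query_stats AS qs
            CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
            ORDER BY qs.total_worker_time DESC;";

        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=Baseline;Integrated Security=true"))
            {
                conn.Open();
                new SqlCommand(Sql, conn).ExecuteNonQuery();
            }
        }
    }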

Study up on DMVs – I don’t know if there is anyone who has completely mastered the SQL Server system catalog.  I do know that there is always another gem of a diagnostic query out there just waiting for me to learn about.  My favorite authority on the subject is Glenn Berry.  His scripts are priceless.

Learn Extended Events – In a crunch it’s easiest to fall back on good old SQL Server Profiler, but you know it’s time to bite the bullet and learn how to use Extended Events.

Attend a user group meeting – If you’re not already doing so, get yourself out to a local PASS chapter meeting.  Even if you’re shy and/or well-versed in the topic being discussed, just sitting in a room with other database professionals can be inspiring.  It’s nice to be reminded that there are others out there with the same challenges you face.

There you have it.  With only 6 resolutions, you could procrastinate for 2 months on each task before you’re ready for the 2016 list.

What are your SQL Server resolutions for 2015?

Jan 8 15

Throw a life preserver to that corrupt PowerPivot model

by Gene Furibondo

Recently, I was in the throes of writing some deep, well-thought-out, frustratingly simple yet mind-numbingly complex DAX calculations. I had things just about where I wanted them and had started cleaning up my model a bit by doing some typical housekeeping (renaming, reordering, etc.). I don't know for sure if that did it, but I am pretty sure. By "it," I mean leave my model in a state where I could not modify anything, leaving me in a mix of blind rage and baby-like tears. The error message I received when trying to open PowerPivot is reproduced at the end of this post.

I've posted the entire error message below for search's sake but, long story short, my PowerPivot model was hosed. When I clicked OK on the error message, I was served a blank 'Grid' PowerPivot canvas. Because you are smart, you're thinking: try switching to 'Diagram' view and make your changes there. Good idea. I was able to view my tables in 'Diagram' view; however, I could not extract the DAX calculations, and any attempt to change anything in the model resulted in a never-ending spinning icon.

I'd given up trying to recover the PowerPivot model in its entirety. If I could just get my hands on those sweet DAX calculations I had constructed, I could easily recreate the model itself. I tried EVERYTHING. I even tried opening the Excel file in good ole Notepad and extracting what I could out of there. I thought to myself, "There is no way this is going to work." And I was right. It didn't.

I was just about ready to give up when, what to my wondering eyes should appear, but a related post about importing your PowerPivot model into an SSAS Tabular instance. Credit goes to Gerhard Brueckl for his write-up. Of course! If I could restore my broken-down PowerPivot model into a new SSAS Tabular model, I might be able to save those captive DAX calculations. I fired up the trusty VM and went to work: open SSMS, connect to your Tabular instance of SSAS, right-click on 'Databases' and select 'Restore from PowerPivot'.

It worked! I was able to restore into an SSAS Tabular model, then open that in SQL Server Data Tools, where I could retrieve all of my DAX calculations. I have yet to figure out exactly what caused the corruption, or whether there is a cleaner way of fixing it, but this worked for my purposes. I ended up recreating my model from scratch, but most of the work (writing and testing those DAX calcs) was already done.

Want another, perhaps more straightforward, option? Open SQL Server Data Tools and use the handy wizard to create a new SSAS project using the 'Import from PowerPivot' project type.

I sent this blog post around to people smarter than me for review, and one particularly bright chap wrote back with a tasty tidbit: after restoring your PowerPivot model into SSAS Tabular, you can actually convert it back to an Excel PowerPivot file, thus completing the circle of life. There isn't a wizard-style interface to do this, but this post walks through the steps pretty clearly. It's a further testament to the fact that identical technologies are employed in both PowerPivot models in Excel and Tabular models in SSAS.


============================

Error Message:

============================

 

An item with the same key has already been added.

 

============================

Call Stack:

============================

 

at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add)

at Microsoft.AnalysisServices.Common.LinguisticModeling.SynonymModel.AddSynonymCollection(Tuple`2 measure, SynonymCollection synonyms)

at Microsoft.AnalysisServices.Common.LinguisticModeling.LinguisticSchemaLoader.DeserializeSynonymModelFromSchema()

at Microsoft.AnalysisServices.Common.SandboxEditor.LoadLinguisticDesignerState()

at Microsoft.AnalysisServices.Common.SandboxEditor.set_Sandbox(DataModelingSandbox value)

at Microsoft.AnalysisServices.XLHost.Modeler.ClientWindow.RefreshClientWindow(String tableName)

 

============================