May 19 15

Three Tips to Being a Better Designer Who Codes

by Laura Cultrera


There’s no question that in the fast-moving, quickly-evolving world of web design, the more skills and versatility a designer has, the more valuable they are. Wearing many hats can be a challenge as the lines between UX designer, web designer, and front-end developer blur, but some principles hold across all of these roles. The following suggestions can be applied in every aspect of design, from research and planning all the way through implementation.

1. Simplicity is Key

In his Ten Principles for Good Design, Dieter Rams pronounced that “good design is as little design as possible.” I find this applies equally to all aspects of design. Concentrating on the essential aspects and filtering out the excess allows the purest form to shine through.

This can be interpreted in many ways in design. In the context of writing CSS, it means finding the simplest rules that cover all of the elements that need them. There is no need to be overly specific when writing selectors. Condense as much as you can and use shorthand properties when possible. Less is more.
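
As a small before-and-after sketch (the selectors and values are made up for illustration):

```css
/* Before: over-specific selectors and longhand properties, repeated */
#sidebar ul li a {
    margin-top: 10px;
    margin-right: 15px;
    margin-bottom: 10px;
    margin-left: 15px;
}
#footer ul li a {
    margin-top: 10px;
    margin-right: 15px;
    margin-bottom: 10px;
    margin-left: 15px;
}

/* After: one shared class, one shorthand declaration */
.nav-link {
    margin: 10px 15px;
}
```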

You can also avoid redundancies and bloat by using an optimizer or redundancy checking tool like any of the ones mentioned here.

2. Follow a Logical Hierarchy

Whether you’re creating a high level site map, setting up the typography for a blog post design, or creating CSS for a website, hierarchy is important. It structures the information in a digestible way. We can assume in any of these situations that you will not be the only person to ever lay eyes on your work. Therefore, it’s incredibly important for other people to be able to follow your thought process.

When creating CSS files, I tend to follow the advice given here. A reset is applied first, followed by the main body styling. Then the font and heading structures are applied. This lays down the basic structure for the rest of the site styles. From there, we can apply CSS styles for specific sections, making sure all sections are clearly labeled through comments. A skeleton of this ordering is sketched below, and those comment labels bring us to the next point.
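
A minimal skeleton of that ordering (the section names and selectors are illustrative):

```css
/* ===== 1. Reset ===== */
* { margin: 0; padding: 0; }

/* ===== 2. Main body ===== */
body { font: 16px/1.5 Georgia, serif; color: #333; }

/* ===== 3. Fonts and headings ===== */
h1 { font-size: 2em; }
h2 { font-size: 1.5em; }

/* ===== 4. Section: masthead (#masthead) ===== */
#masthead { background: #20313f; }

/* ===== 5. Section: article list (.article-list) ===== */
.article-list { width: 60%; float: left; }
```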

3. Keep It Clean and Clear

Using clean lines in a wireframe, creating legible and clear typography, and writing clean code should be paramount when designing. Just as you wouldn’t want a site to be jumbled and confusing, no one should have to dig through messy code. Clearly marking and commenting each section is important to maintaining a CSS file, and it helps to include each section’s specific ID or class in its commented section title. This allows another editor to search through the code more easily, and it is also helpful when working in any tool that uses layers. Clearly marking each layer and group saves the next user from a lot of confusion, especially in files that show multiple pages.

Final Thoughts

In this article, I focused mainly on design principles that apply to CSS coding as well. While these tips can improve your work, it’s also important to stay current on what is happening in each field. Joining a mailing list or adding a blog to an RSS feed can help you stay up to date on news, tools, and new techniques. Simple, logical, and clear design will certainly prevail wherever design takes us next.

May 18 15

The Importance of Collaboration

by Amanda Lasser

Your design team may be missing something that’s crucial for a good user experience, a solid end product, and happy clients.


Having worked in the industry for over 8 years, one thing I’ve learned to be true is that if your design and development teams don’t communicate, and communicate often, it’s your project that will suffer in the end.

Here’s how it typically goes: you get briefed on a project—sometimes the development team is present, sometimes not—and the design team works its magic to create a custom solution with all the bells and whistles they can think of, and after hours upon hours are spent on the visuals, the design team hands it off to development.

What’s wrong with this picture?

There’s a lack of communication between teams.

What should this process look like?

There should be communication among all teams touching the project from beginning to end, and at every possible point in between.

If you’re not talking with each other, what may seem like a small change in the designer’s eye may have a huge impact on the project and its performance. In addition, a lack of communication can lead to designs that may not be possible to build, projects that come with avoidable complications, and wasted time on revisions. And to think this could have all been avoided if the designer had just included the developer from the beginning.

Another thing to consider is that a lack of communication affects not only the developer but the client as well. If the client signed off on designs before you showed them to the developer, it reflects poorly on the whole team when the developer isn’t able to execute the designs. When designers make decisions without including the developer, in the end, there may be greater consequences.

One last thing to keep in mind is that developers might suggest an idea that the designer never thought of or dismissed as impossible. Because many designers are not well versed in development, they may not know everything that is possible. By collaborating with the development team, you let developers take your ideas and build upon them, taking them further than you could have ever imagined.

So, what are some effective tools and how can you communicate successfully with your team?

  • Always make sure someone from the development team is present during the kickoff.
  • It’s really important that everyone is speaking the same language. Be sure to go over any new terminology in the beginning if it’s specific to the project.
  • I’m a big fan of open-concept offices; they lend themselves to a more collaborative environment, but if that’s not an option, make sure your teams are sitting within an arm’s length of each other.
  • And finally, have team building outings and have them often. By this, I don’t mean going for happy hour! While that’s fun, the benefits of going to productive, educational, and collaborative workshops together reinforce excellence among teams and lead to a wealth of creativity and innovation on projects. BlueMetal NYC recently participated in a workshop facilitated by The Design Gym and the response was outstanding. Not only did everyone learn something, they had fun doing it.

Brian Krall sums it up nicely, “It is crucial for all roles to be looped in and to have meaningful access to each other, not only for decision-making, but for brainstorming and healthy debate.”1

May 13 15

Microsoft’s New Approach to Collaboration and Portals

by Bob German

At the Ignite conference last week, Microsoft laid out a new vision for collaboration and portals that is a major departure from the site-based approach that has been the core of SharePoint for more than a decade. Microsoft still fully supports SharePoint in its current form and will continue to do so, both on premises and in Office 365, even as it introduces Office 365 Groups and a new suite of Office 365 “NextGen” portals that could replace SharePoint sites for enterprises that want a more modern, cloud-based approach to collaboration. SharePoint sites will continue to work, but it’s unlikely Microsoft will invest in enhancing them beyond where they are in SharePoint 2013.

At the same conference, Microsoft announced a number of enhancements that will make it easier for people to transition to this new world in the cloud:

  • Azure running on premises using Azure Stack; this will allow enterprises to create a private cloud on premises in which workloads can easily move to the Azure cloud
  • New migration API’s to make it easier to move SharePoint content to SharePoint Online
  • A much more powerful hybrid search model that allows on-premises content to be indexed in the cloud, where it can feed search queries on both sides of the firewall as well as the Office Graph and Delve
The Office Graph

At the center of all this is the Office Graph: a data layer that aims to unify the silos of collaboration that are present in nearly every enterprise. The Office Graph uses machine learning to find relationships between people and content in (eventually):

  • email in Outlook/Exchange
  • files in OneDrive for Business/SharePoint
  • notes in OneNote
  • calendars in Outlook/Exchange
  • conversations in Yammer
  • communications in Skype for Business

This article provides an overview of this emerging collaboration system, backed by links to the relevant talks at Ignite. The technologies described are still subject to change; please watch the videos directly for a more authoritative view.

What’s New for Users?

Office 365 Groups

Office 365 Groups were introduced last September as a way to connect collaboration silos into a single experience for team collaboration.

Each group consists of:

  • An Azure AD Group that can be used to secure SharePoint Online content
  • A shared mailbox in Exchange Online
  • A shared calendar in Exchange Online
  • A document library in “OneDrive for Business” (it’s really a SharePoint Online site collection)
  • A OneNote notebook (stored in SharePoint Online)
  • Yammer groups (future)
  • Skype for Business buddy lists (future)

For an end user, a Group’s shared assets are viewed in familiar applications like Outlook (the web version only for now), OneNote, and OneDrive for Business. Groups will also appear in Outlook 2016 and in mobile apps.

Groups will also appear in Delve, Office 365’s most direct view of the Office Graph. Groups are actually nodes in the Office Graph and will have “cards” and profiles in Delve just as people do today.

Microsoft announced a number of enhancements to Office 365 Groups for later in 2015, including:

  • eDiscovery of Groups content (this is possible today but requires running the existing eDiscovery tools over Exchange and SharePoint online assets; the idea here is to provide unified eDiscovery for Groups)
  • Guest membership with the ability to audit or re-attest guests
  • Dynamic membership based on Azure Active Directory attributes; for example, a Group could be created for everyone reporting to Katie Jordan or all users who have the title “Sales Executive”
  • Data Leakage Prevention across group files and mailbox
  • Quota management (Groups already use Office 365 storage quota at a tenant level, but currently there is no way to manage this)
  • Recycle bin across the whole group if it’s deleted

Microsoft says that Groups won’t replace SharePoint Team Sites and said that it might eventually be possible to include a team site as part of a Group. Already, it’s possible to create a team site in SharePoint Online and assign membership using the Azure AD Group to open it up to the same members.

For developers, the Office 365 Unified endpoint will provide a REST API for accessing groups and their associated mailboxes, files, and other assets.
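
As a rough sketch of what a call against that endpoint might look like from .NET (the URL shape is illustrative of the preview API and may change; the token comes from Azure AD):

```csharp
// A rough sketch: call the unified endpoint with an Azure AD bearer token.
// The endpoint URL is illustrative; check the current API reference.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<string> GetGroupsJsonAsync(string accessToken)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        var response = await client.GetAsync(
            "https://graph.microsoft.com/beta/myOrganization/groups");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```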


NextGen Portals

Delve User Profile Page

Microsoft is producing a series of new, ready-to-go web experiences called the NextGen Portals. The first of these to ship was the Video Portal, which is now available in all Office 365 tenants.

Microsoft noticed commonalities in the portals their customers have built with SharePoint, and they aim to provide simpler, out-of-the box NextGen portals to fill these needs. This includes Delve, which becomes more of a people portal with its new User Profile that helps people find expertise in an organization. It also includes a Knowledge Management portal (below).

NextGen Portals are based on a new, responsive page rendering engine; some also employ a new browser-based authoring canvas. Another thing they share is a connection to the Office Graph, which selects portal content and also can send signals back to the Graph (such as what video you watched yesterday).

NextGen Portal Architecture

NextGen Portal Building Blocks

Portal content is stored in SharePoint site collections, but SharePoint “webs” as we know them aren’t involved. Instead, libraries for Pages, Images, Videos (backed by Azure Media Services), and settings are stored directly in the site collection. SharePoint’s existing REST API is used to access the storage. In general, each portal has a “hub” site collection to store application-wide or landing-page content, and then additional content site collections.
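
Because the storage is ordinary SharePoint, portal content can be read with the same REST calls used against any site collection. A minimal sketch (the site URL and library name are hypothetical, and authentication is omitted):

```csharp
// Read items from a hypothetical portal "hub" Pages library using the
// standard SharePoint REST API; authentication is omitted for brevity.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<string> GetPortalPagesAsync()
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
        return await client.GetStringAsync(
            "https://contoso.sharepoint.com/portals/hub/_api/web/lists/getbytitle('Pages')/items");
    }
}
```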

NextGen Portal Architecture

NextGen portals run as a single-page app, and users can add controls such as a table of contents, a video control, or an image control to them. There are no web parts here, but perhaps the beginning of something similar.

There were mixed signals about custom NextGen portals; the presenters indicated they might open source the page rendering system to allow it to be used in custom NextGen portals or even on premises.


Knowledge Management Features

Boards and Microsites

Office 365 Board

Office 365 Boards allow users to collect and share links to content across Office 365. Boards functionality is just starting to roll out to Office 365 tenancies; for now it can be found in Delve, where you can add one of Delve’s information cards to a Board. Over time, you’ll be able to add to a Board from “anywhere in Office 365.” Even on-premises content will be integrated once the forthcoming Cloud Search Service Application (see below) is enabled.

While Boards seem useful, anyone can just create any board they like; there’s no provision for governance or organization.

Article in a Microsite

Microsites are simple publishing sites centered around one or more article pages and some links. The user experience is based on NextGen pages (see NextGen Portals, above). The file picker in the authoring canvas tries to predict what content you’ll want to link to. You may be able to pick files from a Board, or add an entire Board to a Microsite.

Knowledge Management Portal

The code name for the new KM portal is “InfoPedia,” and as it turns out, this isn’t the first InfoPedia Microsoft has built. The first one was created by Microsoft IT for internal use, so Microsoft knows what we’re up against when creating a KM portal.

Boards and Microsites are for end users who want to collect and share information; the KM portal is for content stewards who want to curate knowledge for an organization. It’s still a vision for now, and Microsoft showed mock-ups rather than a live demo.

KM Portal Mock

Content stewards can organize content (Microsites?) into sections; as with Boards and Microsites, content remains in place and “InfoPedia” points to it with a card system. A badging system allows authoritative content to be marked; for example, the HR department might have the ability to badge content that is official HR policy, and other departments and organizations might have badges as well.

What’s New for IT Pros?

SharePoint 2016 on Premises

Microsoft let it be known that on-premises SharePoint is here to stay and that they will continue to ship new versions for the foreseeable future. That said, there wasn’t a whole lot new at Ignite for end users of SharePoint on premises. The Office Graph, a keystone of Microsoft’s new investments, isn’t available on premises (though it is possible to integrate on-prem assets with the cloud based Graph). Thus, new user experiences may be scant for these users. Microsoft did show an improved mobile experience for SharePoint 2016 users, but it seems to be about browsing lists and libraries and not a complete solution such as the NextGen portals offer.

However, IT Pros are likely to be happy with the SharePoint 2016 offering. The hardware requirements are similar to SharePoint 2013’s, and upgrades can be accomplished via DB Attach or by using a third-party migration tool.

SharePoint 2016 will introduce a number of “MinRole” server roles to assist IT Pros in configuring a scalable and reliable farm. In SharePoint 2013, a server’s role is defined by which services are enabled; in 2016, servers can be designated as caching, web, search, application, or “specialized” servers. The SharePoint health analyzer is aware of these roles and will warn administrators if a service inappropriate to a server’s role is enabled; the exception is the “specialized” role, which can run any services (same as in SharePoint 2013).

A big promise for SharePoint 2016 is a new patching system, which will allow for smaller updates that require zero downtime to install. This technology was created for Office 365 and is being back-ported for SharePoint 2016. In addition, there will be a provision to create site collections more quickly by copying a master site collection in the database rather than activating “Features” (presumably, this is used in Office 365 to create the site collections that underlie NextGen portals, where there are no “Features” to activate).


Hybrid Search

There were rumors before the conference of a “Hybrid Search Appliance,” and during the conference there were rumors that Delve and the Office Graph would move on premises. Neither is true (and the Office Graph is probably too complex to ever be packaged for use in customer data centers). However, Microsoft did introduce a powerful new Cloud Search Service Application that will allow enterprises to create a seamless search experience across Office 365 and their own data centers (with the index in Office 365), including the ability to surface on-premises content in the cloud-based Delve.

Crawling with the Cloud Search Application

Later this year, an update for SharePoint 2013 will provide a new Cloud Search Service Application for SharePoint; this will also be part of SharePoint 2016. This service can be set up like any Search Service Application, and search connectors, IFilters, and content sources can be configured as usual. The difference is that the SSA can be configured with information about your Office 365 tenancy; as content is crawled, the text and metadata are encrypted, batched, and sent to the Office 365 search index. So file shares, legacy SharePoint content, or anything else you can crawl on premises suddenly becomes available in Office 365 search and the Office Graph. Since there is a single index, all results are relevancy-ranked in a single result set, rather than the side-by-side federated results currently provided.

That’s not the end of the story, however. Queries are also handled by the new Cloud Search Service Application; the presenters even showed a SharePoint 2010 farm attached to the CSA, searching Office 365 via its query service.

A single index and consistent search experience for on-prem and Office 365 content is a powerful offering, and one that will make it easier for customers to transition to the cloud.

Querying with the Cloud Search Application

What’s New for Developers?

Office 365 APIs

Microsoft CEO Satya Nadella said that “the most strategic developer surface area for us is Office 365.” So developers will have plenty to work with as the new platform emerges.

Microsoft recently introduced a new set of APIs for accessing content in Office 365. These allow developers to authenticate to Office 365 once in order to access a mix of services from Exchange, SharePoint, and beyond. Currently, the supported surface is relatively limited: calendars, messages, and files in OneDrive for Business (SharePoint). The goal, however, is to introduce a comprehensive set of APIs that span the entire online service. The video portal, OneNote, and Yammer APIs are already available in preview, and eventually Office Graph, Tasks, and Lync/Skype will appear as part of this suite of unified REST APIs authorized via Azure AD.
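
A minimal sketch of the pattern (the URLs are illustrative of the v1.0 Office 365 REST APIs; note that before the unified endpoint, each resource typically required its own resource-scoped Azure AD token):

```csharp
// One helper, several Office 365 services. Tokens come from Azure AD
// (e.g., via ADAL); endpoint URLs are illustrative.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<string> GetAsync(string accessToken, string url)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        return await client.GetStringAsync(url);
    }
}

// Usage sketch:
//   var mail  = await GetAsync(mailToken,
//       "https://outlook.office365.com/api/v1.0/me/messages?$top=5");
//   var files = await GetAsync(filesToken,
//       "https://contoso-my.sharepoint.com/_api/v1.0/me/files");
```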

In addition, Microsoft is continuing to develop SDKs for iOS, Android, Java, Xamarin, Cordova, and of course .NET.

Patterns and Practices

For the existing SharePoint world, Microsoft continues to pursue alternatives to the Feature Framework, Sandboxed Solutions, and Templates, and is encouraging developers to choose lighter-weight branding approaches. For details, see this article and the related Ignite presentations.

Apr 30 15

A Universal Example

by Dave Davis

[Cross-posted from blog.davemdavis.net]

In my last post, What Does It Mean to Be Universal, I talked about Microsoft’s new Universal Application Platform (name may be changing to Universal Windows Platform), coming to Windows 10. This new application platform allows you to build one application and run it on different platforms.  In that post, I told you how Microsoft was doing that. In this post, I want to look at an example of what a single app running everywhere might look like.

Scenario


Let’s say you were building an application for a manufacturing company. They want to deploy IoT sensors running Windows 10, which would gather telemetry data from the machines on the factory floor.  This data would be transmitted to the cloud, where it would be analyzed and reported through machine learning (that’s out of scope for this post).  The data then can be accessed on Windows PCs, tablets, and phones. They also want to take advantage of augmented reality technology to give their plant manager real time access to the data while they are walking around the plant.

The Old

In the past, you would probably create a solution with multiple projects, one for each head you want to support, plus some libraries for code that you want to share across the heads. After compiling, you would end up with a separate application for each head. Although not too complex, this creates the potential for major code duplication. This is especially true when it comes to the interface and platform-specific code (where the APIs differ). There was less opportunity for reuse.

The New

Under the new system, you can create one project, compile it into one application, and have it run on all the devices. The APIs are pretty much the same. When they do differ for a device, Microsoft has a way for that code to still live in the same project (see my last post for a sample; a sketch of the pattern appears below). There may be times when you want to separate out code – for instance, code that can be reused in projects that are not part of the new platform; server code comes to mind. That should still be possible. I say “should” because we don’t know for sure. We will have to see what comes out at Build next week, but hopefully we will have more clarity on this.
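
For a flavor of that pattern, here is a minimal sketch using the Windows 10 adaptive API check (the hardware back button is a common illustration, not necessarily the sample from my last post):

```csharp
// Light up phone-only behavior at runtime instead of forking the project.
// ApiInformation checks whether the API exists on the current device.
using Windows.Foundation.Metadata;

if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    Windows.Phone.UI.Input.HardwareButtons.BackPressed += (sender, args) =>
    {
        // Phone-only back-button handling goes here.
        args.Handled = true;
    };
}
```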

Thoughts

So, one project or many projects? This universal platform is going to make us rethink how we architect our solutions going forward. Does it make sense to have everything in one project, or do you want to keep things separated? I can definitely see creating utility projects that get reused between different solutions, but does it make sense to have a .dll for Services or Models? You were previously able to replace those components just by swapping out DLLs, but now everything gets wrapped up into a signed .appx package. Swapping out DLLs would invalidate the package, so a new build is needed. I’m interested in hearing what people have to say about the new universal platform.

Apr 14 15

Layout Awareness in Windows 10 UAP

by Dave Davis

[Cross-posted from blog.davemdavis.net]

There have been lots of changes to Windows over the past few years. A while back, Microsoft radically changed the way Windows worked with Windows 8. The goal was to start converging the different operating systems into a single core. Windows 10 is the culmination of that convergence, which started out as “Three Screens and a Cloud” and now includes many more screens.


The road to convergence is a different story. Here, I want to talk about the user experience. While all this was going on, there was a revolution in the experiences that users expect. No longer are battleship-gray user interfaces acceptable, and neither are shrunk-down versions of those interfaces on mobile devices. As a developer, you need to design and build your applications for the form factors you are targeting. Here is where we pick up that journey on the Microsoft stack.

The Past

In Windows 8, there were two environments: a desktop environment and a tablet environment. With the tablet environment, Microsoft introduced a new programming model, WinRT. Those tablet apps, sometimes referred to as “Metro” apps, could be running in one of four states: Filled, Full, Snapped, and Portrait. There was a neat little enum that helped support these states. One of the mantras was good design: your app should look good and respond well in any of those states. To assist developers, the Visual Studio template for those projects included a LayoutAwarePage that helped handle the transitions between states. I wrote a blog about it: What is This ‘LayoutAwarePage’ You Speak Of.

Along comes Windows 8.1: that enum goes away, and so does your LayoutAwarePage. In 8.1, apps could be resized horizontally, independent of those defined states. There was another wrench thrown into the mix: the phone. With this release, WinRT apps could be developed for the phone. The templates changed to include a new Universal App template. There was guidance from Microsoft that you should target both tablet and phone, but no tooling built into the framework to help. At Build 2014, there was a pretty good session on how to target screens of any size, From 4 to 40 inches: Developing Windows Applications across Multiple Form Factors. During this session, Peter Torr showed the science of viewing items on different-sized screens. He also showed a potential solution to the lack of built-in tooling. Since I wrote about the LayoutAwarePage in 8, I figured I had better write a post about where layout went in 8.1: What Happened to My LayoutAwarePage?.

The Now

In Windows 10, Microsoft wants to target a larger array of devices, including some that don’t even have screens. There is a plethora of platforms that your apps can run on. The convergence allows you to write code that runs on all these devices, but does that mean I have to write a separate UI for all of them?

The answer is no. Taking a page from web design, the new guidance is to build adaptive interfaces. Keep in mind that the experience should be tailored for each specific form factor. Recently, a preview of the SDK for Windows 10 was released along with a Microsoft Virtual Academy training course. Module 9 caught my attention because it talks about building adaptive UI, and it looks very similar to the solution Peter Torr came up with for 8.1. In Windows 10, you can truly build one app (one app package) that will run on any Windows 10 device. There is some neat magic behind the scenes that allows this to happen, and if you are interested in that, you should watch the other modules in the course. To support writing a single XAML file that runs across devices, Microsoft has revamped the Visual State Manager to assist with building adaptive UI. I highly recommend that you take a look at that module if you are interested in the story.
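
For a flavor of the revamped Visual State Manager, here is a minimal sketch (element names are illustrative) using the new AdaptiveTrigger to rearrange a page at a width breakpoint:

```xml
<!-- One XAML page that adapts: the VSM switches states automatically
     when the window crosses 720 epx. Names are illustrative. -->
<Grid>
  <VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
      <VisualState x:Name="NarrowState">
        <VisualState.StateTriggers>
          <AdaptiveTrigger MinWindowWidth="0" />
        </VisualState.StateTriggers>
        <VisualState.Setters>
          <Setter Target="SidePane.Visibility" Value="Collapsed" />
        </VisualState.Setters>
      </VisualState>
      <VisualState x:Name="WideState">
        <VisualState.StateTriggers>
          <AdaptiveTrigger MinWindowWidth="720" />
        </VisualState.StateTriggers>
        <VisualState.Setters>
          <Setter Target="SidePane.Visibility" Value="Visible" />
        </VisualState.Setters>
      </VisualState>
    </VisualStateGroup>
  </VisualStateManager.VisualStateGroups>
  <StackPanel x:Name="SidePane" />
</Grid>
```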

I am sure more details will come during Build 2015 (April 28-May 1), so keep an eye out for what’s coming down the pike. Keep in mind that everything is in preview right now, so things may change between now and release, but this looks promising.

Mar 23 15

The Apple Watch – designed to be wanted

by James Horgan

There has been a lot of press recently about the Apple Watch, with divided opinions on whether it will be a success and with the biggest question, as yet, remaining unanswered: why do I need it? In reality, that is a question we have repeatedly asked ourselves with the majority of Apple Products.

Let’s remind ourselves of where design innovation started for Apple – the iMac in 1998:

The 1998 iMac advertisement.

This iMac served a few purposes –

  • Remind consumers Apple is back
  • Be incredibly distinct in a crowded marketplace of beige and grey computers (note how that is very specifically called out in the advertisement above)
  • Change user perceptions of a computer as a ‘work only’ device.

The goal of this iMac was strategic – shock a marketplace filled with dreary solutions with something fresh and forward looking.

However, the only thing that had changed was the hardware – the surface. The desirability of a well-performing product was outweighing the “need” for a higher-priced computer. Similar conclusions could be drawn regarding the first iBook and PowerBook.

In 2000, Apple had another game changer in its new OS, OS X – a beautiful, truly graphical user interface that had some useful features attached but attracted users through the desirability of experiencing it over their “need” for it. iTunes within OS X set the stage for the iPod in 2001.


The iPod matched “I want it” with “I need it” by highlighting how Apple’s strategy was to drive down the cost of a song by allowing users to store more on their device.


This consumer rationale, coupled with an attractive and NEW way of interacting with a product, created a major sweet spot for Apple – form meets function to create the killer invention and transform an industry.

This happened once again in 2007 with the introduction of the iPhone to a market saturated with grey, business-like models. Apple created demand through sheer “I NEED THAT” pitching of the product.


Remember where the smartphone market was in 2007 before the iPhone – most people were not using one, most didn’t think they needed one, and the current products were squarely aimed at business users. Most folks even balked at the price and waited patiently until they could afford one. But the iPhone fundamentally changed the way we experience email, calling and to some extent, texting. The first iPhone was also a lot more loudly designed than the current one, a way to shock awareness of it in an overstuffed market.


The iPad announcement in 2010 was crucial in its style – while sitting in an armchair on stage, Steve Jobs browses news, images, and videos to show how the iPad is both a downtime and work-effectiveness tool for the person on the go. By presenting the lifestyle of the executive enjoying a high-end product, the iPad sold in droves, mainly to an older market with disposable income hoping to emulate the Apple brand.


So, how does the watch fit within this strategy? The presentation is a little trickier, with no real envisioning of how you would use it. There is no “lifestyle” pitch associated with the watch, making it harder for people to imagine “needing” a watch on a day-to-day basis. The watch’s need to be tethered to a phone is also an issue, as is battery life. I can make calls, sure; go for a run, great; but I have to carry my phone to get accurate readings? That’s an issue.

So why would anyone buy one? A couple of reasons:


1. It is gorgeously designed. Not many folks will need one but everyone WANTS to experience one. Watch envy will be the new paradigm. In the 50s and 60s, industrial design as a field of expertise was born out of the need to ensure consumers would buy more and more products. Because products lasted so long, industrial designers would use a technique called ‘inbuilt obsolescence’ to ensure consumers would always buy the latest model. This is certainly true of Apple’s product strategy – step 1: get people noticing, step 2: get them buying, and step 3: keep them buying by updating the design and engineering for each iteration.

2. Those comments about why would I need this? It’s too expensive? The battery life? Haven’t these been comments about every new product launch Apple has done?

3. The glances – this is key. A lot of people have commented they have not worn a watch in years, as they refer to their iPhone to check the time. But now you don’t have to take the phone out of your pocket anymore, or unlock>weather app> look at temperature to see what the weather is like, or know what stock prices are, or news headlines. The glances are paramount in returning user behavior back to simple, natural, gestures.

4. Christy Turlington, though a little dated as a reference, is a smart choice because she can project the lifestyle – the cross between luxury, fitness, and family.


One disconnect in the image is that it shows someone running through Africa with an expensive watch! Beautiful and expensive products are very much counter to the conscientiousness of the digitally savvy and 1%-averse millennial. This is why the lower-priced models will be a success, but the Edition models will be a short fad – a gauche display of your wealth is not a current consumer trend.

5. The digital crown could be perceived as unnecessary, but it’s a cool way to explore a new technology with an old metaphor. Think how quaint the iPod wheel looks now.

6. The haptic (touch) feedback is HUGE, bigger than you think. You can now communicate with a person through touch, remotely. Think about that. It’s like tapping someone on the shoulder to say hi without being in the same room. It opens up a new paradigm in experiential design – imagine a watch that taps you when you need to speed up or slow down in your run, or a watch that helps the visually impaired navigate through a city.

7. The ability to know when to speed up, change direction, or communicate with others using an almost Morse-code-like technique, but in a highly personal way. I don’t think that concept is fully formed yet, but the idea of a touch-feedback interface opens up a new area in user experience design.

8. Above all, the modularity of its design allows Apple to span the whole watch market, from sports to the higher end, without alienating customer segments (though the Edition may cross over that line). Remember, this was essentially the Swatch strategy in the 80s.


Here’s the thing: the iPhone essentially replaced the need for a traditional watch. Now, the Watch is looking to reopen that long-forgotten market, and that’s probably why it is tethered to your phone. Apple doesn’t want the Watch to cannibalize the market, in the same way the iPhone 6, with its larger screen, is now taking market share from the iPad.

The one hurdle the watch has is that it does not obviously eliminate or replace an activity to make our lives more efficient – the MacBook replaced clunky hardware, the iPod replaced carrying CD players and needing to change a CD, the iPhone replaced photo albums, desktop email, and a host of other items. The iPad replaced print and arguably created the digital magazine market.

What the watch could replace is the wallet. That certainly is a powerful and compelling NEED – eliminating the wallet and the risk of losing it is definitely a next generation experience and including biometrics and personal identification into the watch is a natural next step in its evolution.


The other thing the Watch does replace is an ergonomic one – never having to take your phone out of your pocket for minor distractions. We will find out how user adoption of this new product informs further iterations of the Watch.

The watch needs the iPhone to work, and that is a problem. If I still need my iPhone to go for a run, play music, make calls, scan a boarding pass, or access a hotel room, then the Watch has yet to replace anything, and this could be the Achilles heel in Apple’s strategy. Expect that tether to be cut in future versions. But remember our original reaction to the iPhone: the battery was terrible, the 2G network was a joke, but our desire to try the product got us over those objections. It’s that overriding desirability for the Watch that will see more beneficial generations of this product to come.

Mar 22 15

The Well-Tempered AngularJS Web Part

by Bob German

A page from Bach’s Fugue in Ab

Like notes on a piano, web parts (or any kind of web widgets) are combined in new and unexpected ways on a page. Yet often they don’t play well together. Seemingly every example of an Angular Web Part posted on the web assumes it’s the only thing using Angular on the page. A second instance of the web part, or another web part that uses Angular, and they will clash in unpredictable ways. And what if an Angular master page comes along, or Microsoft decides to use Angular in a future version of SharePoint? The result will be a cacophony of script errors.

This might not be a problem in a SharePoint App where each web part runs on its own page in an IFrame, but it can cause real dissonance if web parts are running directly on a web page. This can happen in a Content Editor or Script Editor web part using Remote Provisioning, or a Visual Web Part in a farm or sandboxed solution.

There’s an easy solution to all this, and that is to start writing “well-tempered” web parts. About a page of well-composed JavaScript can mean the difference between solutions that work if you’re lucky and solutions that just work; the heart of the idea is sketched below. This article dives into the details and includes a complete code listing, along with musical accompaniment. Please check it out, or send your developers.
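
As a hedged sketch of the essence (the module and element names are made up): never rely on a page-wide ng-app; define the Angular module only once, and bootstrap it manually on the web part’s own element.

```javascript
// Sketch of a "well-tempered" web part: no ng-app attribute anywhere.
(function () {
    // Register the module only if another instance hasn't already done so;
    // angular.module('name') throws if the module isn't registered yet.
    try {
        angular.module('helloWebPart');
    } catch (notRegistered) {
        angular.module('helloWebPart', [])
            .controller('HelloController', ['$scope', function ($scope) {
                $scope.greeting = 'Hello from a well-tempered web part';
            }]);
    }

    // Bootstrap each instance on its own element, leaving the rest of the
    // page (and any other Angular apps on it) alone.
    var instances = document.querySelectorAll('.hello-webpart:not([data-bootstrapped])');
    for (var i = 0; i < instances.length; i++) {
        instances[i].setAttribute('data-bootstrapped', 'true');
        angular.bootstrap(instances[i], ['helloWebPart']);
    }
})();
```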

Thanks!

Mar 16 15

Branding for Non-Designers (part 1)

by John Soares
As part of a continuing series, we’re going to take a look at the identity process – starting through the lens of our own re-branding, moving from this:

The previous BlueMetal mark. Sported a binary code we’ll discuss in a later post.

to this:

The final mark in its vertical lockup.

We’ll begin by talking about color. In identity work this normally falls in the middle of the process, but here it played a critical and, in this case, conceptually central role. For ourselves, we had arrived at an agreed-upon version of the logo lockup (we’ll discuss that in a forthcoming post) in pure black:

The agreed-upon mark, abstracted from color, as pure form.

Why work this way? To separate decisions of color from those of form. To be certain, these issues inform one another, but generally this mitigates risk by compartmentalizing choices in an often contentious process and promoting directed, clear focus. We are clearly going to integrate blue, but what blue? And in what combinations?
It’s frequently helpful to survey the landscape, which can help identify major players (and competitors) who have brand equity in the space we intend to live in. It also gives us a first hint of the range of qualities available from a single color:

Competitors’ marks in blue – from overly subtle to overly weighty. Arranged from top-left to bottom-right in terms of color saturation and value.

Clearly there are a variety of tones and moods – the extremes lack weight and impact on one end, and tend toward too heavy on the other. We also look to expressions of blue from a range of mediums:

Degas, Van Gogh, Rothko, Nintendo. Again, the range of possibilities by way of medium.

One of the larger goals of identity work is evoking an elusive, emotive, and aspirational response. Researching interior design, industrial design, fashion, and the fine arts all helps us home in on specific moods through more abstract means. We also get a sense of the mutability of color by way of medium – the cobalt-colored glass work of Dale Chihuly (bottom right) is a particular inspiration. The quality of his color is highly dependent on his materials, but the vibrance and intensity were an early signpost for the kind of essence we wanted to capture.
It is part of this research that leads us to Yves Klein.

Yves Klein, circa 1961.

Klein was a pioneer of the French New Realism movement, as well as a leader in performance art, minimalism, and Pop art. He famously painted monochromes (works in a single hue) following World War II but, frustrated with the misunderstanding with which his work was received, moved to focus on a single primary color: blue.

Frustrated with problems of lightfastness and sustainable intensity, Klein struggled to find not simply a blue but the blue – one which contained the vibrance of the color idealized and could simultaneously maintain its quality over time and exposure. After years of research, he found his solution in 1956. A combination of a personally developed chemical binding solution and a brilliant ultramarine pigment resulted in “the most perfect expression of blue” – a saturated, gently vibrant color whose effect on the eye was not unlike a double exposure. He sometimes referred to the effect as “a sensitized image,” “poetic energy,” or “pure energy.” His Blue Epoch followed, with applications on canvas, furniture, sculpture, and eventually live performance. His process resulted in a color unofficially patented as IKB, or International Klein Blue.

From “Anthropometries of the Blue Epoch,” Paris. Klein painted models, who then acted as living brushes on canvas.

We use this as a starting point. It aligns with our brand notions of energy, dynamism, and most especially velocity. Of course, the specificity of the color chemistry means that we approximate this color for reproduction on screen and on different paper stocks, but the important conceptual link is there, to combine with the storytelling of the mark itself.
From here we arrive at a single-color version of the mark, and then branch out to analogous colors in sequence (again, reinforcing our mark’s narrative of transformation). We experiment with placement of the central blue in relation to the secondary colors, as well as variations on the color treatment of the logotype:

Color/sequence variations. These were exploratory.
We can see how the presence of surrounding colors impacts the perception of the original tone. Additionally, using Klein Blue on the far right pushes us into a place where the leftmost color becomes, by necessity, too light. In our case, the mark speaks to process and methodology – each color should have the resonance to stand alone, conveying presence and impact. We arrive ultimately at IKB in the center, with coordinating tones in the wordmark set against the opposing panes.

The finalized mark in its vertical lockup.

The result is dynamic, engaging, and rich. Its simplicity belies its underlying story, but that story is one we can carry forward to our work and clients in a unified, coherent message.
Mar 16 15

Shirts Incoming!

by John Soares


Feb 1 15

New Guidance from Microsoft for Packaging and Deploying SharePoint Solutions

by Bob German

Microsoft is cleaning house. Now that it has to maintain SharePoint for thousands of enterprises and millions of users in Office 365, Microsoft is working to clean up all the odd and messy bits of its flagship collaboration product. In a recent training course on Microsoft Virtual Academy, Microsoft urged developers to change the way they package and deploy their code in order to clean up a mess that has been building since 2003.

In this case the problem doesn’t really affect the customizations themselves (though most existing customizations are not cloud-ready); instead, the change is with the way custom solutions are installed into SharePoint and deployed in SharePoint sites. Instructors Vesa Juvonen and Steve Walker were careful to say they aren’t deprecating anything (at least not now) – but they admitted to some design shortcomings in SharePoint’s Feature framework and encouraged everyone to adopt a different approach.

The new approach eliminates a lot of problems that affect SharePoint upgrades and migration, and that can introduce quirky behavior and broken content if everything isn’t done perfectly. That’s the good news. The other news is that where the tools for the old approach are mature and familiar to SharePoint developers, there is virtually no tooling for the new one, just a collection of code samples at this point. So adopting the new model will be more costly until better tools are available.

This article will summarize the changes and analyze their impact on SharePoint developers and customers.

The Big Change

In technical terms, Microsoft is recommending that developers stop using SharePoint’s Feature framework and list, web, and site templates in their solutions. The Feature framework was added in SharePoint 2007, and allows site administrators to activate “Features” that provision content such as site columns, content types, lists, files, web part definitions, and all sorts of other things in SharePoint. List, web, and site templates are similar, except that a whole list or site is created. All of this is enabled by an arcane set of XML schemas called CAML, or Collaborative Application Markup Language. Now, instead of defining SharePoint content in CAML, Microsoft wants everyone to start creating content programmatically using a pattern called remote provisioning.

Let’s face it, Features and Templates are flaky. Activate a feature and things “light up” in SharePoint; that’s the cool part. However, when you deactivate a feature, the content it created might persist, go away, or just break. Versioning and updates are a black art. If an admin forgets to deactivate a feature before uninstalling the code that supported it (and it might have been activated in thousands of sites), the feature is “orphaned,” resulting in errors and upgrade headaches. And perhaps you’ve noticed that if you create a site from a template and then change the template, the site doesn’t pick up the change. Over time all these problems add up, and users just blame SharePoint.

Microsoft has seen the error of its ways and wants developers to stop using CAML-based deployment and instead use a pattern called “remote provisioning,” in which a remote process creates SharePoint content ranging from sites to site columns. Actually, this pattern isn’t new; it’s been available as long as there have been remote APIs to create content. It’s just that all the tooling and MSDN documentation pointed toward using Features and Templates instead. Here are some examples of remote provisioning (a minimal CSOM sketch follows the list):

  • .NET code running in a Provider Hosted App using a client API (CSOM or REST) to create content in SharePoint. The Patterns and Practices team chose this for their large collection of samples.
  • .NET code running in a console application using a client API (CSOM or REST)
  • Client-side calls made from PowerShell (here is a Codeplex project that may help)
  • Client-side calls (REST or JSOM) made from Javascript in a SharePoint Hosted App
  • The Mechanical Turk approach: a person manually creates content using a web browser
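
For example, here is a minimal remote-provisioning sketch using CSOM from a console app (the site URL, list name, and credentials are hypothetical):

```csharp
// Create a list and add a field remotely with CSOM -- no Features, no CAML
// packaging; no code is ever installed on the SharePoint servers.
using System.Security;
using Microsoft.SharePoint.Client;

class Provisioner
{
    static void Main()
    {
        var password = new SecureString();
        foreach (char c in "password-goes-here") password.AppendChar(c);

        using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/team"))
        {
            ctx.Credentials = new SharePointOnlineCredentials("admin@contoso.com", password);

            var listInfo = new ListCreationInformation
            {
                Title = "Invoices",
                TemplateType = (int)ListTemplateType.GenericList
            };
            List list = ctx.Web.Lists.Add(listInfo);
            list.Fields.AddFieldAsXml(
                "<Field DisplayName='Amount' Name='Amount' Type='Currency' />",
                true, AddFieldOptions.DefaultValue);

            ctx.ExecuteQuery(); // one round trip creates everything
        }
    }
}
```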

The remote provisioning advice has been coming from Microsoft since last summer, but the Virtual Academy training is by far the strongest in telling developers to stop using features and templates. The main focus of the course was on transitioning from full-trust “farm” solutions to cloud-ready approaches based on the “app model.” The instructors played fast and loose with the term “app model”, extending it to mean nearly any approach that runs code outside of SharePoint and avoids the Features and Template packaging. Developers would be well advised to watch the course in its entirety, and to dig into the Patterns and Practices wiki and Yammer group. The training includes many live demos and code walk-throughs on Remote Provisioning and the reasoning behind the changes.

Farm and Sandboxed Solution Roadmap Clarified

Existing SharePoint customers may be comforted that Microsoft reiterated its plans to continue to support farm solutions for the foreseeable future, but only on premises. The instructors offered detailed advice on developing farm solutions in order to avoid the problems with Features and Templates:

  • Provision content types and site columns in code rather than using Features. The big problem here is that when a content type or site column is created by a farm solution Feature, the definition is stored directly on web servers instead of in the content database. Thus, if the feature is removed, or the content is connected to a farm that doesn’t have exactly the same solution and feature installed, all lists and libraries using those content types and site columns will break.
  • Avoid list templates. This is awkward advice because Microsoft introduced a new list template designer in Visual Studio 2013; clearly this change in direction is a very recent one. The problem with list templates is that they are dependent on a file called schema.xml which is stored on web servers; if the solution is removed, all lists based on the templates will stop working. Instead of using list templates, build out the list in code running in a feature receiver or use remote provisioning.
  • Avoid custom field types. This has been the advice for a while now; it’s too bad because custom field types were really cool (they allow you to create a new type of content in SharePoint).

What Microsoft is trying to do is remove problems in which SharePoint content is invalidated when it gets out of sync with a particular set of solutions installed on a SharePoint farm. These problems make it difficult or impossible to upgrade SharePoint, and lead to big challenges with disaster recovery, when content is restored to a new SharePoint farm from backup or in a DR replication scenario.

How many versions of SharePoint do you run?

When I speak at conferences I often ask audiences to raise their hands if they’re using more than one version of SharePoint; invariably the majority of hands go up. The reason is always the same: there is some kind of customization or ISV product that won’t survive the upgrade. The most conspicuous example of this was the Microsoft “Fab 40” web site templates for SharePoint 2007, some of which would not upgrade to SharePoint 2010; some customers still maintain a SharePoint 2007 farm just to run them. If Microsoft couldn’t get it right, what about the rest of us?

Maintaining more than one version of SharePoint is very expensive for enterprises; the extra SharePoint farms require extra hardware and a lot of extra maintenance work, much of it arcane knowledge of old and outdated technology. The worst part is that end users are constantly switching between versions making for an inconsistent user experience.

The vision is for SharePoint content to be self-contained and independent of custom and version-specific code that may be installed. Thus, a content database could be connected to a new SharePoint farm – even a new version of SharePoint – and it would just work. If Microsoft had figured this out ten years ago, you’d probably only be running one version of SharePoint right now.

These changes are a mea culpa from Microsoft; they’re admitting that it was too hard and they want to move to something better. But it’s painful for developers, who have spent years learning how to use Features and Templates, and who enjoy excellent tooling in Visual Studio. Switching to Remote Provisioning is a big step backward in productivity. Just remember that however painful it is to change the way we package and deploy our customizations, the goal is to ease a perennial pain with upgrading SharePoint.

The future of sandboxed solutions, however, is extremely doubtful at this point. You may recall that sandboxed solutions were officially deprecated in SharePoint 2013, but then Microsoft recanted and said that only the ability to run custom server code would be discontinued. In the class, one of the top recommendations was to avoid sandboxed solutions – not only those with custom server code but sandboxed solutions of any kind. The instructors pointed out problems with orphaned options that are left behind when sandboxed solution artifacts are retracted.

This is a little awkward, because Microsoft has been using sandboxed solutions in support of newly introduced features such as the Design Manager, a branding tool introduced in SharePoint 2013. Steve Walker took a hard line nonetheless,  and hinted that the sandbox would eventually be shut down once and for all. (Skip to 43:40 in the second video to hear it directly.)

Branding Guidance

During the Virtual Academy class, Microsoft provided quite a bit of branding guidance. With the exception of the new Office 365 themes, there wasn’t a lot new here, but the advice bears repeating because it once again relates to issues with SharePoint upgrades.

The traditional way to brand a SharePoint site is to change its master page, but master page changes generally do not survive SharePoint upgrades. This isn’t news; Microsoft changes the look and feel in every version of SharePoint, and master pages have needed a rewrite every time. (In many cases the old master page still works, but hides all the added functionality in the new version of SharePoint).

The problem is worse in Office 365, since new versions arrive more frequently. Microsoft has already changed the master page three times since 2013; if you had written a new master page, you would have missed the improved navigation and the Office 365 app launcher.

The advice is to take as light a touch as possible. Here are the options from lightest (and least flexible) to heaviest (the very flexible master page):

  1. Consider not branding your site. “You do not brand Outlook or Word, why do you need to do branding on collaboration sites?”
  2. Use Office 365 Themes. Changing the theme in one place will change it on every SharePoint site as well as in other Office 365 products such as Outlook Web Access and Delve. You can include a logo, URL for clicking the logo, background color, and colors for an Office 365 theme.
  3. SharePoint Themes. These affect only one SharePoint site, so they need to be changed in every site. This could be automated through a PowerShell script or custom code. You may find the SharePoint Color Palette Tool helpful in creating SharePoint themes.
  4. Alternate CSS. With this strategy, a developer builds a custom style sheet that is added to every page in SharePoint. Using this technique you can change colors and fonts, and move things around on the page. Microsoft began allowing the alternate CSS to be set using the client API (CSOM) in March 2014 online, and in the April 2014 CU for SharePoint 2013 (a minimal sketch follows this list). The Patterns and Practices group is working with the SharePoint engineering team to lock down a set of consistent element IDs and classes that will not change across new versions of SharePoint, so an alternate CSS file won’t break as SharePoint is upgraded.
  5. Custom Master Page. This allows major changes such as introducing responsive design or making the site “not look like SharePoint.” However there is an ongoing need to tweak or rewrite the master page as SharePoint upgrades occur.  This is especially problematic when master pages are installed into individual site collections, which is the only option in Office 365. If the master page is in an on-premises farm solution, it can be updated centrally, but if it’s placed into each site’s content then every site collection needs to be upgraded when changes occur.
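
To illustrate option 4, here is a minimal CSOM sketch for setting the alternate CSS (the site and style sheet URLs are hypothetical, and credentials are omitted):

```csharp
// Point a site at a custom style sheet via CSOM.
using Microsoft.SharePoint.Client;

static void SetAlternateCss()
{
    using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/team"))
    {
        // ctx.Credentials = ...; (omitted for brevity)
        ctx.Web.AlternateCssUrl = "/sites/team/SiteAssets/contoso-brand.css";
        ctx.Web.Update();
        ctx.ExecuteQuery();
    }
}
```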

Microsoft was clear that custom master pages are still supported. They called them a “tax” however; the responsibility of keeping the master page in sync with SharePoint belongs to the customer, not to Microsoft. In Office 365 where changes are ongoing and master pages are distributed, this will be an ongoing maintenance cost for the customer.

Relationship to the SharePoint and Office 365 App Models

The instructors used the term “app model” to mean nearly any approach that runs code outside of SharePoint and avoids Feature and Template packaging; however, there is an important distinction between the App models and solutions that reside within a SharePoint site!

Apps run alongside of sites – they’re isolated because the code comes from a store and was written by who knows who, so they’re less trusted. All three app models (SharePoint Hosted Apps, Provider Hosted Apps, and Office 365 Apps) provide this isolation. The isolation can be limiting; it makes many scenarios such as SharePoint branding impossible to run in an App, and it means that App Parts run within IFrames, which bring their own set of challenges.

The twist here is that rather than running the customizations in an app, the app becomes an installer that places the customization into a SharePoint site, where it can run unfettered. So rather than presenting an App Part, an app could upload a .webpart file to the site’s web part gallery and install JavaScript and other files that make the web part run directly within the SharePoint site. A minimal sketch of the upload step follows.
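
Here is that installer step with CSOM (the file name is hypothetical, and webPartXml is assumed to hold the contents of the .webpart file):

```csharp
// Upload a .webpart definition into the site collection's Web Part Gallery
// so users can add it to pages. webPartXml holds the .webpart file contents.
using System.Text;
using Microsoft.SharePoint.Client;

static void InstallWebPart(ClientContext ctx, string webPartXml)
{
    List gallery = ctx.Site.RootWeb.GetCatalog((int)ListTemplateType.WebPartCatalog);
    gallery.RootFolder.Files.Add(new FileCreationInformation
    {
        Url = "HelloWorld.webpart",
        Content = Encoding.UTF8.GetBytes(webPartXml),
        Overwrite = true
    });
    ctx.ExecuteQuery();
}
```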

This idea of using an App as an installer has been around for years actually; I used to call it “content injection” but now they call it “remote provisioning.” You probably won’t see this kind of app in the Office Store as it requires too much permission; it’s a pattern to use within an enterprise. Just keep in mind that you don’t need to use an app to do the installation; it could be a PowerShell script, a console app running in an Azure web job or Windows scheduled task, or really anything that is remote and provisions sites and content in SharePoint.

Don’t Panic

No, SharePoint is not dead, Apps are not dead, and the Earth will continue to spin on its axis for the foreseeable future.

If you build or use custom SharePoint solutions, you don’t have to change anything right now. The top-line advice from Microsoft was to move gradually to the app model and remote provisioning. But you should pay attention because the existing way of deploying content and customizations is really problematic, especially when SharePoint upgrades occur.

It’s probably OK to continue to use the old methods on existing projects; they can be converted later, and the tooling is likely to improve over the next couple of years. Right now remote provisioning requires extra development work compared with the Feature framework, mainly because the excellent tools in Visual Studio only cover Features. So there’s a tradeoff between doing the extra work now and waiting for better tools to arrive. In any case, you should be aware of the new approach and try to favor it in any new customization projects.

(cross-posted to Bob German’s Vantage Point Blog)