Jonathan Ralton will present “Taming Your Taxonomy” at the first-ever SharePoint Saturday Rhode Island on November 9, 2013. The event will be held at the New Horizons office in Providence, and is free to the public.
“SharePoint offers extensive opportunity for flexibility in the storage and retrieval of your information and documents. Whether you are planning for a small team-based collaboration site or a department-wide portal, the value of taking the time to chart your course before you start diving into site settings and configuring views on your libraries and lists is inarguable. SharePoint offers you an arsenal of constructs to tame your disorganized data: lists, libraries, columns, site columns, content types, enterprise content types, managed metadata… Thinking these through properly at the outset will help you craft a solid foundation to build upon now and in the future. The additional capabilities introduced in SharePoint 2013 such as extended managed metadata give you even more options to consider in crafting your taxonomy. This session assumes a basic end user knowledge of site structures, list and library behavior, metadata, and navigation in SharePoint.”
The weekend of October 19-20 marked the Head of the Charles Regatta in Cambridge, MA – a lovely Fall weekend, complete with Fall color, 60-degree temperatures, and loads of sunshine. At least that’s what I hear from my family. I was enjoying the less high-brow, but nevertheless edifying atmosphere of Boston Code Camp 20. Okay, I did bicycle there, so that’s something. Fall is great biking weather.
I attended 6 sessions at the code camp – all of them useful in their own way – but two of the sessions unexpectedly hooked together and stood out:
• Practical Azure with Bill Wilder
• Securing ASP.NET WebAPI Services with Brock Allen
Azure and Claims-based Authentication
Why did these two presentations sync? Well, stay tuned. I don’t want to give away the exciting conclusion. Bill’s “Practical Azure” presentation was first, and he managed to pull off a daring presentation stunt: he built an app on the fly and it actually worked (okay, mostly). In a nutshell he did the following:
• He logged into his Azure account
• He set up a remote Active Directory
• He added a user to this new domain
• He then opened Visual Studio 2013, started a new Web Application Project, and in the course of the Project Wizard, changed the Authentication model to Cloud-based “organizational accounts”.
• After the project was created, he ran it and logged into his new Active Directory.
Did you get that? He simply ran the app and logged in using an Active Directory store in the cloud. It was that easy. It just worked. And then he made a side comment, “If you’re not using Claims Based Authentication, you’re doing things the hard way.”
Clearly. Well, this comment sort of stuck with me because I haven’t been using claims-based authentication. In fact, I’ve been mostly enamored with the SimpleMembership framework and MVC OAuth integration. Maybe the Regatta crowd already had this down, but I didn’t. So I made a mental note to get back to this. Bill showed off a couple more Azure bells and whistles – web hosting, virtual machine creation – and then we broke for the next session.
Securing WebAPI Services and the New ASP.Net Identity
And that brings me to Brock Allen and “Securing ASP.NET WebAPI Services”. But let me first give a little background. The MVC declarative security model was a giant leap forward from Web Forms security. For the first time, it was possible to secure web apps declaratively, at a high level, preventing security holes in the application code. The trouble with the model, however, was always trying to shoehorn the clunky Membership Provider into whatever security requirements the application had. Well, it just wasn’t possible in many scenarios. In the clinical landscape, for example, user roles had to change based upon which study they were viewing. The out-of-the-box provider is more or less useless when roles must change on the fly.
So some of us got really good at understanding the IIS pipeline and the Authentication event. Swapping in your own IPrincipal object was a great way to customize security and still leverage the declarative security model. I thought I had this down pretty well, but Brock is clearly on another level.
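For readers who have not built one of these, here is a minimal sketch of that IPrincipal swap written as an ASP.NET HttpModule. The module name and the per-study role lookup are hypothetical illustrations, not code from either presentation:

```csharp
using System;
using System.Security.Principal;
using System.Web;

// Sketch: replace the current principal during the IIS integrated pipeline
// so that declarative [Authorize(Roles = "...")] checks see custom roles.
public class CustomPrincipalModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // PostAuthenticateRequest fires after the default identity is established,
        // which is the right moment to swap in our own IPrincipal.
        context.PostAuthenticateRequest += OnPostAuthenticateRequest;
    }

    private void OnPostAuthenticateRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        IIdentity identity = app.Context.User != null ? app.Context.User.Identity : null;
        if (identity == null || !identity.IsAuthenticated)
            return;

        // Hypothetical lookup: roles that change per request, e.g. per clinical study.
        string[] roles = LoadRolesForCurrentStudy(identity.Name);

        app.Context.User = new GenericPrincipal(identity, roles);
    }

    private static string[] LoadRolesForCurrentStudy(string userName)
    {
        // Placeholder; a real module would consult a database or cache.
        return new[] { "Reviewer" };
    }

    public void Dispose() { }
}
```

Once the module is registered in web.config, the declarative authorization attributes simply see whatever roles were swapped in.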
Brock is a fast talker. You dare not blink once he starts. His focus was on securing Web API code – with the unstated but immediately clear objective of doing so under various hosting circumstances. Let me explain. When you’re in IIS, you have the integrated pipeline mentioned above to hook into with authentication handlers. By contrast, when you are self-hosting, the security problem gets much more complicated. So the latest .Net Framework release, 4.5.1, provides two helpful abstractions:
• The new Asp.Net Identity framework
• The Katana project and OWIN Authentication
It’s the latter of these two abstractions that provides the means of hooking into the authentication event for self-hosted apps and injecting a custom IPrincipal. As Brock explains, the magic in OWIN is that one can build authentication modules that do not specifically target IIS, thereby making the authentication code portable across the infrastructure. And the code to do so looks very much like the familiar HTTPHandlers of yore. So it didn’t feel like too much of a stretch. Brock did remark, however, that for many situations, the IIS pipeline is sufficient, and leveraging the OWIN architecture may very well be over-engineering the solution. I don’t have an immediate need to leverage OWIN and an abstracted authentication handler; however, it is good to know it is there. If nothing else, SignalR is hot and securing it properly outside a web server requires knowledge of OWIN.
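To ground that, here is a minimal sketch of a host-agnostic OWIN authentication middleware in the spirit Brock described; the header name and the ValidateToken helper are assumptions for illustration only:

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.Owin;

// Sketch: an authentication middleware that never touches IIS types,
// so the same code runs under IIS or a self-hosted Web API.
public class HeaderTokenAuthenticationMiddleware : OwinMiddleware
{
    public HeaderTokenAuthenticationMiddleware(OwinMiddleware next) : base(next) { }

    public override async Task Invoke(IOwinContext context)
    {
        // Hypothetical token header; a real app would likely use Authorization: Bearer.
        string token = context.Request.Headers["X-Auth-Token"];
        ClaimsPrincipal principal = ValidateToken(token);

        if (principal != null)
        {
            // The moral equivalent of swapping IPrincipal in the IIS pipeline.
            context.Request.User = principal;
        }

        await Next.Invoke(context);
    }

    private static ClaimsPrincipal ValidateToken(string token)
    {
        // Placeholder validation; a real implementation would verify a signed token.
        if (string.IsNullOrEmpty(token)) return null;
        var identity = new ClaimsIdentity(
            new[] { new Claim(ClaimTypes.Name, "demo-user") }, "HeaderToken");
        return new ClaimsPrincipal(identity);
    }
}
```

Registered with app.Use&lt;HeaderTokenAuthenticationMiddleware&gt;() in a Katana Startup class, the same code behaves identically whether the pipeline is hosted in IIS or in a console process.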
Putting it Together: Asp.Net Identity and Claims
At this point, I’ve got two loose threads in this discussion: Asp.Net Identity and that gnawing quote from Bill Wilder regarding claims-based authentication. Let’s see if we can tie these threads up, or, rather, let’s see if we can leverage some expert guidance to tie them up.
Brock promised a blog post that would dive a bit deeper into .Net 4.5.1 security changes, and he published the post the next day. It’s a great read, and, frankly, it does a superb job of explaining Asp.net Identity and its relationship to claims-based authentication. In a nutshell, Asp.NET Identity is the new security framework, to replace SimpleMembership. To be clear, the line of succession here is:
Membership → SimpleMembership → Asp.Net Identity
Asp.Net Identity ditches the provider model. The interface is extensible and makes customizations easier and more maintainable. The plumbing to the underlying user store is implemented in a UserManager working against a UserStore, and the default store is built on top of the Entity Framework. It looks like a great improvement (though Brock is tepid on it, preferring his own MembershipReboot framework).
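As a rough sketch of those moving parts, using the Microsoft.AspNet.Identity and Microsoft.AspNet.Identity.EntityFramework packages (the "AppDb" connection string name and the sample credentials are hypothetical):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;

// Sketch: UserManager talks to a UserStore; the default store is EF-based.
public static class IdentityDemo
{
    public static async Task RegisterSketchAsync()
    {
        // Hypothetical connection string name.
        var dbContext = new IdentityDbContext<IdentityUser>("AppDb");
        var userManager = new UserManager<IdentityUser>(new UserStore<IdentityUser>(dbContext));

        var user = new IdentityUser { UserName = "alice" };
        IdentityResult result = await userManager.CreateAsync(user, "P@ssw0rd!");

        if (result.Succeeded)
        {
            // Identity hands back a ClaimsIdentity, which is why claims and
            // ASP.NET Identity complement one another.
            var claimsIdentity = await userManager.CreateIdentityAsync(
                user, DefaultAuthenticationTypes.ApplicationCookie);
        }
    }
}
```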
So claims-based authentication does not stand in contrast to Asp.Net Identity; rather, it complements it. You can leverage IIS and Visual Studio 2013 to set up claims authentication for you (think Azure Active Directory). Or, if you have a need for both claims-based and internal user store authentication (think SQL / Oracle / MySQL), you can write a handler that accepts a claims-based token and swaps the IPrincipal early in the IIS / OWIN pipeline. If there is no claims token early in the pipeline, the user proceeds down the usual path, ending up at a login form and SQL Server or OAuth-based authentication.
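A hypothetical Katana Startup class illustrates that arrangement: try a claims token first, and fall back to the login-form path otherwise. The middleware is the one sketched earlier, and the login path is an assumption:

```csharp
using Microsoft.AspNet.Identity;
using Microsoft.Owin;
using Microsoft.Owin.Security.Cookies;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // 1. Try to authenticate a claims-based token early in the pipeline
        //    (a custom middleware like the earlier sketch, or one of the
        //    Microsoft.Owin.Security.* packages for Azure AD / ADFS tokens).
        app.Use<HeaderTokenAuthenticationMiddleware>();

        // 2. If no token was presented, fall back to the usual path: a cookie
        //    challenge that sends the user to a login form backed by the
        //    internal user store (SQL Server, OAuth providers, and so on).
        app.UseCookieAuthentication(new CookieAuthenticationOptions
        {
            AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
            LoginPath = new PathString("/Account/Login") // hypothetical route
        });
    }
}
```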
I missed the regatta, but I think I may be better for it. What was great about both of these presentations is that they exposed some really powerful features of ASP.Net 4.5.1, Visual Studio 2013 and Azure. They were both informative presentations that intrigued me enough to follow up on Monday. I left the camp with more questions than answers, but doing a little homework sewed things up. My tool belt feels a little heavier now. Also, there was enough of the crisp Fall day left to chase the sunset home. I managed a stop at the Wine and Cheese Cask for Dale’s Pale Ale. Dale’s comes in bike-friendly cans.
About Brett Miller
Brett Miller is a Senior Software Engineer at BlueMetal Architects. You can find out more about him on LinkedIn.
In our second post, we will discuss, with a concrete example, how a brand can maintain its values when applying them to a digital experience that may not be within its purview.
A good example is a travel brand. Let’s take the website hipmunk.com.
Hipmunk states that it is ‘The fastest, easiest way to plan travel’ and its user experience certainly delivers on that brand promise to its customers.
Planning a flight on the website is done in a way that makes sense to REAL people. Fares can be listed by price, time, departure, arrival, etc., but also by ‘agony’ – in other words, they understand that customers are looking for the perfect balance between keeping costs down and a flight that is not painful.
These results are presented in a ‘schedule’ format that lets users see when they depart and arrive, visually compare options, and rapidly select the right choice for them.
It’s these little details in the experience that fulfill its brand promise. The overall look of the site is clean, functional and friendly.
Similarly, when booking hotels, hipmunk presents results and allows users to filter by the type of hotel they’re looking for (luxury, romantic, business, kid-friendly, etc.), which is unusual.
So how does hipmunk translate this to a mobile experience?
When using the hipmunk app, it does NOT feel like a typical iPhone app. It feels very much as if you are in hipmunk’s world – the mascot takes center stage and animates in a cute way when loading results, and the button borders and fonts are consistent with the website. Nothing has been sacrificed in the experience that takes away from the brand.
A nifty feature that makes more sense on a mobile device is the hotels heatmap overlay feature, which allows you to know which hotel is closest to the things you want (food, entertainment etc). It’s a simple easy way to navigate complex choices, leveraging the location-aware functionality of the device.
But hipmunk has the advantage of being designed in the digital era, so translating its brand is substantially easier.
What about companies who existed before digital, how do they fare?
GE is a good example of what can go wrong when brand isn’t part of a digital strategy. GE has many applications across different lines of business. Because these apps are often built in-house, creative license is taken with the look and feel; or, because it’s practically cumbersome to change the look and feel, teams will often reuse existing pieces from other apps.
This leads to a situation where a company with a powerful distinctive brand on its website, ge.com, has a heavily diluted and inconsistent experience across its suite of apps.
How could we rectify this problem?
The first step is to look at the website. A company’s website is still its goto brand statement. We study it from two perspectives:
How do we flow through the website?
What feelings does the website inspire?
First, the website is extremely simple to navigate. Simple sounds dismissive but here it refers to GE’s ability to distill the complex into something that is easy to understand.
A couple of things contribute to the ‘feel’: the statements of GE’s returns in numbers indicate that GE’s priority is ‘less talk, more results’. There is a sense that GE’s work and brand are about benefiting humankind through innovation and imagination (reflected in its tagline ‘imagination at work’).
This sense is also paired with a simple, refreshing approach to its visuals and font choice. There is a lot of white negative space, which indicates a clean, focused brand. The blue against white evokes clouds and sky, which couples well with imagination.
So how could this design be brought into one of their apps?
Let’s compare two of them:
You can see on the left the use of black, gradients, and an overall heavy brand. A significant lack of imagery also takes away from the feeling of ‘imagination at work’.
The one on the right is better, especially in its use of fonts and negative white space in its areas of content, but to improve it, we would have recommended:
Changing both the top and bottom to a flat blue with white text. The use of gradients and beveled buttons was an Apple standby, and a company’s digital experience should not be limited to the OS on which it is built.
Avoiding the back button, which hinders so many iPhone apps. A flexible UI where the user can jump from one page to another is more intuitive and creates a better experience for the user. When users are locked into a next-page-to-next-page approach, it limits the experience to the Apple experience and is not a truly branded GE experience.
Finally, one word about organizing your organization for digital brand cohesion – your branding team MUST have the final say on any experiences that are to be put in front of a customer. Any touchpoint, regardless of how small, is a representation of your brand, and to have consistency in experience, tone, visual identity and feel, you must have a brand guardian who can build your brand equity in the marketplace.
In our next and final post about brand and digital, we will look at choosing the right operating system to maintain and evolve your brand as effectively as possible.
Effective leadership in any environment is tricky business. Exerting leadership in a technical environment such as software development poses additional challenges of its own. Often, it is tacitly assumed by project stakeholders that the architect will act as the tech lead. In fact, crafting a solid technical vision of what is to be built, and leading the effort to successfully build it, are two very different skillsets. They are closely related, of course, but there is no guarantee that the same person possesses both in sufficient quantity.
The sheer complexity of building large distributed systems in modern, on-premises and cloud computing environments is one special factor inherent in software. The other constant is change. Change occurs at a relentless and increasing pace in software. As Jack Greenfield describes at length in Software Factories, complexity and change are almost inescapable. They are two heads of the same monster that must be constantly battled, and this monster seems to be getting bigger and faster all the time.
To win the battle, a good technical leader has to find a way to effectively use all the skills, experience, and mental horsepower available within the team. I choose the term ‘available’ with care because choosing or hiring members of a team is a luxury seldom accorded to the architect. Far more often, your success will depend upon getting the best possible performance from a group that is already assigned to the project. Turning a disparate group of people into an effective team, capable of coping with high complexity and rapid change, is no mean feat.
The approaches for meeting these challenges sometimes tend toward extremes. On one side of the spectrum, you find the micromanagement approach. On the other side, you find what I will call the “grand vision” approach. Both extremes risk epic failure. To see why, let’s take micromanagement first and do a little thought experiment.
Imagine, if you will, the micromanagement approach to getting someone to make a cup of coffee for you. Plan it to the nth degree. Give a written description in nauseating detail and insist that it be read. Then, give a live demonstration of exactly how you would like your coffee to be made. (We can ignore the fact that your designated coffeemaker will probably spend most of this time in silent contempt, devising ways to subvert you by blindly following any mistakes.) Now imagine a new coffee machine is installed a moment after your demonstration is finished; one which neither you, nor your coffeemaker, has ever seen before. Aside from the negative human dynamic, micromanagement creates maximum fragility in the face of changing technology because it fails to empower individuals to solve problems when the inevitable curveballs and complexities crop up.
Now let’s vary our experiment slightly to see what’s wrong with the grand vision approach at the opposite extreme. This time, say to your would-be coffeemaker, take all the time and money you need, but make me the best cup of coffee in the world. Even with unlimited resources, this vision is not actionable because it lacks sufficient definition of what is to be achieved, of what defines “the best”. Not even a master coffee brewer with a lifetime of knowledge and experience could satisfy this request. As with micromanagement, this approach to technical leadership may seem like a straw man, but it never ceases to amaze me how often experienced architects and technical decision makers tend toward one of these extremes or the other, failing to realize why these approaches are ineffective.
The art of effective leadership lies somewhere in the middle. It was initially worked out long before the first line of code was even written, and it has become highly refined and widely embraced since then; not in the software field, but on the battlefield. In his famous treatise On War, Carl von Clausewitz realized that a battle plan, regardless of how meticulously crafted, could not be sustained during the actual battle due to the physical and psychological complexities and rapidly changing conditions which he referred to as the fog of war. And yet, field units have to work in a well-coordinated way to achieve victory.
In modern warfare, von Clausewitz realized that field commanders could not rely on precise command signals and close coordination from a central command post. Lines of communication may get disrupted, but even if they don’t, changes may be occurring too quickly to direct a response effectively. To cope with the realities of modern warfare, von Clausewitz saw that leaders needed to convey the desired end state of the battle to their subordinate officers down the ranks as clearly as possible before the battle began. In this way, they could rely upon the intelligence and resourcefulness of individual officers and field units to react to changing conditions without losing sight of specific objectives which were essential for final, overall success. Von Clausewitz referred to this as ‘intent of command’. By clearly communicating intent of command, a general could enable increasingly large and distributed forces to make efficient and effective decisions or choose tactics based on immediate conditions, while remaining aligned with the larger context and intended outcome.
The parallels to modern software development are both obvious and important. In addition to coping with the challenges of complexity and rapid change, it is commonplace for large teams to be highly distributed, sometimes globally. A guiding vision that defines the desired end state of a project is necessary, but not sufficient to ensure success. The art of effective leadership depends upon communicating that vision throughout the team at precisely the right level of detail. The technical vision—architecture—must be communicated with sufficient detail and precision to be actionable by each member of the team. Too much precision becomes confining, or even paralyzing; too little is not actionable. To be effective, leadership must leave enough latitude for sub-teams and individual developers to bring all of their skills, experience, and specific expertise to bear on questions of detailed design and implementation. It is like the art of tightrope walking, requiring constant readjustments to maintain balance. For example, the proper dose of detail and guidance may differ with individuals on the team, based on different levels of skill and experience. That’s why effective technical leadership is an art, not a science.
Due to increasing complexity in software, very high levels of expertise in many different areas are often needed to build a single solution. Deep expertise may be needed in areas as diverse as networking, languages, security, data flow, user experience, algorithms or business rules all in the same application. It is rarely if ever the case that an architect or technical lead will possess the highest level of expertise in every area relevant to the project. Even then, however, effective technical leadership would still need to leverage all the skills and expertise the team has to offer. Leadership in this world means empowering such experts to make good design decisions based on their domain of expertise, to address complexities and quickly respond to changes in the project landscape; but based on a clear understanding of the central objectives of the project.
The standard knock against the waterfall methodology, that it is too brittle in the face of change, is valid; but at least this approach ensures an actionable plan. Agile software development seems much more conducive to an intent-of-command style of leadership, but if the desired end state of the project has not been adequately communicated, developers may be at risk of wandering away from the larger project objectives and wasting time and effort on work that doesn’t really advance those objectives.
In Debugging the Development Process, Steve Maguire suggested that “To make it easy to determine which tasks are strategic and which are wasted effort, leads should create detailed project goals and priorities. The more detailed the goals and priorities are, the easier it is to spot wasteful work.” At first glance, this may sound like it’s tending toward micromanagement. Note, however, that he’s talking about defining goals and priorities, not simply tasks or specific implementation details. Maguire instinctively recognized the need to define the desired end state in detail—as well as the danger of not specifying the goals with enough detail to be actionable. It’s hard to argue with such success.
Agile methodology offers flexibility and resilience in the face of unanticipated change, but this doesn’t exempt the project lead from clearly defining the intended outcomes of the project. Even if stories or primary use cases should change during the course of a large project, an effective leader needs to assess the impact of those changes, revise goals, and communicate them effectively to the team as well as the project stakeholders. The more code that has already been written, the greater the potential impact that fundamental changes could have on cost, schedule, and design. Changing stories or project objectives constantly, however, causes thrashing – a formula for chaos, waste, and failure. For this reason, developers must have confidence that the project lead will ensure that the goals embodied in the architecture are fairly stable and well aligned to business needs.
In a notoriously misunderstood remark, von Clausewitz claimed that “War is merely the continuation of policy by other means.” He realized that war was not an end in itself, but rather a means to achieve purposes defined by politics which could not be achieved by other means, such as diplomacy. By the same token, it is rare that software is built as an end in itself. Even the most innovative proof-of-concept work is generally executed as a means toward addressing a concretely defined need. Just as a seasoned general may be asked whether victory is possible, and at what cost, a technical lead may be asked whether a product or solution is technically feasible, and at what estimated cost. While waterfall estimates are notoriously inaccurate, agile estimates are conspicuously vague.
In a moment of unguarded candor, a very seasoned developer once said to me “Nobody knows how long a project will take. Anybody who says they do is just an [expletive].” Though generally unspoken, this is hardly an isolated opinion among veteran software developers. The factors of rapid change and complexity make precise estimates virtually impossible. Yet, most would concede that it is not wholly unreasonable for someone who is about to commit millions of dollars in funding for a large project to ask whether it is enough to achieve the desired end state. This dialectic of the impossible and the absolutely necessary is part of every major development effort. Anybody who says it isn’t is just being…well, disingenuous, let’s say.
Here again, technical leadership is more art than science. Experience and good judgment are prerequisites, along with the skill to communicate a realistic margin of error, the associated risks of misestimates, and any potential mitigations for them. Perhaps the worst mistake would be to convey a greater degree of certainty or assurance than a given project permits, that is, to ignore the unknowns. Pretending you have a crystal ball simply deprives stakeholders of the opportunity to recognize, assess, and manage risks appropriately.
At first glance, the notion of “command” might sound misplaced in a discussion of technical project leadership; but understood in its proper context, it really means communicating the mission in enough detail to empower thoughtful and effective action. Von Clausewitz’s ideas have been widely embraced by military strategists around the world as an effective way to deal with complexity, rapidly changing conditions, and distributed decision-making based on a centrally defined mission. It is a proven approach when the stakes are at their highest. With an open mind, perhaps we can learn a thing or two from a discipline other than our own, where leadership is essential to success.
The art of technical leadership depends on articulating the intent of command with sufficient detail. It’s about making sure the team knows WHAT is to be built with enough clarity to guide effective action. Since there is very seldom only one way to accomplish something, it means leaving the HOW up to the skill and ingenuity of individual developers or feature teams. It also means working with stakeholders to clarify, refine, and revise the intended outcomes when needed, to help ensure that they remain achievable and well understood.
SQL Server’s role in data architecture has evolved greatly over the last decade. One point we can see in this evolution of the database services is how availability and functionality have come together. Data architecture requires a keen sense of vision when it comes to combining availability with functionality, and when designing architectures that meet both criteria, architects often had to go to great lengths to ensure both would be achieved and the result could be considered enterprise-ready.
Is it truly Enterprise?
Prior to SQL Server 2012, several areas of designing a full data solution took considerable time. These areas always revolved around data availability: limiting data service interruptions, being able to access data services functionally without directly affecting or causing interruptions, and achieving overall data intelligence. Data intelligence forms from all of the major points in a data architecture, and availability combined with functionality is one foundation of how intelligently we can bring data to end users or to the services consuming it. To achieve such an architecture while maintaining the data intelligence that truly forms enterprise data solutions, prior versions left us stuck with one solution plus customizations.
Figure 1 depicts the setup that was the primary option with SQL Server for many years. This setup is composed of a cluster, disk subsystems that implemented their own availability, entry points into the clustering resources, and a cloud of services sitting directly in the clustering layer. That cloud often contained several service-type solutions, such as agents, job schedulers, customer routing and so on. These services often lacked availability of their own, and in the case of a true disaster or a customization need, they were often the pain points in an architecture.
Figure 1 – Typical single data service architecture – Full Cluster Instance
Figure 1 offers several areas for improvement. We can uncover those areas by running scenarios through use cases. For instance, imagine this single data service is one of many services in the overall architecture. A task comes in that requires the single data service to enlist a new reporting mechanism. While that reporting mechanism is critical to the business, the ability for the data service to maintain its high performance ability is even more critical.
Tasks like these always required a great deal of design, testing and implementation. Adding reporting layers on top of a data service in a clustering configuration like the one above leaves us with choices such as replication, mirroring with snapshots, database snapshots or a custom SOA. All of these options have merit and can be developed successfully. The problem lies in the time to market we will have in implementing them. We’ll also look at other areas of concern, such as disaster recovery and secondary failover, as we uncover them later.
SQL Server 2012 Enterprise
Let’s start to uncover what we’ve found in SQL Server 2012 Enterprise and see how it adapts to enterprise conditions in the same scenario.
SQL Server 2012 took on a great deal of change on the surface as it relates to high availability with functionality. That change was implemented as AlwaysOn. AlwaysOn was a true buzzword – so much so that the technical community and its leaders quickly abandoned it. The true power we have now is not AlwaysOn itself but what sits under the term: the Availability Groups feature.
Availability Groups provide a level of availability that was never easily achieved in prior versions of SQL Server. Of course, figure 1 implements high availability with clustering. That clustering is a protective layer against failures of the hardware, the network between the two physical servers, or operating functions. However, the one key piece that was misleading and often overlooked was data protection. Data protection was a serious flaw in the previous clustering setups. This flaw was only exacerbated by not thoroughly thinking through the entire data architecture as it related to failover scenarios. In figure 1, we have a single point of failure, or a single point of data service disruption – the disk storage. Figure 1 does outline the disk storage as being redundant as well, but this redundancy in many cases is not a seamless recovery point for SQL Server. Technologies such as SAN replication were a great method of protecting disk and recovery points; however, in most cases, SQL Server would fault or need manual intervention in order to promote those recovery points to production. So we can see that the data in figure 1 is a point of interest in expanding or enhancing the architecture.
Availability Groups combine data protection with hardware, network and operating-function protection. As a bonus, Availability Groups also combine availability with functionality. This is outlined in figure 2. Figure 2 has a much more complex architecture, in the form of paths, than figure 1. These paths are what set it at a level of enterprise that we will soon understand. There is also a fundamental clustering change in figure 2: a method of clustering in which each node persists on its own behalf, while availability is enacted based on which nodes remain and which node has been lost in the overall architecture. Another point we can see is the red dashed line coming from the far right node and servicing data consumers. This red line depicts a disaster scenario. In figure 1, we lacked this ease of setup; features such as log shipping or the slower backup-and-restore method would have been implemented, spreading off to nodes that were not part of the cluster’s node set. In figure 2, this recovery point can be part of the architecture, within the same feature and design.
Remember our scenario, “Imagine this single data service is one of many services in the overall architecture. A task comes in that requires the single data service to enlist a new reporting mechanism. While that reporting mechanism is critical to the business, the ability for the data service to maintain its high performance ability is even more critical. ”
The scenario we are discussing takes on a different implementation and strategy in the architecture in figure 2. Note the labels on the major blue paths to the inner red box within the overall blue box depicting the entire cluster. These paths are labeled with either read or read/write. In Availability Groups, we are provided a read-routing solution. This routing mechanism provides the ability to route connections that specify a read-intent attribute. With this distinction in the connection, data consumers in a reporting situation can be routed away from the active transactional or primary replica in the architecture. Imagine now the design and implementation time for this solution compared to figure 1. This is a great deal more adaptive to a situation we would have considered enterprise-capable.
Figure 2 – Availability Groups with Windows Server Failover Clustering
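To make the read-routing concrete, here is a small sketch of a reporting consumer opting into it from ADO.NET. The listener and database names are hypothetical; the key detail is the ApplicationIntent=ReadOnly keyword:

```csharp
using System;
using System.Data.SqlClient;

class ReadIntentConnectionDemo
{
    static void Main()
    {
        // Connect through the Availability Group listener; ApplicationIntent=ReadOnly
        // lets read-routing send this session to a readable secondary replica.
        const string connectionString =
            "Server=tcp:AgListener,1433;Database=SalesDb;" +   // hypothetical names
            "Integrated Security=SSPI;ApplicationIntent=ReadOnly;" +
            "MultiSubnetFailover=True";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT @@SERVERNAME;", connection))
        {
            connection.Open();
            // On a correctly configured group this reports a secondary replica,
            // keeping report traffic off the primary's write workload.
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}
```

Connections that omit the read-intent attribute continue to land on the primary replica for read/write work.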
The last point that can be uncovered is the single point of failure from figure 1: the disk. In figure 2, we still remain attached to a SAN or some other external disk system, and that disk should still implement some sort of recovery plan, either replication or mirroring. However, notice that in figure 2 we are now relating data in each node, and each node has a repository segmented on the disk. It is set up this way for data protection within the availability group. This data protection is the same technology we have seen in the past with mirroring. However, mirroring lacked some fine-tuned designs that Availability Groups fix by combining an entire solution of hardware, network, data and operating system level protection. In this scenario, the architecture remains completely functional in the event of a data failure caused by a failover in the data layer, while still maintaining the same node at the operating system level.
Availability with functionality is layered upon the features already achieved in Availability Groups. Building the architecture out and up with the same features we had in figure 1, or in previous SQL Server releases, remains a viable addition to figure 2’s architecture.
This has briefly portrayed the major advancements SQL Server 2012 has provided for enterprise solutions and data architecture needs. Availability has always been, and should always be, an absolutely critical aspect of your data service. Combining the need for functionality with availability advances the data architecture into a true enterprise solution that is open to scalability and enhancements while maintaining the highest data quality.
The team from BlueMetal Architects will be at SharePoint Saturday Chicago on 11/2/13.
Adam Turner will present a case study on “SharePoint 2013 Search,” including Content Source Crawling and Managed Property Extraction, Federated Search with YouTube, and Integrating Google Maps API with SharePoint 2013.
Darya Orlova will present “Integration of Yammer with SharePoint.”
We are Gold sponsors of the event, so please stop by the booth and say hello to the team!
Registration is full, but you can add yourself to the waitlist here.
We hope to see you there!
In case you didn’t notice, Microsoft seriously moved the cheese for SharePoint developers in SharePoint 2013 with its new “App” model. Since the beginnings of SharePoint, developers have deployed code that runs on the SharePoint servers themselves, yet with this new model, code runs in the browser, on an external server, or in Windows Azure – pretty much anywhere except in SharePoint itself!
While the old ways still work, and remain necessary for some tasks, developers are encouraged to rethink the way they develop for SharePoint. As explained in this article, there are a number of advantages to the new model. It’s a lot like moving from MS-DOS development, where code could do anything (including destroying the server!), to developing a phone app, where code runs in an isolated, tightly controlled environment. Developers may grumble, but it’s the right thing to do.
At BlueMetal, many of our clients are interested in this new way of programming but still aren’t ready to start using SharePoint 2013 Apps. The good news is that it’s possible to make most of the change by simply changing the approach to development, even in SharePoint 2010. If and when a client is ready to move to SharePoint 2013 Apps, the code comes across almost completely, and only the packaging needs to change.
BlueMetal has already begun using these techniques. For example, my colleague Julie Turner recently wrote an elaborate dashboard that runs completely in the web browser and is packaged in only a “content editor web part”. This works in SharePoint 2010 and as a SharePoint 2013 App! Not only does this work for her client (who couldn’t use a “farm” or “sandboxed” solution), but it was easily ported to the new App model as well.
I just published two samples which illustrate these new techniques, along with detailed instructions.
Future Proof Solutions Part 1 is a site creation solution that lists and creates new SharePoint sites. Using this web part allows end users to find and create sites in a consistent and simple manner. Two versions of the code are available: one is packaged as a SharePoint 2010 content editor web part, and the other as a SharePoint Hosted App for SharePoint 2013.
Future Proof Solutions Part 2 is a location mapping solution that geocodes and maps contacts and shows them in a web part. It also shows how to use the new Geolocation field and Map View in SharePoint 2013. Again, two versions are available: one is a SharePoint 2010 Visual Web Part and event receiver, and the other is a SharePoint 2013 Provider Hosted App with a remote event receiver. Nearly all the code is common, even though the packaging is very different.
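As a hedged sketch (not the sample’s actual code), the remote event receiver half of a provider-hosted app is a web service implementing IRemoteEventService; the geocoding steps mentioned in the comments are hypothetical:

```csharp
using Microsoft.SharePoint.Client.EventReceivers;

// Sketch: the list-item logic a 2010 farm solution placed in an event receiver
// moves into a service hosted outside SharePoint in the 2013 app model.
public class ContactsRemoteEventReceiver : IRemoteEventService
{
    // Synchronous (-ing) events: SharePoint waits for the result and can cancel.
    public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
    {
        var result = new SPRemoteEventResult();

        if (properties.EventType == SPRemoteEventType.ItemAdding)
        {
            // Hypothetical validation of the incoming contact item.
        }

        return result;
    }

    // Asynchronous (-ed) events: fire-and-forget notifications.
    public void ProcessOneWayEvent(SPRemoteEventProperties properties)
    {
        if (properties.EventType == SPRemoteEventType.ItemAdded)
        {
            // Hypothetical: geocode the contact's address and update the item's
            // Geolocation field back in SharePoint via CSOM.
        }
    }
}
```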
Please check them out, or send your developers to learn how to build SharePoint solutions that will work today and tomorrow, on premises or in the cloud. Or give us a call if you’d like us to help build a future-proof solution for your business!
We’re at SharePoint Fest in Chicago this week! We’re Gold sponsors of the conference, and many of the members of our IM team will be in the vendor hall and attending sessions.
Our own Bob German will present two sessions:
“Future-Proof your SharePoint Customizations: Build 2010 Solutions that become 2013 Apps” at 3:10 pm CT on Tuesday 10/8
“Search-First Migration: Using SharePoint 2013 Search for All Versions of SharePoint” at 11:20 am CT on Wednesday 10/9.
If you’re at the conference, please stop by Booth G10 and say hello to the team!
Yesterday, Massachusetts House lawmakers voted to repeal the “tech tax,” or Sales and Use Tax on Computer and Software Services. We at BlueMetal strongly supported this repeal, and we wanted to share our thoughts on the tax.
The background: on July 24, 2013, the Commonwealth of Massachusetts announced Technical Information Release (TIR) 13-10, which allowed the state to apply sales and use tax to certain services related to computer system design and to the modification, integration, enhancement, installation or configuration of standardized or prewritten software. These changes were effective July 31, 2013. This piece of legislation was slated to draw approximately $161 million in revenue from the tech community in the state.
The reality of the adoption, in conjunction with the haste with which the tax was imposed, is that it is difficult (if not impossible) to implement this tax law. It required consulting service organizations that were implementing standardized software for clients to segment time tracking down to the hours spent on each particular facet of the client experience, from discovery and scoping to coding, rollout and training. If consulting were a simple formula, this would be fine, but anyone who is actually involved in a consulting practice knows that adaptations to the plan happen as a normal course of doing business. Imposing tax on a portion of a consultant’s time based on the loosely defined parameters of this legislation is nearly impossible to do with any margin of accuracy.
From a direct business perspective, we at BlueMetal were not going to be impacted directly by this tax law for the following reasons:
- We are not in the business of implementing standardized software as it is defined in the existing Computer Industry Products and Services Regulation, 830 CMR 64H.1.3, as follows:
“computer software, including prewritten upgrades, which is not designed and developed by the author or other creator to the specifications of a specific purchaser. The combining of two or more prewritten computer software programs or prewritten portions thereof does not cause the combination to be other than prewritten computer software. Prewritten computer software includes software designed and developed by the author or other creator to the specifications of a specific purchaser when it is sold to a person other than the specific purchaser. Where a person modifies or enhances computer software of which the person is not the author or creator, the person shall be deemed to be the author or creator only of such person’s modifications or enhancements. …”
- Modifications to prewritten software that are subject to tax under the new law are modifications to software which is licensed, sold or otherwise made available to more than one user, where such prewritten software is modified for the use of a specific customer. The modification may be made either by the original seller/licensor of the software or by a third party. For purposes of this tax on modification, integration, enhancement, installation or configuration of standardized (prewritten) software, prewritten software does not include proprietary code owned by the provider (seller) of the modifications if that proprietary code is not separately licensed to customers. Custom application software (including custom software that incorporates such proprietary code) that is designed to run on a prewritten operating system is treated as custom software and not as a modification of the prewritten operating system software.
- There had been speculation regarding taxability of Open Source software and the State came back in August to revise their response as follows: Open Source Software is available free on the Internet. Thus, no tax applies to the transfer of Open Source software where there is no consideration for the transfer.
- Where we were also concerned initially in regard to our IM services, particularly SharePoint migrations and upgrades, I researched and found that services regarding data conversion and data migration are considered exempt data processing services and remain non-taxable under the new law so long as the charges are separately stated and set in good faith. This includes data conversion and/or data migration of a customer’s data from the customer’s legacy software to the new system. These data services may include, but are not limited to, formatting data, loading of data, data monitoring, data migration, and data conversion. Data conversion is defined as a process of converting computer data from one format to another.
However, because of the aforementioned practical flaws in its application, we have been advocating strongly for the repeal of this legislation. Similarly, in response to pushback from the tech community, the Governor, the Senate President, the Speaker of the House and the Commissioner have all released public statements in support of repealing this new sales tax provision. We applaud the swift action to repeal the tax.
Sometimes the more things change, the more they stay the same.
A few weeks ago I attended the Financial Forecasting & Planning Innovation summit at Boston’s Seaport Hotel. The summit is billed as bringing together finance executives in charge of financial planning and analytics to discuss innovation in the field and learn from each other. I find industry conferences and trade shows to be some of the best places to meet professionals and to learn about the current state of the art. This summit in particular was interesting because it had a strong focus on the technology used by finance forecasting professionals and their experiences in implementing it. As a result it also attracted a fairly large number of salesmen.
It’s been a few years since I have dealt with financial forecasting. When I last left the field, it was a Sisyphean task of crunching numbers that were obsolete before the results were even ready, and meaningful guiding analytics were a vague dream beyond the horizon.
Some things have remained the same – 85% of companies are still spending the majority of their time organizing and collecting financial data and only 15% are using it to steer managerial and executive decisions. CFOs are still being fired for unsuccessful CPM/ERP implementations and financial forecasting consultants are still trying to consume budgets instead of providing value (which drives fear into the financial execs facing new implementations). There is still a dearth of quality software out there.
Here is what I’ve learned about the challenges of the attendees when dealing with their financial software solutions:
1. Most of the work week is spent on organizing data instead of analyzing it. This is the big one. As long as the flow of financial information is not smooth and integrated, the financial professionals will always be serving data, instead of having information serve them.
2. Turning data into information and gaining visibility into the drivers of business processes. Once your organization spends less time on collecting and organizing data, it hits the next wall of trying to convert it into information. This includes establishing key financial metrics for business units and drivers of business processes. The challenge here becomes less technical; instead, it appears that most organizations are hesitant to invest large amounts of business analysis time to properly create processes that extract information from the gathered data.
3. Getting buy-in from business owners. This challenge is more political, but the first two challenges may be caused by non-cooperative business owners who shirk their responsibilities to provide visibility into their business units.
4. Owning the financial systems by the finance department instead of relying on external (consultants) or internal (IT) owners. This includes training to be conducted by finance professionals. Finance departments that have previously been hit with scope creep and runaway budgets are now taking the radical step of directly assuming control over the FP&A systems instead of having them managed by the internal IT departments or having a support contract from a consulting company. This underscores the issue of inefficiency of implementations.
5. Fixed price, low cost, quick to implement. Financial software is complex and susceptible to all of the standard project management issues such as scope creep and cost and time overruns. Due to the level of complexity and the critical nature of the software, this can play a bigger role than in other internal IT solutions.
While these are ongoing challenges for finance professionals, there are also new opportunities opening up for vendors and modern solution creators in this area. One of the larger themes of the conference was the greater expectations finance professionals have of their vendors and their software solutions. I would also like to highlight new opportunities that are not yet matched by quality vendors:
1. It is still difficult to create a financial system that adds value instead of merely connecting a graphics package to a database. I think the greatest opportunity remains the lack of investment in business analysis in FP&A software, which leads to customers spending more time on manual labour. Someday a company will appear that invests the necessary time to create properly automated FP&A packages.
2. Dealing with different planning strategies. Most FP&A packages aren’t geared toward different strategy scenarios. What if I want to concentrate on increasing efficiency over the next 12 months instead of growing the top line? What if we’re interested in contracting by winding down operations instead of growing? How do I plan for that and what financial impact will those scenarios have? I would like that in 3 keystrokes or mouse clicks, please.
3. Dealing with macroeconomic uncertainty and market-level issues. Currently, forecasting large macroeconomic events is done mostly in spreadsheets. Automatically tracking the current state of the economy at large, and of a company’s specific market, relative to the financial situation of the company still remains a dream. At some point, FP&A systems have to evolve to start including macroeconomic events.
4. Forecasting based on P&L vs balance sheet management. Some businesses require strict covenant and cash planning, either due to the nature of a capital-intensive business or a restructuring plan. Other businesses are in full growth mode and are focusing primarily on their P&L. These two firms require very different methods of planning and similar to scenario planning, the FP&A software has to be able to account for that.
5. Forecasting for Lean methodology. For companies implementing Lean methodology, there is currently no convenient way to do value stream forecasting to monitor changes in the future value addition of different business processes. A quality product in this space would make it a lot easier to manage forecasting for a Lean organization.
6. Integration with product lifecycle management. Most products offered by customers follow the traditional model of product lifecycle management with growth, plateau and decay. Currently it is a struggle to get a financial forecasting package to recognize different stages of a product’s life to plan accordingly.
7. Financial tools for the financially illiterate. Last but not least is the powerful idea that some financial tools should be purposefully designed for the financially illiterate. A lot of business stakeholders do not fully understand basic accounting principles and will not engage with an FP&A effort simply because they are dealing with unfamiliar, complex concepts. There exists a great opportunity in creating simplified products that could be understood by someone without a background in finance.
I hope that these insights were helpful towards understanding the existing challenges and opportunities in the FP&A market. Please don’t forget to comment below if you would like to share your opinion on this blog post.