21 December 2012

Intel and the Big Search Box

I was recently researching new ultrabooks and visited the Intel site for some insight on current processors.  When I arrived, I was a bit shocked at what I saw…

Intel Home Page

Source: Intel.COM

Waiting for me was not the typical corporate web site with menu-based navigation (at least, none was immediately apparent).  Rather, Intel was presenting, dare I say, a very "Google"-like approach: enter a keyword-based query to find what you want.

I found the whole experience a bit off-putting.  I don't generally recommend our clients lead with a search-based content findability approach. In fact, I've argued against even making search a primary content findability technique.

If anyone has insight on this approach (e.g. real analytics supporting this experience strategy over more traditional IA), I'd love to hear about it.

18 December 2012

Interesting cause for exception: System.InvalidOperationException: $metadata Web Service method name is not valid.


While deploying a classic web service to a production server recently, we came across an issue when calling the service from a test web application.  We were able to add a web service reference to our test harness web application without any trouble, and the service also loaded without issue in a standard web browser by url:


When we called the web method, the response came back with no errors, yet the data was not being saved to the server as expected.  After checking the event viewer on the server, we found this error, which made no sense to us and offered little detail:

 System.InvalidOperationException: $metadata Web Service method name is not valid.

After searching the web a bit, we learned this was really a masking error that covered up the true issue.  After digging around for several days, I tested the service directly, in a debug state, in a browser on the server itself.

The service responded with:

Could not load file or assembly 'Telerik.OpenAccess, Version=2012.2.816.1, Culture=neutral, PublicKeyToken=7ce17eeaf1d59342' or one of its dependencies. The system cannot find the file specified.

We had somehow published a version of the Telerik.OpenAccess DLL that was different from the one the service was expecting.

After re-publishing the service with the correct DLL version, the issue was completely resolved and the web method started saving data to disk.

Lesson learned: the $metadata error that originally prompted our research had no relationship to the real exception.  If you come across this error in the future, debug the service directly in a browser on the problematic server to get the true exception.
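In our case the fix was simply to republish the correct DLL, but when the deployed version legitimately differs from the one the calling code was compiled against, a binding redirect in web.config is another option.  This is only a sketch: the oldVersion range is a placeholder, while the assembly name, version and public key token come from the exception message above.

```xml
<!-- Illustrative sketch only: redirect references to older Telerik.OpenAccess
     versions to the version actually deployed on the server. -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Telerik.OpenAccess"
                          publicKeyToken="7ce17eeaf1d59342"
                          culture="neutral" />
        <!-- oldVersion range is a placeholder; adjust to the version
             your code was actually built against -->
        <bindingRedirect oldVersion="0.0.0.0-2012.2.816.1"
                         newVersion="2012.2.816.1" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Note that a redirect only papers over the mismatch; keeping the published DLL in sync with the compiled reference, as we ultimately did, is the cleaner fix.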




10 December 2012

Interesting Code Exception–UnwillingToPerform

I have recently been busy putting the finishing touches on a new web application for a client in Indiana.  During the course of the project, we discovered that we needed to write a custom membership provider that would use secure LDAP to authenticate users (the ActiveDirectoryMembershipProvider, surprisingly, does not support the client's environment setup).

While developing the basic provider was relatively easy to complete (and used an Oracle LDAP provider we built previously), we ran into a series of challenges.  One challenge is now a support incident at Microsoft (more on that in a later blog).

However, during one of the debugging sessions, we noticed an "unspecified operation error occurred" exception being thrown.  Digging down, we discovered the reason code: "unwilling to perform."


I have never thought about putting that kind of reason in my code, but if Microsoft can do it, perhaps it's O.K.

03 December 2012

Write Better

When I started working as a consultant in the latter half of the '90s, our firm had writers who specialized in writing for the web.  These folks constantly had to help clients refine and organize traditional print copy for the "new" medium.

Recently, Jason Fried over at 37Signals published a short blog post that referenced a post by Maria Popova at BrainPickings.org highlighting writing advice from David Ogilvy, the original "Mad Man."  What I found fascinating is that the advice that Ogilvy delivered in 1982 is much the same advice our writers gave clients more than a decade later (in the context of writing for the web).

Clearly, good advice never goes out of style.

18 September 2012

Adoption: The forgotten key to Success

A solution is not just technology.  A solution is not just new features and functions.  In fact, a solution has many facets, including adoption.  While adoption is critical to the success of any solution, it is often ignored.

Organizations that are interested in successfully introducing a new technology or solution need to spend as much time considering and developing an adoption plan as they did building the solution.  Here are a few keys to adoption success:

  • Communicate early and often with the end user community 
    Start well in advance of any change, by letting folks know that a change is coming and why it’s important to them.  Create a clear connection between a challenge you know they have and how the new solution seeks to address it.  Once you reach specific milestones in the solution’s development, send out additional communications with more specific information and dates (like when folks should expect to see the solution). 
  • Introduce the new solution properly
    Spend time developing ways to acquaint your end user community with the new tool or function.  Simply sending an “informative” e-mail is insufficient; try something new.  For example, if you’re introducing a new Intranet, have a “scavenger hunt” to find specific information or content; the “winner” should get some prize for being the first to find the requested item (e.g. a gift card to a local restaurant or simple public recognition).  This helps introduce the new site, gets people to use the tool AND “tests” the new information architecture.  Feedback from the event can also help you avoid questions in the future.
  • Create triggers for using the solution
    Dr. BJ Fogg runs a persuasion lab at Stanford University.  Dr. Fogg’s Behavior Model suggests that while motivation and ease-of-use are important, people still need to be “triggered” to exhibit a specific behavior.  This means that no matter how easy the solution is to use, nor how motivated your end user community happens to be, you’ll still need to “remind” them to use the tool until a habit of use is formed.  Dr. Fogg’s research has examined how Facebook uses e-mail notifications to “trigger” people to return to the site.  Think of ways to trigger your end-user community to constantly use the tool (e.g. remind them of what problem the tool solves OR how they can save themselves time by using the tool).
  • Gather feedback liberally
    Whether you’re speaking personally to people or using a more generic survey, always gather feedback.  Your solution might be fantastic, but there will always be room for improvement.  Demonstrate you’re interested in your end user community by asking for feedback on what works, what doesn’t and how the solution can improve.  Clearly, you’ll get lots of opinions, but what’s important will be the trends you can discern; these trends will represent what’s important to the greatest number of users and where you should focus your attention.
  • Constantly evolve
    Once you’re done with the initial implementation… you’re not done.  Any solution will need to evolve to stay relevant to the end user community it serves.  Use the feedback you’ve gathered, combined with what’s happening in the broader organization and/or marketplace, to create a real “road map” for the solution’s improvement.  Once you have that road map in place, start communicating it to your end user community; this communication can itself act as a trigger, while also building support for the next release.

You will spend a great deal of time developing the right solution for your organization.  You need to spend just as much time making sure everyone uses it and that it adds value to your firm.

18 July 2012

Office and SharePoint 2013 Products Revealed

On 16 July 2012, Microsoft publicly announced the previews of the latest versions of Office and SharePoint 2013.  For many of our clients, the announcement will be significant, as these products may form the foundation of their information management and collaboration architectures for many years.
As a part of my continuing work with the Real Story Group, I just published a short blog post on our initial SharePoint 2013 advice for dealing with the avalanche of content that will be broadcast about the platform.

28 June 2012

SharePoint in Geographically Dispersed Organizations

As SharePoint 2010 continues to grow in popularity among larger organizations, the collaboration platform is likewise being used by an increasing number of geographically dispersed companies that see it as a tool to keep far-flung employees on the same page. Because of SharePoint’s architecture, however, such implementations can add new administrative burdens whether you’re trying to keep workers across North America or around the world connected and communicating with one another.

SearchContentManagement just published an article I wrote on how to manage SharePoint in these distributed organizations.

21 May 2012

Improve Code Reuse with Search

In the past, I have been critical of simply crawling content with a search engine and using the basic keyword query approach to find things.  However, there are some situations that do lend themselves to just this approach (though with a tad bit of “intelligence”).  One of these situations is code reuse. 

At Consejo, we use Subversion (coupled with VisualSVN Server) for source control.  Subversion (or SVN) is an open-source source control system. VisualSVN Server provides a terrific web interface to the repository.  Combined with the VisualSVN plugin for Visual Studio, we are able to very effectively manage our code production across clients with little or no cost.  Unfortunately, with our team dispersed, it’s sometimes difficult to make everyone aware of what’s already been built for one client or another.

SIDE NOTE: I have personally been critical of using open source solutions and even wrote an article arguing against the open source model.  However, it’s clear there are cases where open source makes sense (yup, I was wrong).  Further, we’ve been a financial supporter of SVN (through donations) since we started using the tool regularly.

Much of what consulting companies do is based on past work.  It’s critical to how we work that every consultant is kept abreast of what the firm is doing, across industries, disciplines and clients.  For example, we recently built two different applications, for two very different clients.  Both applications, however, used the same licensed interface controls.  In fact, many of the interactions each application’s user base had with their respective application were very similar.  As a result, techniques we used, problems we solved and utility code (code not specific to a client or application) we constructed could be leveraged across both projects with minor updates.  This saved both our consultants and our clients time and, more importantly, money.  Unfortunately, since both projects overlapped, there was no good way, besides knowing team members on both projects, to capture and surface code reuse opportunities in an automated way.  Each team basically had to know what the other team was doing and ask specific questions (or discover reuse opportunities through happenstance).  Neither a fabulous nor a scalable model. 

While we solved this problem the “old fashioned” way, it’s not a good long-term solution.  As projects get more complicated, team members more geographically or temporally dispersed, the “old fashioned” way becomes very burdensome and super inefficient.   So what’s the solution?

Historically, we’d been looking for ways to effectively expose our SVN repository to our consultants, outside of Visual Studio and a web browser.  The thought was that if developers were able to search for specific code constructs or even development pattern names, there was a reasonable chance of finding reusable code snippets or whole libraries (if they existed).   And, while there are a few tools that help in this regard like FishEye and SvnQuery, neither was a perfect fit for our needs.  Since VisualSVN Server presents the repository as a web site, why couldn’t we just use a standard search engine like MS Search Express, SharePoint or GSA (Google Search Appliance) to crawl the repository and allow consultants to query it like a web site? 

The answer, as Kenneth Scott points out in his blog post on crawling VisualSVN with a search engine, is “not exactly.”  The problem stems from the way VisualSVN renders the repository web interface (you should read his post for the details).  However, Kenneth solved this problem through the use of an HTTP handler and gave us exactly what we needed.  Using his utility, we can indeed use a standard search engine (take your pick) and then allow our developers to search for an example of an Excel-like editing experience using a Telerik GridControl, or discover whether we’ve built a custom authentication provider for SharePoint; we’ve done both, though I only knew about one of them until recently.

You may still be questioning why this is a good idea.  I’ve been critical of this kind of shotgun search approach in the past.  Why should it work now?  The reason is the very narrow information domain and the very specific terms developers use.  Both work together in a way that makes finding content, using straight keyword queries, more reasonable.  For example, if I want to find an example of our use of the Telerik GridControl, I can simply search for the RadGrid class name.  If I want to figure out whether we’d built a .NET membership provider, I can search for the inheritance statement.  In the first case, I may get an overwhelming number of results.  However, they’ll all be examples of use, since the only time that class name would appear is when it’s being used in code (in other words, relevant).  The second example would produce far fewer results and likely give me exactly what I need immediately (relevant and, probably, highly precise).

In the end, I’m only really disappointed that I didn’t discover Kenneth’s blog post sooner, but my past search queries were about SVN or Subversion, not VisualSVN (poor search strategy on my part).  However, as we develop this code search feature inside of our intranet, I’m excited by the prospect of finding internal examples of code we can reuse.

If you have a similar feature inside your firm, I’d love to hear how it works and whether it’s yielded higher code reuse.

04 April 2012

The Myth about SharePoint Browser Support

Microsoft posted a blog entry today that pointed readers to SharePoint’s browser support page on TechNet.  In this post, they detail what browsers SharePoint supports and any specific support limitations.  However, I want to raise an important point that seems to be missing from the conversation: browser support isn’t entirely about SharePoint.

Unfortunately, what most everyone fails to mention is that browser support is actually a combination of what Microsoft supplies and the solution that you’ve built.  In other words, Microsoft’s support, or lack thereof, for specific browsers is limited to Microsoft-supplied interfaces.  Depending on the type of solution you’ve developed, much more of your solution’s browser capability could be dependent on your development than on Microsoft’s.

Here’s a quote from TechNet:

“For publishing sites, the Web Content Management features built into SharePoint Server 2010 provide a deep level of control over the markup and styling of the reader experience. Page designers can use these features to help ensure that the pages they design are compatible with additional browsers, including Internet Explorer 6, for viewing content. However, page designers are responsible for creating pages that are compatible with the browsers that they want to support.”

Obviously, this quote relates specifically to publishing sites: chiefly internet-facing sites that primarily serve content, as opposed to more collaborative intranet/extranet sites.  However, even in the case of sites built with other site definitions (like Team Sites), browser support can and will be affected by new master pages, custom web parts or other components supplied by you or 3rd parties.

The myth here is that SharePoint’s support for specific browsers is somehow exclusively Microsoft’s domain.  In fact, true browser compatibility is a combination of Microsoft supplied interfaces (that are used in your solution) and those solution-specific interfaces that you or a vendor create.

06 March 2012

[Good] Communication is Key

Every now and again, a really fantastic opportunity to illustrate a best practice just falls in your lap.  The opportunity, just as occasionally, presents itself through inspiration found in the most surprising places.  In my case, I found this communication example in the men’s room at a client: 

[photo of the sign]

This sign provides an excellent template to use when communicating with your audience: 

  • It starts by presenting the message at the time when the recipient is engaging in a related behavior
  • The messaging points out a very specific feature of the tool being used
  • A statement of community support for that feature is provided
  • An aspirational goal is included to (through implication) encourage a specific, future behavior
  • All of this is followed by a polite thank you!

Ignoring the specific subject matter involved in this message, this could easily serve as a model for feature-obsessed technologists and enthusiastic Intranet managers on how to encourage intranet or technology adoption.

What do you think?

[EDITED TO CORRECT SPELLING AND WORD CHOICE]

31 January 2012

Why “Search” instead of “Find”

In July 2010, I posted “Can search really solve information ‘findability’?”  Since that post, I’ve run into a number of clients who continue to insist that they need search (even if “search stinks” in their organization).  Yet, almost universally, they report that the search experience falls short of their expectations.  This should be no surprise, as lots of firms struggle with this very problem.  However, I’d like to suggest an alternate hypothesis: they actually want “find,” not “search.”

For better or worse (mostly worse), loads of folks closely associate the act of search with the expectation of locating desired content.  Typically, at least outside of an organization, this means assuming they should navigate to Google or Bing (mostly Google).  When they arrive, they enter a few keywords and press “GO.”  At this point, they’re presented with a set of results from which they choose one that looks promising.  Any perceived success is, in part, just that: a perception; they don’t necessarily expect to find what they want.  If, however, they happen to find the object of their desire, they are pleased. 

While both “search” and “find” are verbs, search does not imply a goal, simply an action.  Search describes everything in the scenario I just relayed except the part where you’ve found the appropriate destination.  Find, by contrast, is explicitly the goal and is characterized by viewing your content.  If you need a very concrete example: if a child goes missing, the goal is not to search for the child.  The goal is to find the child.

Therefore, with regard to content, consider changing the conversation.  Use the word FIND instead of SEARCH.  In doing so, you begin to think of the goal and not of one particular approach.  Further, if we focus on finding content, we can also measure a success rate.

This orientation change opens up a whole world of opportunities.  For example, start with the simplest model: place relevant content on the first page users see on an intranet.  While this may seem wholly impractical (too much content, no context to judge relevance), this kind of solution is possible.  Use what you know about your employees/users.  If you know in which department they work, you can begin surfacing content from that department.  Not specific enough?  What about adding in the role they serve and surfacing content targeted to that role (NOTE: a good driver for metadata)?

Beyond actually locating the content in plain sight, try surfacing tasks associated with the desired content.  For example, display tasks like “Submit an Expense Report,” instead of requiring users to search for the expense report form, or “Fill out a Timesheet,” which links to the time reporting system (or simply an interface to immediately report time).  In this way, we’re presenting navigable elements that are easily understood and focused on actual tasks employees/users need to accomplish, without the need to search.

However, if you really must provide search, give your users help.  Provide them with a targeted search facility that enables them to narrow the scope of the search (i.e. don’t return results from the entire enterprise if they’re looking for a project-related document).  Give them metadata to enable precise queries like “Author = Jane Smith.”  Finally, give them “canned” or pre-developed queries created by “experts” who can construct search queries that ensure result precision.  In this way, we’ve moved away from the simple keyword-driven approach to a more intelligent model that reduces “noise” and improves precision through careful use of the technology.  As an interesting alternative, execute these queries in the background and simply display the results like other navigation on the page; this way, users don’t actually have to execute the query and you’ve just saved them a step in finding content.
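To make the contrast concrete, here is a hedged sketch of what these three levels of query might look like (the property-restriction syntax below is SharePoint-2010-style; exact operators vary by search engine, and the author and scope names are invented for illustration):

```
expense report                        -- raw keyword query against the whole enterprise
author:"Jane Smith" filetype:docx     -- metadata-restricted query a user might build
Scope:"Project Documents" "status"    -- pre-developed ("canned") query built by an expert
```

Each step down the list trades breadth for precision, which is exactly the shift from "search" toward "find."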

In short, search (as a tool) must necessarily become one of an array of techniques used to find content.  However, it should absolutely not be the first or only approach.  We must think in terms of FIND, not SEARCH.  Find is the concrete goal and has a measurable success rate; search is simply an action that, while measurable, does not necessarily lead to your user’s goal.

24 January 2012

Multilingual SharePoint Sites–Part 2

Some months back, I wrote a post about the basics of multilingual sites in SharePoint.  The post was a good primer for anyone that needs to understand SharePoint-centric concepts regarding multilingual web sites.  Unfortunately, the post didn’t really describe important details outside of the SharePoint sphere.  In particular, the post excluded all the ASP.NET-centric details.  In this post, I want to share at least some of those additional details.

Globalization

One of the early design goals that Microsoft had for SharePoint was that ASP.NET developers would be comfortable creating SharePoint-based solutions.  The theory is that if you’re a competent ASP.NET developer, you can simply pick up the additional SharePoint API universe; SharePoint is a good .NET citizen, so this idea shouldn’t be a stretch. 

Whether or not you believe a good ASP.NET developer could easily pick up SharePoint, SharePoint does borrow very heavily from many .NET facilities.  With regard to multilingual sites, this includes the Globalization namespace.

The Globalization namespace is a group of classes responsible for allowing ASP.NET applications to understand the numerous languages and cultures that applications can target.  It includes everything from calendar differences and languages to date/time formats and string comparisons (and a whole lot more).  It also, importantly, provides a facility that allows developers to create a resource pool of commonly referenced assets (e.g. element labels, images).  These assets are all referenced using standard labels, creating an index of asset variants for each culture.  At runtime, based on the culture of the current user (usually indicated by a browser setting), the .NET framework will dynamically select the appropriate asset/resource based on a generic label describing that asset or resource. 

Resources and Resource Files (RESX)

Resources, or multilingual assets, are defined in a resource file (RESX).  There’s a resource file for each culture represented in the application.  All resource files use the same labels to describe the assets, but with culture-specific values.  For example, the text shown next to the text box where a user enters their user ID to authenticate with the application would be a referenced resource. 


Figure 1 – Resource example for Login Page

In the RESX file, which is just XML, you’ll find the following entry:


Figure 2 – Login_UserID label in the EN-US resource file
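For reference, an entry of this kind in the RESX XML looks roughly like the following sketch (the Login_UserID label name follows the figure caption; the value text is an assumption):

```xml
<!-- Sketch of a RESX entry; the value text shown is assumed -->
<data name="Login_UserID" xml:space="preserve">
  <value>User ID:</value>
</data>
```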

To add, edit or delete values, Visual Studio provides a “designer” view of the file.  In the designer, you have the ability to quickly and easily define the various labels and the corresponding values.  Figure 3 shows the Visual Studio interface for editing a RESX file.


Figure 3 – Visual Studio designer interface for RESX file

For every culture your application needs to support, you create a specific RESX file.  Each file is named for the culture it supports, and every file contains the same labels, with values corresponding to the specific culture.  In the example presented here, the RESX file is for the EN-US (US English) culture.  For more information on resource files, naming conventions and details on creating the files, take a look at this MSDN article on resource files.

In SharePoint terms, the RESX would correspond to a specific variation of the same culture.  In effect, you will have at least one RESX file per variation.  These files define elements of the user interface that end users do not supply.  Whereas content on any given page is created and managed by content contributors, there are also elements, like the label for the User ID field on the login page, that are “baked” into the code of the application.  RESX files provide a mechanism to define what that label will say in the context of a specific variation or culture selection (usually set by the browser displaying the page).
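As a sketch of the naming convention (the base file name here is an assumption, not from a real project), a set of culture-specific resource files might look like:

```
LoginPage.resx          -- fallback (invariant culture)
LoginPage.en-US.resx    -- English (United States)
LoginPage.fr-FR.resx    -- French (France)
```

At runtime, the .NET resource manager falls back from the most specific culture (fr-FR) to the neutral culture (fr) to the invariant file when a label is missing, which is why keeping labels consistent across files matters.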

Information Architecture and Visual Design

Beyond the somewhat mechanical processes for inserting culture specific content into a page, a more critical aspect of multilingual sites is Information Architecture (IA).  The IA defines the navigation paths (global navigation and its relationship to other sections and pages within the site) and the overall interface layout.  This means the decisions about where various interface elements are placed, what nomenclature is used and what sort of content is shown are all made through and by the IA (both the person – Architect – and their output – Architecture).

When developing a multilingual web site, consider that the interface will be at least slightly, and potentially radically, different based on the language being displayed.  The simplest example is word length.  If we compare an interface in German and one in English, it’s very likely that there will be different space needs for labels in the navigation, as well as content.  As a result, the IA must anticipate interface movement and allow for enough white space to accommodate an interface that will grow and shrink based on the language’s needs.  This is also a challenge for constructing HTML and JavaScript, since both components of the web page may need to “react” to language differences.  However, beyond this relatively easy challenge, presenting content is a matter of having the appropriate language-specific content.

A more complicated scenario is one involving differences in how a language is read.  Hebrew and Arabic (as two examples) are read right to left.  As a result, the whole orientation of the interface needs to shift: the main navigation will need to start from the right, global navigation elements will be positioned in the upper left and text will flow from right to left within the content sections.  As such, you may require a unique master page and page layouts for these languages to sufficiently accommodate the display differences.  The same is true for languages that are read vertically instead of horizontally, such as Manchu.
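In implementation terms, much of this shift can be signaled through markup and CSS.  A minimal sketch, assuming the master page emits a dir attribute on the html element for right-to-left variations:

```css
/* Minimal sketch: flip basic layout direction for RTL cultures (e.g. he, ar).
   Assumes the page is rendered with <html dir="rtl" ...> for those variations. */
html[dir="rtl"] body {
  direction: rtl;      /* inline text flow runs right to left */
  text-align: right;   /* content blocks align to the right edge */
}
```

Positioning of navigation and other structural elements will still need dedicated, per-direction styles; the direction property alone only handles text flow.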

Continuing with the above example, you also have the challenge of fonts.  Languages that utilize radically different character sets will require the IA and the designer to consider both font face and size choices.  For example, for any font size chosen in the cascading style sheet, will the text be readable across all languages represented by the site?  Most European languages have characters with relatively little detail compared with Asian languages.  Font sizes that are too small, or font faces that carry too much embellishment, may make detailed characters muddled or simply unreadable.  As such, these choices represent both a visual design and an information architecture challenge, since font size differences will also present spacing issues to resolve.

Bringing it all together

With all of the details provided in the two posts of this series, here’s a quick review of the important parts:

  • When developing your Information Architecture for a multilingual site, include the various cultures to be supported.  Each culture, in SharePoint terms, will be a “variation.”  A culture, remember, is a combination of a language and a country, represented like EN-GB (English – United Kingdom) or PT-PT (Portuguese – Portugal).  This culture approach makes it easy to distinguish between two countries that share a broad language (e.g. Spanish), but differ in usage (e.g. Spain vs. Mexico).
  • Developing an IA for a multilingual site involves many more decisions and test cases to resolve than a single-language site.  It’s important to explore the implications for your specific IA based on the languages that need to be supported and, when the visual design is complete, any challenges a specific design might pose based on the supported languages.
  • Within SharePoint, decide which variation will act as the “primary” or source variation.  This is the variation that will syndicate content to all other variations.  For example, if the source variation is German (from Germany), your content will start in German; once a page is approved, it will be copied to the other language variations in your site collection (e.g. EN-GB, PT-PT).  From there, each non-source variation will be responsible for translating, approving and publishing a language-specific version of the German content.
  • You will have at least one RESX file per variation.  If you have lots of different cultures, you will have as many RESX files, and they must all contain the same labels.  Because labels are not evaluated at compile time in Visual Studio, you’ll only discover missing (or conflicting) labels in a resource file at run time.  This is not, obviously, a good user experience.  As a result, you should thoroughly test and control the modification of RESX files.  Take a look at this blog series from Carel Lotz regarding one approach to effective RESX management: http://fromthedevtrenches.blogspot.com/2011/04/managing-net-resx-duplication-part-1.html
  • Think carefully about the taxonomy (aka organization) of variations and labels.  As much as the IA process should define navigation, the overall taxonomy will drive label names and how labels are used in the application.  For the project example in this post, we used labels tied to interfaces (interface name prepended to the label name).  This is one approach.  However, it neglects opportunities to leverage labels across interfaces.  Conversely, reusing labels across interfaces can make maintenance more challenging, as label changes will necessarily have different impacts across the application.  Here, experimentation and testing are key.
  • A SharePoint multilingual site is really a combination of SharePoint variations and .NET globalization.  You must necessarily implement both; end users will leverage the variations component and your developers will have to provide matching RESX files for application-specific labels and static text.

As you may have surmised, there are a lot of details to consider when developing a multilingual web application.  SharePoint does provide decent facilities to enable basic multilingual sites and the .NET framework provides loads of flexibility in implementation.   Just be sure to consider the whole picture – it’s a combination of SharePoint centric constructs (aka variations), good information architecture/design and the technical “infrastructure” to make the whole solution work for end users.

16 January 2012

The [Tools are] too much with Us

As 2012 starts in earnest, I am reminded of the poem from which the title of this post has been taken, “The World is too much with Us” by William Wordsworth.  In this poem, Wordsworth laments how out of tune with nature people had become during the first industrial revolution.  In much the same way, I see too much focus being placed on tools in the era of SharePoint.  Business users and Information Technology folks seem to be so enamored of the tools and technology, they forget that the focus should be on needs and solutions.  This is especially true when discussing SharePoint and, as I have said many times, SharePoint is not the answer.

Instead, SharePoint, like any technology, needs only to be included insofar as it provides the basis for creating a solution to a specific problem (or problems).  For example, if you needed to manage documents, SharePoint could provide you with a Document Library for storing the files.  Further, you could leverage Content Types and Information Management Policies to enable more precise management of a document’s lifecycle (if that were a need).  However, your specific use of these features should and must be governed by the solution – the overall set of features, functions and the specific solution implementation  in the context of your needs and goals.

When considering how to proceed with your SharePoint project, consider this one piece of advice: start with the problem or challenge first.  Ignore SharePoint and don’t speak of it again, unless you’re discussing how some feature in SharePoint can support a solution.  Even then, try focusing on the solution (give it a name if you have to) and not the tools or features involved.