
Building Components of the Right Scale

So in the last post we discussed using RTC for C/C++ development.  With such a broad topic it’s not surprising that some details were lacking, and we will attempt to hit those topics going forward; not always with explicit answers, but in many cases with reasonable advice based on practical application.  I’m all for discussion, so if there are dissenting points of view or items of curiosity, just ping me directly or add a comment.

What I find most compelling to discuss next is something that appears obvious on the surface but in reality can become quite complex.  Quite simply the question is… when do I make an RTC Component, and what does this component represent?  It seems obvious, if not trivial, but let’s take a moment on why I think this is both a critical and non-trivial exploration for any organization adopting RTC for development.

So first, intuitively, what do we think of as a component?  Working with many organizations, you quickly realize the term has domain-specific connotations.  If I’m talking with an IT delivery organization, components have scale, a known value, and when reused they are reused as a whole.  When I talk to embedded manufacturers, the typical answer is along the dimensions of a known value, reuse as a whole, and capturing variations (often hardware dependencies).  So what is common?  A component has a known value, is a potentially reusable entity, and (implied in both) has known dimensions (size, interface, etc.).  Seems pretty fair and not overly different, so what is the problem?  On the whole the major difference appears in scale.  While there are tremendous similarities, the expected scale of a component in a typical embedded system is actually quite small, and on a product basis the number of components in these systems is often significantly larger than in their IT and consumer electronics counterparts.  The result is that their products (for example, an automobile) have enormous numbers of components (on the order of thousands!).  And so while at first glance one would typically relate an RTC component to the domain definition of a component, I greatly recommend pausing for a moment and considering some other perspectives before you make that immediate leap.

The quickest way to get there is to shift gears a bit and discuss the concept of an RTC Component.  Before you get the wrong idea, I believe the RTC Component definition is great; it just might not be a direct match for the component definition your domain typically uses.  The simplest definition that I have been able to develop (through usage, rather than formalism) is that an RTC Component captures a collection of file objects that must be managed as a set and that represent a known value.   Doesn’t clear a whole bunch up, does it?  Well, consider this: an RTC Component is the finest file-set granularity that you can recreate.  That is a bit closer to a valuable definition.  In practice, RTC says that the component is the only level at which I can formally label an instance (identify specific files and their specific versions as a retrievable set), and suddenly this has a slightly different flavor.  The component is a deliverable entity from the development organization at a specific moment in time rather than a purely technical artifact.  And in this regard it makes great sense.  Software organizations have deliverables, known value points (content and timing), which are delivered as sets of artifacts from teams with known attributes (quality, interface consistency, …).  This is what I believe, and have experienced, RTC Components being good at conveying.

Okay, so after that tangent, how does this relate to the domain definition of component?  And better yet, to your RTC usage in (C/C++, System, Java, …) development?  Actually we can bring this together quite cleanly and outline some simple, logical rules on how this maps onto an installation.  First, consider RTC Components as the macro elements.  These define team deliverable sets and the opportunity to label instances of those deliverables.  Second, we have the technical dimensions of software development that embody themselves as libraries, executables, or other domain-appropriate definitions.  Fortunately for many domains these organizational structures and technical structures are exactly the same, and in that case it is quite simply a one-to-one relationship.  For the rest of us, however, consider the following…

At the abstract level we can consider the team deliverable and technical assets as a simple containment.  In a one-to-many form, an organizational component owns a set of domain component artifacts: for every deliverable of a team there is a set of artifacts that the team can enumerate out of those coherent sets.  The simplest example would be a set of interfaces and a specific set of variant components that implement those interfaces.  While as technical artifacts they are clearly identifiable on their own, the team will typically only release and validate sets in order to manage the complexity of the compositions.

So now we have the simple relationship that an RTC Component contains a coherent set of technical artifacts delivered from a team.  Naturally there is an argument that these should be versioned separately (and I think I’ve personally made it before 😉 ), but that is not the main point.  The main point is that it is impractical to do so (simply go to the Eclipse IDE and open a stream with 250 components and you will quickly see my point).  The practical alternative is to use decomposition within an RTC component to achieve the same purpose.  As an example, if within an RTC Component you fracture each of these technical components into separate Eclipse projects, you increase the granularity without the clutter in the SCM system.  You also empower developers to selectively load from components, so that’s a bonus as well.  And the best part in my mind: it is simple!  Teams can get off and running quickly with little overhead.
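As a rough sketch of this decomposition (all of the names here are invented for illustration), a single RTC component might contain several Eclipse projects, one per technical artifact:

```
MotorControl/                 <- RTC Component: the team deliverable
  motor-interfaces/           <- Eclipse project: shared public headers
  motor-impl-brushless/       <- Eclipse project: one variant implementation
  motor-impl-stepper/         <- Eclipse project: another variant
  motor-tests/                <- Eclipse project: unit tests for the set
```

Developers can then load only the projects they need from the component, while baselines still capture the whole coherent set.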

So what is the bad part?  The major item is the loss of granularity in the baselining.  Your RTC components will appear large in the sets that you version and capture.  However, I find this becomes an argument of principle rather than practice.  While you can never regain the granularity, you can always perform component baseline compares and see it.  Also, in practice it is not always best to have all possible combinations but rather all likely combinations, which is usually a significantly smaller subset.  The other practical point is that it has been significantly easier to add RTC Components to a project than to merge them.   From a deployment standpoint, start with a few and grow rather than create all possibilities and later trim down.

And so why did we need to talk about this?  Well, in software we are all about “building” (compiling), and when talking about development it is really difficult to talk about what I’m building until I can describe and organize the pieces and parts.  I always want to be able to recreate my builds, so getting to the details of how I version sets is required.  And quite simply, I want to keep good teams from stepping in bad potholes with new tools.  It often feels comfortable to build everything you might need at the beginning, but often the best answer is to keep it simple while knowing how you could scale it.  So now we understand the role of RTC Components as the team deliverable mechanism and the usage of Eclipse projects to provide technical granularity within the team deliverable.  Such a simple solution!  It’s the standard pattern of keeping it simple and knowing how to scale!


A Quick Lesson in Using RTC for C/C++ Development

After I presented at IBM Innovate last year I was pinged several times on the usage of RTC for C/C++ development and the question, “does it really work?” I must admit I had a very similar reaction the first time I looked at RTC three years ago. What a great idea! A tool that would help me do what I wanted (visibility and configurable workflow) rather than trying to fit me into someone else’s vision of workflow. Not being a Java developer meant some work ahead, but I knew Eclipse and was confident that it was possible.

So can RTC be used for C/C++ development? The answer is a definitive yes. While using RTC with C is not a shrink-wrapped solution out of the box, it is not difficult to implement. Following are some simple steps and basic constraints to enable C/C++ development in an RTC installation.

Let’s first start off with why RTC is suitable and valuable to C/C++ developers. The primary reason I like the RTC environment is that it makes good software engineering practices easier to execute. Practices such as modularity, composability, measurable quality, continuous builds, and agile planning are equally important no matter what implementation technology you are using. Those simple concepts are all independent of language, but the mechanisms and transports can differ from language to language. It is these mechanism variances that must be addressed to be successful with C/C++.

The most apparent issue with using RTC for C/C++ development is the shipped IDE. It is quite noticeable that the default client for RTC targets Java. However, after a tour of RTC you quickly realize that it is not bound to a particular language environment, but rather to the facilities in Eclipse. This is the opportunity to leverage the vibrant Eclipse community for development language plug-ins. The good news for C/C++ developers is that the Eclipse CDT has matured over the last 7 years into a very competent development environment. If you are using gcc, or a gcc derivative (MinGW or Cygwin if you are on Windows), you will be successful. Just hit the CDT update site from your RTC client and you will have the facilities you expect for C/C++ development.

If you are not using gcc, all is not lost. The prevalence of Eclipse as the foundation of many custom IDEs means that your primary IDE may already be available as an Eclipse plug-in. Contact your toolchain provider to determine their compliance. If they provide an Eclipse update site, simply point your RTC client at that update site and you are off just like your CDT counterparts. If they don’t have an update site but do have an Eclipse IDE, then you still have the option of installing RTC into it. While in previous releases you were encouraged to use the IBM Update Manager, you also have the option of the old “dropins” approach (this has worked since the 2.0 releases by simply “redeploying” the links, jazz, emf, and gef directories from an RTC client installation into your C IDE and doing a forced restart), or, more recently, a p2 repository for the RTC plug-ins is available. My preferred path is the p2 repository, but your results may vary. Every toolchain integration is slightly different, so be patient and use the techniques and forums available in the Eclipse community. With this collection of approaches I’ve always been able to get to a successful integration.

If none of these options describe your situation (and you are not using Visual Studio, which has its own plug-in) you are in a challenging spot. While you can use the base CDT to do your editing and receive some value, there are two things that you are going to miss. First is the error parser for identifying build errors when you perform a make build. This is a console parser that grabs the output of a build and marks files and the problem list with the errors and warnings. The second element is the debugger integration. By default the CDT provides this functionality for the GNU tools (gcc, gdb, …). If your tools are derivatives you might get some support, but you and your tool vendors are likely going to have to invest to make your developers happy. These are must-have capabilities for deploying RTC and performing C/C++ development. The trend is toward Eclipse support for nearly all tools; if you do not have this support, communicate its importance to your vendor.

Once you have an IDE that supports your development language, compiler, and debugger you are most of the way there. The other aspect of critical concern is the modularity and composability of your source. This contributes to the collaboration patterns (language independent) and the build procedures (language and tool dependent). For C/C++ developers the most successful pattern is to align the natural modularity of the code base (static library, dynamic library, executable) to RTC components. The collaboration patterns for this source are most likely tied to your organizational team structure, so this can be mapped to an RTC stream. By simply combining these components into a single stream (multiple streams are very valuable, but they are a complexity that can be deferred until the project’s complexity grows) you have your first RTC project for developing systems in C/C++. This is by far the simplest solution; drivers such as reuse, product variance, and large organizations may push you toward different patterns, so there is no standard answer, but RTC is flexible enough to meet your needs.
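To make that natural modularity concrete, here is a minimal sketch of a makefile (module and file names are hypothetical) where each build product, a static library and an executable that links against it, corresponds to one unit you could map to a component:

```make
CC     = gcc
CFLAGS = -Wall -O2

# One rule set per natural module; each module could live in its
# own RTC component or Eclipse project.
libcore.a: core.o
	ar rcs $@ $^

# The application links against the library module.
app: main.o libcore.a
	$(CC) $(CFLAGS) -o $@ main.o -L. -lcore

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
```

The point is the alignment, not the makefile syntax: the boundaries in the build mirror the boundaries you version.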

Once you have your source organized, the final step is to build your binaries in a structured, repeatable, and automated fashion. The objective is for developer-initiated local builds and server-side build engines to be able to perform the same function. This is critical for developers to have confidence in the tools and results, both locally and within the RTC reports. For a long time the C/C++ domain has been dominated by GNU makefiles as the specification language for building software, and at this point there is little reason to change. This becomes especially true as you leverage the capabilities in the Eclipse client. The Eclipse CDT includes automatic builds and customizable targets to initiate builds. These capabilities simply reinforce the behavior that when I use the CDT, I should use make. At first glance there is a conflict with RTC because the Build Engines appear dedicated to ANT.
In reality there is no real conflict, and make and ANT can be used to complement each other. First of all, you are not forced to use ANT. You can implement build engines very quickly via the “command line” engine and simply spawn a “make” event in a command-line shell. The process is trivial if all you are attempting is a “smoke test” of your build, where only the success or failure status matters. You can even use such mechanisms to support tools beyond make, such as static analysis, metrics generation, unit test execution, or document generation. While this is a good first step, we all step beyond it quickly, since only pass and fail can be determined and little reporting can be performed beyond basic logging.
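A minimal sketch of what such a “command line” engine script might look like (the function and log names are invented; the essential contract is just the exit status and a line of output):

```shell
# run_build: invoke a build command, capture its output, and translate
# its exit status into the simple pass/fail contract a "command line"
# build engine needs; the engine records the exit status as the result.
run_build() {
    "$@" > build.log 2>&1
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "BUILD SUCCEEDED"
    else
        echo "BUILD FAILED - see build.log"
    fi
    return "$status"
}

# A real engine definition would invoke something like: run_build make all
```

The same wrapper works unchanged for any tool that reports failure through its exit code, which is what makes it usable for the static analysis and unit test cases mentioned above.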

This is the motivation to take the next step to ANT. By taking this step you can wrap the make process with pre and post elements to enhance your build process with setup, reporting, and publishing. Consider now that specific tasks and contexts can be instituted prior to the build, a make build initiated, unit tests evaluated, documentation created, files packaged and posted, and download links created in your build report. While make is still the primary tool for the source build, ANT can be used to manage the entirety of the process and provide communication to the build event, where it excels. This is the power of bringing these two capabilities together.
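As a sketch of that wrapping (the target names, directories, and zip packaging are illustrative choices, not an RTC requirement), an ANT build.xml can drive make and add pre and post steps around it:

```xml
<!-- Hypothetical build.xml: ANT wraps the existing make build
     with setup and publishing steps. -->
<project name="wrapped-make-build" default="build">

  <target name="setup">
    <!-- Pre step: clean out old artifacts and prepare the output area. -->
    <delete dir="out" quiet="true"/>
    <mkdir dir="out"/>
  </target>

  <target name="compile" depends="setup">
    <!-- make remains the source-build tool; ANT just drives it. -->
    <exec executable="make" failonerror="true">
      <arg value="all"/>
    </exec>
  </target>

  <target name="build" depends="compile">
    <!-- Post step: package results for posting to the build report. -->
    <zip destfile="out/app.zip" basedir="bin"/>
  </target>

</project>
```

Make still owns compilation; ANT owns the surrounding process, which keeps each tool doing what it is best at.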
Once you’ve accomplished these steps you have a powerful development environment that works for your C/C++ developers and empowers the organization to deliver successful projects using the visibility and planning tools in RTC. While it does take a little bit of focused effort to augment RTC to work for C/C++ development teams, the value is available for those willing to invest. If your organization is in need of visibility and agility while developing C/C++ you should consider the capability that RTC presents.

Welcoming in a fine new year…

Finally settled in to Western New York after two weeks of packing and then unpacking.  Never realized how dependent you get on high-speed internet, but after three weeks of being disconnected we finally have the FIOS going.  I like my iPhone, but it makes a horrible day-to-day computer, and the daily trek to Starbucks with my laptop to get an internet fix just wasn’t cutting it anymore!

Ended the year joining a great team at SODIUS.  Great move and very exciting!  I’ve been working systems and software jobs for product companies over the last 15 years and relying on these guys for their technology.  Now I’m able to bring their technologies to others to help them amplify their product development teams like I was able to do in the past.

While most of this blog will be directed to Software & Systems Engineering, there will be the occasional tangent.  I’ve had the pleasure of learning from others’ blog posts and observations in the past, so I’m looking forward to contributing back to the community.  So look here for a collection of observations, ideas, and the occasional “how-to”.  Stay tuned, and here’s to a great 2011!