    The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

    Visual Studio 2010 [formerly known as Rosario] Videos

    There are a lot of videos on Channel 9 about VS2010, and a nice thing… they use the newest bits. So watching them gives a nice insight into where Rosario is going.

    Architecture Day (Tuesday, September 30th):
    - Cameron Skinner: Visual Studio Team System 2010 – Architecture
    - "Top-down" design with Visual Studio Team System 2010
    [screenshot: vs arch]

    This one is really nice; Mark guides the viewer through the creation of the demo project ‘DinnerNow’ with the use of the UML diagrams. As you can see there is a new Model Explorer [upper left corner], and what you don’t see is that a model project type has also been added to Visual Studio… and a lot more new things.

    [screenshot: vs arch1]

    Drag and drop from the Model Explorer…! and a lot more connections between the diagrams.

    Two months ago I made a video demo of how we are using Team Architect UML for a real-life project [for internal use]. I just uploaded it to YouTube, so you can see the difference between a demo project [DinnerNow] and a real-life project… not that much difference :-)



    - "Bottom-up" Design with Visual Studio Team System 2010 Architect

    [screenshot: vs arch2]

    A nice new feature in the sequence diagram…

    [screenshot: vs arch3]

    and an example of the layered diagram

    [screenshot: vs arch4]

    with validation enabled in the build process…

    The last one about Team Architect: ARCast.TV - Peter Provost on what’s coming for Architects in Visual Studio Team System


    The other videos about VSTS2010 are below [haven’t watched everything yet, still busy with my vacation photos]

    [photo: Gros Morne mountain, 2008-09-25, IMG_5211]
    Business Alignment (Wednesday, October 1st):
    - Achieving Business Alignment with Visual Studio Team System 2010
    - Agile Planning Templates in Visual Studio Team System 2010
    - Enterprise Project Management with Visual Studio Team System 2010
    - Requirements Management and Traceability with Visual Studio Team System 2010

    Software Quality (Thursday, October 2nd):
    - Better Software Quality with Visual Studio Team System 2010
    - Manual Testing with Visual Studio Team System 2010
    - Historical Debugger and Test Impact Analysis in Visual Studio Team System 2010

    Team Foundation Server (Friday, October 3rd):
    - Brian Harry: Team Foundation Server 2010
    - Branching and Merging Visualization with Team Foundation Server 2010
    - Enterprise Team Foundation Server Management with Mario Rodriguez
    - Team Foundation Server 2010 Setup and Administration
    - An early look at Team Foundation Build 2010 with Jim Lamb
    - A first look at Visual Studio Team System Web Access 2010
    - Update on Team Foundation Server Migration and Synchronization

    Posted: Oct 07 2008, 03:17 by Reijnen | Comments (2) RSS comment feed |
    Filed under:

    How to create a Visio Importer for Team Architect Rosario

    First a small video to show how it works, then some explanation of how to do it…


    [recorded with camstudio and my birthday present ;-) ]

    You can download the video from my SkyDrive

    How to make one yourself…
    I don’t think Visio is the only format you’ll want to import, so here is a brief explanation of how to make your own importer.

    You have to figure out what the format is; with Visio that was pretty easy. As you have probably seen in the video, I saved the diagrams as VDX [“Visio Drawing XML”]. So the format I need to read is XML, and MSDN made it even easier for me to understand the structure; see this page on MSDN: Microsoft Office Visio 2007 XML Schema Reference.


    Next, create a Visual Studio add-in [see this post: “Rosario – Create Custom Team Architect UML Diagram-MenuItems”]. I do think that making use of the Backplane [Designer Bus] would be a better way to build this kind of functionality [for sure, we wouldn’t get asked to reload the diagram after importing anymore], but the activity diagram isn’t connected to it yet; maybe in a different release. Until then, an add-in is the easiest way.

    Create an object model which gets the information from the Visio diagram. The VisioReader method reads the diagram and loads the model with collections of shapes and connectors [this piece of code needs some refactoring]. As you can see, I only grabbed some basic values; the connector should have guards, for example.
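    The reading side can be sketched outside Visual Studio as well. Here is a minimal Python sketch of the idea, assuming a simplified VDX document; the element names follow the Visio XML schema, but the `Shape`/`Connector` classes are my own stand-ins for the add-in’s object model, not the real code:

```python
import xml.etree.ElementTree as ET

# Default namespace used by Visio 2003/2007 VDX files
NS = "{http://schemas.microsoft.com/visio/2003/core}"

class Shape:
    def __init__(self, id, name, text):
        self.id, self.name, self.text = id, name, text

class Connector:
    def __init__(self, from_shape, to_shape):
        self.from_shape, self.to_shape = from_shape, to_shape

def read_vdx(xml_text):
    """Load shapes and connector endpoints from a VDX document string."""
    root = ET.fromstring(xml_text)
    shapes, connectors = [], []
    for page in root.iter(NS + "Page"):
        for sh in page.iter(NS + "Shape"):
            text = sh.find(NS + "Text")
            shapes.append(Shape(sh.get("ID"), sh.get("NameU"),
                                text.text.strip() if text is not None and text.text else ""))
        # Visio stores one Connect record per glued endpoint;
        # pair the BeginX/EndX records that share the same connector shape.
        ends = {}
        for c in page.iter(NS + "Connect"):
            ends.setdefault(c.get("FromSheet"), {})[c.get("FromCell")] = c.get("ToSheet")
        for cell in ends.values():
            if "BeginX" in cell and "EndX" in cell:
                connectors.append(Connector(cell["BeginX"], cell["EndX"]))
    return shapes, connectors

SAMPLE = """<VisioDocument xmlns="http://schemas.microsoft.com/visio/2003/core">
 <Pages><Page>
  <Shapes>
   <Shape ID="1" NameU="Start"><Text>Start</Text></Shape>
   <Shape ID="2" NameU="Task"><Text>Do work</Text></Shape>
   <Shape ID="3" NameU="Dynamic connector"/>
  </Shapes>
  <Connects>
   <Connect FromSheet="3" FromCell="BeginX" ToSheet="1"/>
   <Connect FromSheet="3" FromCell="EndX" ToSheet="2"/>
  </Connects>
 </Page></Pages>
</VisioDocument>"""

shapes, connectors = read_vdx(SAMPLE)  # 3 shapes, one connector from "1" to "2"
```

    The real importer naturally grabs more [position cells, guards on the connectors], but the shape/connector pairing above is the core of it.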


    Anyway, once you have the Visio diagram in memory, we need to put it on the UML diagram… First get the model [see below, and click for a bigger picture]. Create a store and load the model; I loaded not only the model but also the diagram [see the method “LoadmodelandDiagram” in the SerializationhelperClass]. This gives me the opportunity to position the shapes on the design surface; it even makes it possible to change their size and colors… maybe I should change the color of the imported shapes to red…


    Once you have the model and diagram, start a new transaction, iterate through your shapes collection… and draw the shapes.
    Below is the drawing of the final shape. As you can see, the code for the position is a bit crabby.


    and finally draw the lines, commit the transaction and save the result…



    There is a lot more work to be done to make it more professional… but everything is in place to accomplish that.

    Posted: Aug 13 2008, 15:53 by Reijnen | Comments (0) RSS comment feed |

    Rosario – Import Visio documents into the diagrams.

    Had some fun last weekend and the past few evenings with Rosario, DSLs and Visio.

    When you read the MSDN forums about Visio, Modeling and Tools, Team Architect and Rosario you see that there are often questions about Visio and the need for export functionality… see this search result list and these threads:

    This answer triggered me:

    there is no way to export to Visio using the April CTP

    David is right with his answer. Visio 2007 doesn’t have an XMI export, and the April CTP doesn’t have an XMI import [yet!!??; I haven’t heard anything about it, but David Starr mentions it in this post: Architecture Modeling in Rosario with Peter Provost. I’m still wondering if you can make a general XMI export/import while every tool uses its own version].
    But, the Rosario UML diagrams are based on the DSL tools… so everything is possible :-)

    So, after some hours of playing… here is the Visio Import Add-in [I don’t think an export is priority 1 at this moment].

    First create or open an activity diagram, right mouse click and select “Import Visio Diagram”…


    Select the diagram you want to import and look at the result…



    I know it’s a very simple diagram, but it works; it even tries to keep the same layout (I didn’t do anything with the positions of the shapes after the import)…

    So, now it’s time to answer that forum post :-)
    I think it’s useful. It’s useful for us, we’ve got tons of activity diagrams in Visio...

    Anyway, time to prepare myself for a trip to Italia… will show some code after this weekend.

    Posted: Jul 30 2008, 17:37 by Reijnen | Comments (0) RSS comment feed |

    Rosario – Create Custom Team Architect UML Diagram-MenuItems

    There are many ways, maybe too many, to extend Visual Studio: macros, add-ins, VSPackages [see: Visual Studio Extensibility Demystified], GAT/GAX, et cetera, and now with Rosario we get even more ways with new features like the Architecture Explorer [see the Create your own Progression Provider post].

    So, we’ve got a lot of ways to add our own functionality. While the Architecture Explorer is mainly for visualizing the architecture of your applications [or binaries], it is also possible to put executable commands there, though it’s not really the best place for that: I think users will get lost when I hide my commands in there, since they are used to the current structure of command bars and context-menu items. We could also use GAT/GAX [I did that with the first ideas around test case generation] to add commands, which works pretty easily too… [note: all the power tools, add-ins, GAT/GAX packages and factories that work on Orcas can also be installed on the Rosario CTP12].


    Anyway, because I think commands should be as near as possible to the thing they act on [for the test case generation that would be the activity diagram], I wanted the command on the activity diagram’s design surface. Not really rocket science, because the Team Architect UML diagrams are based on the DSL Tools.

    First, create a normal Visual Studio add-in project [Creating Visual Studio Add-Ins] and add the necessary code for adding a CommandBar. The only thing you need to figure out is the CommandBar you want to add your MenuItem to… for the activity designer this is “Activity Designer Context”, for the use case diagrams it is “UseCaseModel Context”, and so on… you can easily get all the names by iterating through the CommandBar collection.


    Next, create the MenuItem handler, grab the current file and load the model [see the next code snippet]. From here we can do anything we want with the UML diagram. For example, for the test case generation I only iterate through the diagram [see: foreach (ModelElement … in allElements)], do some magic and create the work items. But you can also add ModelElements to the diagram, remove them or change properties.


    Getting the model diagram is also pretty straightforward…


    Conclusion: adding MenuItems is an easy way to add functionality to your diagrams… although I have to say that this implementation is based on the Rosario CTP12 bits, and I don’t expect them to stay the same while the diagrams evolve. Anyway, for now, it’s a nice way to play with the UML diagrams.

    Posted: Jul 19 2008, 15:06 by clemens | Comments (0) RSS comment feed |

    Rosario – Project Estimating with Team Architect Diagrams

    An idea [and early implementation ] of our “Enable ALM by Automation” vision within Rosario.


    While VSTS with TFS is great at measuring, time tracking, project planning and other project-management kinds of tasks, it misses the early phase where the project team needs to estimate the project. With Rosario Team Architect this important missing piece of Application Lifecycle Management can be realized, by making a “connected” viewpoint for business estimation and measurement.

    Project Estimation.
    I don’t think I have to talk about why there is a need for project estimation [it would be a very long post]; how it’s done, and what needs to be in place to do it “right” [is an estimation ever right??], is more interesting.
    The classic estimation book ‘Software Estimation: Demystifying the Black Art’, a must-read for anybody interested in software estimation, is a good start in capturing the needs for a good-enough software estimation implementation. The following “deadly sins” are distilled from the presentation “10 Deadly Sins of Software Estimation” by the book’s author, Steve McConnell.

    • Confusing targets with estimates
    • Saying “yes” when you really mean “no”
    • Committing to estimates too early in the cone of uncertainty
    • Assuming underestimation has a neutral impact on project results
    • Estimating in the “impossible zone”
    • Overestimating savings from new tools or methods
    • Using only one estimation technique
    • Not using estimation software
    • Not including risk impacts in estimates
    • Providing off-the-cuff estimates

    Before we can use this list and look at Rosario Team Architect and what the diagrams can mean for software estimation, we have to dive into the different estimation methodologies. Luckily for me, somebody already did that in this paper, “An Effort Estimation by UML Points in the Early Stage of Software Development”, and the writers also point out the pain points of these methodologies:

    Common problems with these approaches are lack of early estimation, over-dependence on expert decision, and subjective measurement of each metric. A new approach is required to overcome these existing difficulties. We move upstream in the software development process to requirement analysis and design.

    [table: pros and cons of software estimation practices]

    The estimation style missing from this table is the most interesting one for us: Use Case Points [it’s the topic of the paper, so that’s the reason it’s missing]. Use Case Points is an estimation method based on UML use cases. From UML Distilled:

    Use Cases are a technique for capturing the functional requirements of a system.

    And requirements are exactly what we need as the basis for an estimation.


    Use-Case-Points [UCP].
    So, Use Case Points is based on use cases, with all their pros and cons. But the way it works is pretty easy: when you have your use cases in place you can start counting “points”, the same way as with, for example, Function Points. Some cases are harder to implement than others, so those get more points… easy going.

    To be more precise, a Use-Case Points count consists of:

    • the number and complexity of the use cases in the system
    • the number and complexity of the actors on the system
    • various non-functional requirements (such as portability, performance, maintainability) that are not written as use cases
    • the environment in which the project will be developed

    and ranking:

    • Rank Actors as simple (1 point), average (2 points), or complex (3 points):
      • Simple: a machine with a programmable API
      • Average: either a human with a command line interface or a machine via some protocol (no API written)
      • Complex: a human with a GUI
    • Rank use cases as simple (5 points), average (10 points), or complex (15 points):
      • Simple: fewer than 4 key scenarios or execution paths in the UC
      • Average: 4 or more key scenarios, but fewer than 8
      • Complex: 8 or more key scenarios
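    The counting itself is mechanical, which is exactly why it lends itself to automation. A rough Python sketch of the unadjusted count using the weights above [the technical and environmental adjustment factors of full UCP are left out here]:

```python
# Point values from the ranking above
ACTOR_POINTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_POINTS = {"simple": 5, "average": 10, "complex": 15}

def classify_use_case(key_scenarios):
    """Rank a use case by its number of key scenarios / execution paths."""
    if key_scenarios < 4:
        return "simple"
    if key_scenarios < 8:
        return "average"
    return "complex"

def unadjusted_ucp(actor_ranks, scenario_counts):
    """Unadjusted UCP: actor weight total plus use case weight total."""
    uaw = sum(ACTOR_POINTS[rank] for rank in actor_ranks)
    uucw = sum(USE_CASE_POINTS[classify_use_case(n)] for n in scenario_counts)
    return uaw + uucw

# One simple actor [API], one complex actor [GUI], and three use cases
# with 3, 5 and 9 key scenarios respectively:
total = unadjusted_ucp(["simple", "complex"], [3, 5, 9])  # 4 + 30 = 34
```

    An automated version would pull the actor and use case classifications straight from the use case diagram instead of from hand-made lists.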

    Readings about Use-Case Points:

    From a Rosario Team Architect point of view this is an interesting estimation method… because it can be automated! Not that we should use UCP only because it can be automated, but automation of the estimation process will give us a big benefit in making estimation more mature within the organization.

    Looking at Steve McConnell’s deadly sins, we can imagine that one of the important capabilities we need from estimation tooling is historical data [“Providing off-the-cuff estimates”]. Without historical data, estimation is useless, error-prone and unpredictable; with automated estimations this can be realized. Another important pro of automation is reproducibility: running the estimation again will result in the same numbers, which gives the historical data some more value ;-)

    TFS is a database system, so capturing historical data shouldn’t be a big problem.

    [Process Improvement Journey (From level 1 to Level 5) The Boeing Company]

    Deadly sins tackled:

    • Saying “yes” when you really mean “no”
    • Not using estimation software
    • Providing off-the-cuff estimates

    One other important advantage you get from automating the estimation process with Rosario and UCP is collaboration: collaboration in the early phase of the project lifecycle between business and development. Estimation is important for the business to get budget, and for the project lead for the planning. While working together on the use cases, the business can immediately see what the impact is in terms of budget, so there will be fewer unnecessary and incomplete requirements.

    Drawbacks and points to look at:

    1. Use Case Granularity and Complexity…
    There is no standard for writing use cases. You can define very high-level cases and very low-level, detailed use cases; nobody will stop you from doing that. This is one major challenge in estimating with use cases. When writing too-high-level cases you are in the neighbourhood of the deadly sins “Estimating in the impossible zone” and “Committing to estimates too early in the cone of uncertainty”.

    [ The Cone of Uncertainty from http://www.construx.com/Page.aspx?hid=1648]

    For sure it’s possible to estimate with use cases in an earlier stage of the project using higher-level use cases, the “initial state”, but keep in mind that the uncertainty will be bigger at this stage. As the project evolves, more information becomes available and more detailed use cases are made, so you can tune the estimation; and with version control of previous estimations and use cases you have a mechanism to assess your earlier estimates. With this assessment you can make this process of early estimation in the lifecycle more mature. Actually, this learning process must be in place during the whole lifecycle, for estimation and for everything else [see the Boeing story].

    This problem of differences in granularity and complexity of use cases is recognized by the industry, and many people and organizations have a solution for it. For example, Capgemini uses Smart Use Cases, which are generic use case types.


    I really like this approach of a kind of repository of use case stereotypes, although you must be aware that a use case is technology-independent; adding technology into use cases will make the world fuzzier.

    2. Technology/ platform independence…
    A use case is a document which describes “what” our system will do, at a high level and from the user’s perspective. A use case does not capture “how” the system will do it. Yet it’s impossible to make a “platform-independent” estimation: platforms, technologies, tools and languages all have an impact on the speed of development. While use cases, actually UML as a whole, are platform-independent, an estimation can’t be, so there needs to be a place in the Use-Case-Points methodology for differences in platforms, technologies, tools and languages. With UCP this is minimally done, by the identification of actors.

    Actor identification need technical details: In order that the actor is classified we need to know technical details like which protocol the actor will use. So estimation can be done by technical guys.
    [How to Prepare Software Quotation]

    [ almost 4th of July :-) The signing of the Declaration of Independence 4th July 1776 ]

    Actually, you don’t want technology in use cases; they are meant to be independent, so we need to keep it that way. Another way to put technological knowledge into the estimation is to add this information to the complete estimation. For example, most organizations have already chosen their platforms, technologies, tools and languages, and Enterprise Architecture monitors that every project follows their guidelines. So adding a reference to these guidelines while estimating will bring technology into the estimation. Historical data will need to carry this information, and projects can base their estimation on this guideline-referenced data. These Enterprise Architecture guidelines can be measured up front and will get fine-tuned with every project, thereby identifying the risks of using new technologies at an early stage.

    When capturing this with automation, you have these deadly sins tackled:

    • Committing to estimates too early in the cone of uncertainty
    • Estimating in the “impossible zone”
    • Overestimating savings from new tools or methods
    • Not including risk impacts in estimates

    Deadly sins not tackled:

    • Confusing targets with estimates [for sure, don’t put the estimations in TFS as work items; maybe something work-item-like that the project lead can upgrade to real work items. But that’s already very, very tricky]
    • Assuming underestimation has a neutral impact on project results [don’t assume that]
    • Using only one estimation technique [with one technique automated there is time left for the other…]


    So, there have to be many capabilities in place before we can build a mature automated estimation process with Rosario Team Architect.
    But we can start small… extend the use case diagram with additional “complexity” information [and granularity information], and add a command which captures the use cases and collects the points. The very next step would be historical data, with the possibility of referencing guidelines. In the future we could make a repository of use case stereotypes, like the Smart Use Cases from Capgemini.

    [next post an early implementation]

    Posted: Jul 03 2008, 16:30 by clemens | Comments (1) RSS comment feed |

    Rosario – Create your own Progression Provider

    What are Progression Providers?
    I mean those commands in the Architecture Explorer. For example, the “Insert into Active Diagram” command, which generates a sequence diagram from the selected method, or “Save as XPS…”.
    And because Visual Studio is extensible from top to bottom, it should be easy to make your own commands.



    Why would you want to create your own progression provider? For example, you could make your own “Import from strange format…” or “Export Model to strange format…” commands; XMI, for example. I want a “Generate TestCase…” command for this solution [Rosario Video - Generate TestCases from ActivityDiagram], and I’m also thinking of changing GAX/GAT actions into progression commands, which will save me some installation problems… many possibilities for the progression commands to become useful. Although the user interface could use some usability improvements.

    The Basics…

    The providers can be found in the “C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\Providers\” folder and are loaded at runtime. No configuration and no installation is required. Just put your provider assembly in this directory and the commands will appear in the Architecture Explorer.

    This can solve an interesting problem when you work within different teams.
    The problem with different teams and extensibility is that Visual Studio extensibility is “developer machine” focused and not project focused. For example, when you want to use a GAT package within a project team, all of the team members must install that package. A new team member must start his first day by installing and configuring the packages and customizations used by that project. When you work on many different projects, where each project has its own set of customizations, this can be awkward: you have to maintain as many different environments as projects. With the progression providers’ “assembly only” approach it gets much easier: just copy-paste the assemblies, or, if it’s possible, put them in the same directory as your project [haven’t tried this; maybe there is some kind of registry key or config setting which points to the providers]. We could put them in source control [nice to have]; just a “get latest” would set up your development environment. That would even solve the versioning problem… nice!
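    This “drop a file in a folder, discover it at runtime” pattern is easy to sketch in any language. Here it is in Python, purely as an illustration; the `register()` convention is hypothetical, standing in for whatever the ProviderAttribute does on the .NET side:

```python
import importlib.util
import pathlib

def load_providers(folder):
    """Scan a folder at runtime and load every provider module found in it:
    no configuration, no installation, just files in a directory."""
    providers = []
    for path in sorted(pathlib.Path(folder).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Convention (hypothetical): a provider module exposes register()
        if hasattr(module, "register"):
            providers.append(module.register())
    return providers
```

    The nice property, exactly as with the Providers folder, is that “deploying” a provider is nothing more than copying a file into the directory.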

    The Inspection…

    There are already five providers in the “…\PrivateAssemblies\Providers\” folder; let’s examine them with Reflector. A quick look into the Progression namespace together with the providers shows [see image] that all of them use Microsoft.VisualStudio.Progression.Common and .Interfaces.

    [image: progression namespace]

    Which are the most important classes used by the providers? In Dependency Matrix 2 [below] you can see that the ProviderAttribute and the Translator are used by all of them, and the StateMachine classes by 3 out of 5. So those are the interesting classes to look at.


    In the same way you can see that the classes prefixed with “Provider” most heavily use [depend on] the ProviderAttribute, Translator and StateMachine classes from Microsoft.VisualStudio.Progression.Common.

    So, now that we know where to look, let’s see what these classes do [NDepend: right mouse click, go to Reflector]. First the DSLProvider: as expected there is a Provider attribute and inheritance from Translator. The Translator class has some WPF DependencyProperties and some abstract methods we need to implement.


    The Translator method Added runs immediately when the Architecture Explorer starts, and the method BecomeActive is called at some kind of interval [I still don’t know where it is triggered]. The Tick method is fired by System.Windows.Threading.DispatcherTimer.FireTick, and the other methods have logical names.


    The only class left to analyse is the StateMachine class, and it does exactly what the name says: it’s a state machine. With a little bit of knowledge of the State pattern this class is easy to understand [you can find some background in this issue of MSDN Magazine, in the article “Design Patterns: Solidify Your C# Application Architecture with Design Patterns”].

    A state has an enter, exit and update delegate; these are methods which are executed at the moment of a state change. A state machine can have different states; you create them with the CreateState method, which saves the different states in a Dictionary. For example, when Enter is processed on the state machine it fires the StateEnter delegate. You can also add events, which help with the transitions between states.
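    Based on that description [and only on it; the real Progression StateMachine surely differs in its details], the pattern looks roughly like this, sketched in Python:

```python
class State:
    """A named state with optional enter/exit/update delegates."""
    def __init__(self, name, enter=None, exit=None, update=None):
        self.name, self.enter, self.exit, self.update = name, enter, exit, update

class StateMachine:
    def __init__(self):
        self.states = {}      # CreateState saves the states in a dictionary
        self.current = None

    def create_state(self, name, enter=None, exit=None, update=None):
        self.states[name] = State(name, enter, exit, update)
        return self.states[name]

    def enter(self, name):
        # Fire the exit delegate of the old state, then the enter delegate of the new one
        if self.current and self.current.exit:
            self.current.exit()
        self.current = self.states[name]
        if self.current.enter:
            self.current.enter()

    def update(self):
        if self.current and self.current.update:
            self.current.update()

# Two states mirroring the provider walkthrough below: Idle and ActionHandling
log = []
sm = StateMachine()
sm.create_state("Idle", enter=lambda: log.append("idle"))
sm.create_state("ActionHandling", enter=lambda: log.append("handling"))
sm.enter("Idle")
sm.enter("ActionHandling")   # log is now ["idle", "handling"]
```

    The delegates are just callables stored on the state, so changing what a state does is a matter of passing different functions to CreateState.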


    Enough analysing… let’s make our own Progression Provider.


    The Implementation…

    1. Start a new Class Library project.
    2. Add the Provider attribute and the Translator and IDisposable inheritance.
    3. Add a constructor where we initialise a new state machine and create the states and events.
      This piece of code initialises a new state machine and creates two states, which both have only a StateUpdate delegate [I have to play with these different kinds of delegates; still don’t know what’s the best way…]. The two events both put the state machine in the idle state.
    4. Implement the Added method: we register the action/command so it will be visible in the Architecture Explorer and put the state machine in the Idle state [you can really do a lot with the RegisterAction method; I think this is the most minimal way to use it].
    5. BecomeActive is called constantly at some kind of interval, and I had some challenges with this, which resulted in an empty Architecture Explorer and many errors. Anyway, when everything works it enters the ActionHandling state. That one looks in the pending-actions dictionary for the command it wants to execute; mine is called 0xbb6.
      After this, complete the rest of the methods and do whatever you want…
    6. Finally, copy the created assembly to the “…\PrivateAssemblies\Providers\” directory, start Visual Studio and have fun with YourFirstProgressionCommand…
      This one really does magic… it says something like “Hello….” in a Windows Form.

    Some final words…

    This is really a “Hello World” implementation of a progression command, and there are many areas left for investigation… but it works, and I think many more commands will follow.

    Posted: Jul 01 2008, 08:08 by clemens | Comments (0) RSS comment feed |

    My CodeGeneration Presentation

    Instead of just publishing the deck, I made a kind of storyboard of my session, with links for further reading…



    Introduction, see the About Me page, still have to update this.

    Besides introducing myself, also setting the stage: the TAP project [this project: “Public Case Study Visual Studio 2008 Team Architect and Software Factories”] and the focus during this session: “I talked more about the technical details around the DSL implementations than the real project”.

    The structure of the session: The ideas, tools, solutions, lessons and what’s next.


    The ideas, as always.. Enabling ALM with Automation.
    Introducing [Microsoft] Application Lifecycle Management, ALM Definitions
    Viewpoint Models - Small models helping different stakeholders to work together in their own languages, with a useful view on the application[s]. Not only generating code but also other artifacts necessary for the application [for example, config items] and for the different stakeholders to do their jobs better [for example, test cases]…
    It’s all about Viewpoints.. DSI – ALM – DFO -SDM and Communication..!


    The Tools, with first some background on Microsoft’s modeling strategy DSI and where Team Architect fits.
    See this post: DSI, OSLO and Models in the Lifecycle. Get Prepared..!


    A walkthrough the default functionality of Team Architect 2008, the diagrams.
    A Visual Studio Team Edition for Software Architects Orcas Project Walkthrough

    How Do I- Use the Retooled System Designer in Visual Studio Team System 2008 Architecture Edition


    The pros and cons in using Team Architect 2008…
    Pros: Top-Down design [Top-Down System Design by Delegating Behavior] and deployment validation
    Cons: Implement Application
    (haven’t posted much about this… strange; will do for Rosario ;-)


    Change the implement application feature with the service factory:
    Service Factory @ Application Designer

    The Greenfield
    I still like this example: Amazon Services, the Service Model.


    Challenges in the implementation of Service Factory @ Application Designer:
    Mainly who is responsible for which data and how to re-generate lower level models:
    Service Factory Modeling Edition Extensibility, Regenerate Service Model
    How To Fire a Guidance Package Recipe from the Implement Application Feature of Team Architect
    How To Collect Data from the Application Diagram to create WSSF Service Models


    There is more knowledge in the diagram which we can use… for example, we could generate service agents.
    Model Service Agents with the Service Factory


    I still like the idea of calling it “Lifecycle Portfolio Management” [something like that] instead of ALM, just because we [or our customers] are not developing just one application, but a large number of applications, components, services and, for sure, legacy systems. Visual Studio is hard to use when modeling these large environments. Putting all the projects in one solution isn’t an option. There needs to be something like solution linking, responsibility between different solutions, references… anyway, a workaround is this:
    Autonomous Develop Services for SOA Projects with Team Architect and Service Factory


    While the service consumer and service provider modeling solutions are lower-level [we generate them from the higher-level application diagram], this solution is at the same level as the application diagram… the security model.
    Creating Secure Services, with Visual Studio Team Architect and the Web Service Software Factory
    see also this post for the Design For Operations ideas: It’s all about Viewpoints.. DSI – ALM – DFO -SDM and Communication..!

    Slide 45: lessons, lessons and more… lessons

    After some discussion the session continued with some Rosario things…


    the UML Diagrams
    UML, the Most Wanted Feature in Team Architect
    Rosario - Create Sequence Diagram from Existing Code
    Rosario - Create Sequence Diagram from Binary

    Besides this, a little bit about the backplane and progression…


    Conceptual design, upgraded to physical design and code. In this example, a logical class diagram is used for some idea gathering, and when you find something interesting enough you “upgrade” it to the physical level…
    I like this idea: start drawing ideas on the surface with the business guy [classes, sequences] and upgrade them when they’re good enough…
    Anyway, there are more interesting scenarios, also with DSLs, using a hybrid approach that combines UML and DSLs.
    See this post from Steve Cook: I’ve got a new job working on DSLs and UML!, this one from Cameron Skinner: DSL+UML = Pragmatic Modeling, and this article: UML versus Domain-Specific Languages in Methods & Tools.


    Besides using UML with DSLs, you can also use UML with some kind of methodology… for example the TMap “Process Cycle Test” approach to generate test cases…
    Testing in the Lifecycle [ALM]... a focus on automation
    Rosario Video - Generate TestCases from ActivityDiagram

    More to come on this one…
    Slide 55: TechEd News - UML on tap in Oslo SOA modeler
    Brian Harry shows some new Team Arch models on TechEd
    TechEd Keynote Video

    It’s all about Viewpoints.. DSI – ALM – DFO -SDM and Communication..!

    If you are still interested in the complete deck [I left some slides out of it] after reading this storyboard… post a comment and I will send it to you.

    Posted: Jun 28 2008, 10:04 by clemens | Comments (0) RSS comment feed |

    At Cambridge for Code Generation 2008


    Just arrived in beautiful Cambridge for three days of talking about “Code Generation”. The main players in the field of model-driven development will be present at this event, so it’s going to be interesting.

    I attended this event last year and really liked it. For those who haven’t followed this blog that long: there are some pictures on Flickr, and I also recorded these videos [below] during the panel discussions [bad quality, but I think you will get the idea].

    [Panel members were Tony Clark (Xactium), Steve Cook (Microsoft), Matthew Fowler (NT/e), Allan Kennedy (Kennedy Carter) and Juha-Pekka Tolvanen (Metacase).]

    On the Code Generation site you can find the full MP3s of these discussions…
    UML vs. Domain-Specific? [MP3] and the future of code generation… [MP3]

    Anyway, have to continue with the preparation of my session.

    I decided to show some Rosario stuff “Testcase Generation from Activity Diagram”. Although it isn’t CODE-generation, I hope the attendees are going to like it.

    This is going to be the first slide… :-)


    Posted: Jun 24 2008, 10:44 by clemens | Comments (1) RSS comment feed |

    TechEd Keynote Video

    No need to go to conferences anymore… ;-) here is the video [asx]


    Team Architect with Brian starts at minute 43.

    As I wrote in this post, “I'm curious what he really said…” I can say now, after watching the complete keynote, that he didn’t talk about UML in relation to Oslo. Not that important; I just wanted to be sure…

    A small update from Redmond Developer:

    In the Q&A portion, a developer asked Gates about the UML modeling standards in Visual Studio. He said in part: "We'll have additional support for UML in Visual Studio 10 for the specific modeling tools that are there. Then as we move forward and take the modeling platform to the next layer, we'll get even more ability for you to create your own models."

    Posted: Jun 06 2008, 05:01 by clemens | Comments (1) RSS comment feed |

    Brian Harry shows some new Team Arch models on TechEd

    The layered diagram and new visualization in the Architecture Explorer.


    Just take a look; this is his post: TechEd 2008 Keynote.

    Posted: Jun 05 2008, 17:34 by clemens | Comments (0) RSS comment feed |