Rosario - Create Sequence Diagram from Existing Code
Located on beautiful Orcas Island in Washington State's San Juan Islands, Rosario Resort & Spa has long been a favorite destination for travelers seeking relaxation and unparalleled beauty in the Pacific Northwest.
Anyway, let's look at another kind of "relaxation and unparalleled beauty": creating a sequence diagram from existing code.
The scenario this feature supports is the same as for the physical class diagram and the Architecture Explorer: helping people understand and evaluate the implementation / architecture of existing code by visualizing it.
The dependency matrix visualizes dependencies between artifacts, the physical class diagram visualizes the static relations between objects, and a sequence diagram visualizes the dynamic behavior.
How does it work?
The Architecture Explorer has a set of commands, one of them being "Insert into Active Diagram". This command creates a sequence diagram from the method you have selected [see image]. Nice to know: this set of commands is extensible..! More on that in the future.
After adding an empty sequence diagram and selecting the command, it indeed creates the sequence diagram [image below]. I used the "drag and drop" front-end implementation, the same one I used for the dependency matrix post and the code metrics post.
This diagram doesn't show return types and parameters. For example, the Lookup method looks like this in code:
employee = employees.Lookup(txtFirstName.Text, txtLastName.Text);
Not that exciting, but it would help in understanding the code. Something like this...
[Always wanted to do this, use my tablet to sketch UML diagrams. Probably the last time... took me half an hour to draw this]
Another process to get there...
The physical class diagram [a one-to-one representation of the code, see image below] has a new menu item "Add to Logical Class Diagram...".
After putting all the necessary classes on the Logical Class Diagram, you can create a lifeline for each of them... see the menu item "Create Lifeline..."
From there we can draw the interaction between those classes... and now we've got access to the operations in the classes [see dialog].
You can see that they are connected to the logical class diagram by the little "shortcut" icon on the classes.
An actually interesting scenario would be: generate a sequence diagram, generate a physical class diagram, add those classes to the logical class diagram... and let these two logical diagrams work together.
More to come...
Anyway, this "drag and drop" implementation isn't that suitable for demonstrating the capabilities of the "generate sequence diagram" feature... So, next post: the more sophisticated implementations... For now, it looks good and it's time to get prepared for Vicky's birthday and Queen's Day...
Live Mesh, The Emperor's New Groove...
Just watch Walt Disney's Dumbo with Abel and in the previews there was this...
I had to think about Ray Ozzie and Live Mesh...
I'm not the only one, as you can read in the comments on this post "Ray Ozzie delivers with Live Mesh" from Scobleizer:
Mesh is Groove scaled up
I agree, at first look this looks like something related to Groove. I always liked the underlying Groove platform although the top layer ruined it. This could end up being a great transport for applications that are usually but not always online. Really hoping they deliver on the Mac and device support!
Groovy. I can’t wait to see if Ray has the wherewithall to start cutting some of the MS closetful of assets and let the winners like this shine through. What of Sharepoint integration, Skydrive, Groove, how they fit? Great to see MS play their integration card more seriously these days. Anxious to see it unfold, I’m signing up.
etc etc etc... I had to make this post
Rosario - Architecture Explorer's Dependency Matrix
Pretty graphs... what do they mean?
A quote from Using Dependency Models to Manage Complex Software Architecture [PDF]:
An approach to managing the architecture of large software systems is presented. Dependencies are extracted from the code by a conventional static analysis, and shown in a tabular form known as the ‘Dependency Structure Matrix’ (DSM). A variety of algorithms are available to help organize the matrix in a form that reflects the architecture and highlights patterns and problematic dependencies.
[Neeraj Sangal, Ev Jordan (Lattix, Inc.) and Vineet Sinha, Daniel Jackson (Massachusetts Institute of Technology)]
How do I read them?
Actually very simple, see figure 1:
Image from: Evolution Analysis of Large-Scale Software Systems Using Design Structure Matrices & Design Rule Theory [PDF]
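To make that "very simple" reading rule concrete, here's a minimal sketch [in Python, with made-up modules A, B and C] of how such a matrix is built. I'm assuming the convention where a mark in row R, column C means "R depends on C"; note that some DSM tools use the transposed convention, so check which one your tool follows.

```python
# Minimal Dependency Structure Matrix sketch (hypothetical modules A, B, C).
# Assumed convention: a 1 in row R, column C means "R depends on C".
modules = ["A", "B", "C"]
deps = {"A": {"B", "C"}, "B": {"C"}, "C": set()}

def dsm(modules, deps):
    # One row per module; mark the columns it depends on.
    return [[1 if col in deps[row] else 0 for col in modules] for row in modules]

for name, row in zip(modules, dsm(modules, deps)):
    print(name, row)
# A [0, 1, 1]   <- A depends on B and C
# B [0, 0, 1]   <- B depends on C
# C [0, 0, 0]   <- C depends on nothing
```

A lower-triangular matrix like this one is what you want to see: it means the dependencies form clean layers with nothing pointing back up.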
Why should I care?
You should care about dependencies..! As an architect and as a developer. Dependencies can make code difficult to read and maintain, dependencies can introduce bugs, and dependencies and how they're structured are the key to how well an app holds up as it changes over time. [Code-Dependency Analysis]
If the tasks are tightly coupled, with many cyclic dependencies, the pipeline will stall frequently, and tasks will need to be repeated because of dependencies on tasks that follow them.
In a previous post I used a very simple UI app to demonstrate the code metrics from VS2008. Let's see what those dependency matrices look like.
First implementation: Visual Studio Drag and Drop.
We can see that EmployeeForm [user interface] has a dependency on Employees and Employee [business layer] and that Employees has a dependency on Employee. The last one is obvious; the first one is not good. When we need to change Employees or Employee we also have to change the user interface. This doesn't look like a big impact, but this is a very, very small application; can you imagine the amount of work when there are hundreds of forms?
The second implementation didn't change anything in our dependency matrix. We removed the generated events and let them point to the same methods. We introduced some reuse in the events but didn't change anything in the dependencies.
In the third implementation we introduced some kind of controller, and although it looked promising in the code metrics table, it turned out dramatically in the dependency matrix.
Why? The EmployeesController has a dependency on EmployeeForm, Employees and Employee [not good], but the EmployeeForm also has a dependency on the EmployeesController. So, with this implementation we introduced a cyclic dependency.
Assemblies that have cyclic dependencies are typically more difficult to unit test, maintain and understand. Cyclic dependencies make it more difficult to predict what the effects of changes in one assembly are on the rest of the system.
See also the quote in the "Why should I care?" paragraph.
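This kind of cycle is also easy to find mechanically. A sketch of how a tool could detect it, using a depth-first search over the third implementation's dependencies [class names taken from the example above, the graph itself is my reconstruction]:

```python
# Detect a cyclic dependency with a depth-first search (three-color marking).
# The graph below reconstructs the third implementation's dependencies.
deps = {
    "EmployeeForm": ["EmployeesController"],
    "EmployeesController": ["EmployeeForm", "Employees", "Employee"],
    "Employees": ["Employee"],
    "Employee": [],
}

def find_cycle(deps):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current path / done
    color = {n: WHITE for n in deps}

    def visit(node, path):
        color[node] = GRAY
        for m in deps.get(node, []):
            if color[m] == GRAY:       # back edge: we closed a loop
                return path[path.index(m):] + [m]
            if color[m] == WHITE:
                cycle = visit(m, path + [m])
                if cycle:
                    return cycle
        color[node] = BLACK
        return None

    for n in deps:
        if color[n] == WHITE:
            cycle = visit(n, [n])
            if cycle:
                return cycle
    return None

print(find_cycle(deps))
# ['EmployeeForm', 'EmployeesController', 'EmployeeForm']
```

The form-to-controller-to-form loop shows up immediately; in the first two implementations the same search would return nothing.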
The fourth implementation: the "adapter pattern together with the command pattern and observer". A lot of classes, a lot of colored boxes... but when you study it, it looks good. The EmployeeForm only depends on commands and adapters, the adapters only on commands, and the controller only on Employee and Employees.
The last implementation, where we introduced some interfaces, looks even more promising. [Take a look and figure it out.]
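The idea behind that last step can be sketched in a few lines: let the form depend on an abstraction instead of on the concrete business classes. A minimal sketch [the interface name and data are hypothetical, modeled after the EmployeeForm example; in the real C# implementation this would be an interface plus dependency injection]:

```python
# Sketch: breaking the UI -> business-layer dependency with an interface.
from abc import ABC, abstractmethod

class IEmployeeLookup(ABC):
    """Abstraction the form depends on; hypothetical name."""
    @abstractmethod
    def lookup(self, first_name: str, last_name: str) -> str: ...

class EmployeeForm:
    # The form now depends only on IEmployeeLookup,
    # not on the concrete Employees/Employee classes.
    def __init__(self, lookup_service: IEmployeeLookup):
        self._lookup = lookup_service

    def find(self, first, last):
        return self._lookup.lookup(first, last)

class Employees(IEmployeeLookup):
    # The business layer implements the interface.
    def __init__(self, data):
        self._data = data

    def lookup(self, first_name, last_name):
        return self._data.get((first_name, last_name), "not found")

form = EmployeeForm(Employees({("Ada", "Lovelace"): "Employee #1"}))
print(form.find("Ada", "Lovelace"))  # -> Employee #1
```

In the dependency matrix this shows up as the form's row only marking the interface column, which is why the interface variant looks the cleanest of the five.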
Dependency matrices are great for code review, refactoring, analysis, design review and for watching the evolution of your software. So, a must in every application lifecycle.
There are already "dependency" tools available, NDepend and Lattix for example, which do a great job. I'm curious whether this tool will get to the same level of maturity.
SaaS, Offline SaaS, S+S: does it offer what we need..?
I wanted to post this blog for a while now, as a follow-up to the previous one about S+S and SaaS. Actually I was thinking about skipping it, just because it got too long and I wrote it as an exercise for some other work. Anyway, with the release / announcement of Live Mesh the topic is in the news everywhere, so I decided to post it anyway… happy reading, it's really long ;-)
Everybody works together in a seamless collaborative way and is able to work from every place in the world, unleashing the creativity and innovation of the individual and the crowd [Collective Intelligence], all this enabled by Cloud Computing, S+S, SaaS, Web2.0 and SOA technologies.
[ Picture taken last week on the way to Livigno, I had to stop to take this one ]
Back to the Ground Level…
What kind of capabilities do products need to support enterprises with this ambition?
I want to work anytime, anywhere and on anything. So, the sources I need to do my job, the right information, great tools and the workflow I have to follow must be available. The main question is: "what kind of work do I have to do...". The answer to this question gives direction to the capabilities I need from these tools and platforms, and with that I can decide whether SaaS, offline SaaS or Software plus Services would fit my needs.
For example, I'm a writer, working on a book [to start small]. I can use any kind of tool which gives me the ability to type my prose [Notepad, Office Word, Google Docs, Zoho Writer, Live Writer, WordPerfect, WordPad...] and save it to any location where I can access it. Web-, local-, mobile- or home-storage doesn't matter [hard drive, USB, cell phone, intranet, SkyDrive, Sharepoint, Google Sites, Office Online, etc...]. The most important thing: I must be able to access it wherever I am and I must be able to type my prose anytime, anywhere.
Two capabilities I need as a single writer are:
- typing support
- accessible storage
Besides this I need some kind of input device which supports these capabilities. This could be a typewriter [it also offers typing support and accessible storage]. But we live in a digital world, so this would probably be a personal computer, or when I'm on the road a laptop, an Ultra-Mobile PC or a smartphone, or when I don't own a device I can use public devices [not that realistic; I don't think I would start working if I didn't own an input device].
Anyway, to give me the ability to work everywhere, the two capabilities "typing support" and "accessible storage" should be device independent. When I'm home I use my PC, when I'm traveling I can use my laptop, and when I'm visiting friends I can use their device...
Another capability I need from the tools as a "type anytime-anywhere" writer:
It's not reasonable to write a book on a cell phone. Although sometimes I think my wife does when text-messaging friends.
We have to downgrade this "device independent" requirement. Something like... the device shouldn't be more than 75% less productive than working on a personal computer [100%]. For example, working on a laptop is a little more difficult than on a PC; the keyboard and mouse are less easy to handle and the screen is smaller. So laptops are at 95%. Cell phones are at 5% of the productivity rate, although Millennials are text-messaging on small phones, typing 500 words a minute with their thumbs [Amplify the Impact of Your People with Enterprise 2.0 Technologies]. Smartphones are a little higher at 15%, and UMPCs are around 75%.
So the capability that it must run on any device should be:
- device independent, up to 75% productivity loss
For laptops and any other mobile device, another important "productivity" factor is the environment you are working in. You're mobile, so you can work everywhere, but not every environment is that productive. [This is your Anti-Productivity Pod by Coding Horror]
There are situations where you only want to read on those less-productive devices, but that's not the writer scenario we are talking about.
So far, so good... SaaS with offline capabilities, like Google Docs with Google Gears, would fit this stand-alone writer scenario, and Live Mesh [S+S] would fit too.
Publisher, the collaboration starts...
The book progresses and I've found a publisher who wants to publish it.
The publisher needs my document or documents to print, review and edit them for publishing. So they have to be in a format his tools understand or can convert. Besides this common format, I don't want the publisher to edit my book before it's ready for review.
So, we've got one extra need for the storage and one for the way it's stored:
- Permission levels on the storage
- stored in a common format
The way the publisher gets access to the documents is a bit more complex. In the pre-digital century, writers would bring the whole book in paper format to the publisher or would send it by courier; both were time-consuming, and in the current information age the content would probably be outdated by the moment it arrived.
Sending mobile storage [USB, CD, external hard drive] by DHL's same-day delivery service is faster, but it puts constraints on the "anywhere" rule of my typing experience: I need to be in a place where a DHL employee can pick up my package, and it would still cost a day to deliver my book to the publisher. Not good for reviewing and editing, where we'll probably send the documents several times.
I could send the files by email or during an instant-messaging session [near real-time]. But both systems aren't designed for sending huge amounts of data and, more important, those systems aren't designed for the reviewing and editing process a book needs.
A piece of the communication decision tree made by Dave Pollard.
We're not asking a straightforward question; we want reviewing and editing. The book will probably be reviewed and edited several times by me and the publisher.
Using email for this kind of process, "sending different versions of the same data to different people who can change that data", will end up in phone calls, conference calls and arguing about who has the most recent version and whether all the changes the other made are in that version. It will end in a drama, a great topic for a Stephen King thriller.
Bizarre tales of dark doing and unthinkable acts from the twilight regions where horror and madness take on eerie, unearthly forms…
Some other interesting papers about email and the kinds of tasks it's designed for and used for:
Email is one of the most successful computer applications yet devised. Our empirical data show however, that although email was originally designed as a communications application, it is now being used for additional functions, that it was not designed for, such as task management and personal archiving . We call this email overload. We demonstrate that email overload creates problems for personal information management: users often have cluttered inboxes containing hundreds of messages, including outstanding tasks, partially read documents and conversational threads. Furthermore, user attempts to rationalise their inboxes by filing are often unsuccessful, with the consequence that important messages get overlooked, or "lost" in archives.
Anyway, what kind of capabilities does this kind of review-and-edit collaboration need? First of all, the communication should be possible in an asynchronous way; I'm not going to read and discuss every sentence by phone or in person.
- Asynchronous communication.
The storage of the documents must have version control, or the tool itself must have embedded version control. Tracking changes made by participants and reverting a document to a previous revision are important needs for collaboration systems used for reviewing and editing the same data. Changes made by others can conflict with my book idea, and I want to see what changes others have made.
The publisher and I also want to stay up to date on changes the other made. With email you get a notification with the document attached; the storage should also send notifications when changes are made.
Besides these requirements I still need the ability to work disconnected from the storage I share. So, there are two kinds of storage: the one I share and use when I'm connected, and the one I use locally when I'm not able or don't want to use the shared storage.
Working with different storages, with different people, on the same data brings some synchronization challenges. What happens when two participants are working offline on the same document? Or what happens when I upload an old version of a document?
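One common way a sync system answers that first question is with a revision counter: each client remembers which revision it last synced, and an upload only succeeds when that revision is still current. A minimal sketch [the store class and its behavior are my own illustration, not how any specific product does it]:

```python
# Sketch: detecting a stale upload with a simple revision counter.
# Each client remembers the revision of the copy it last synced ("base").
# An upload only succeeds if that base is still the server's current
# revision; otherwise two people edited the same document offline.

class SharedStore:
    def __init__(self, text):
        self.revision = 1
        self.text = text

    def checkout(self):
        # A client downloads the current revision to work on offline.
        return self.revision, self.text

    def upload(self, base_revision, new_text):
        if base_revision != self.revision:
            return "conflict"          # someone else synced first
        self.revision += 1
        self.text = new_text
        return "ok"

store = SharedStore("chapter 1 draft")
rev_writer, _ = store.checkout()       # the writer goes offline
rev_pub, _ = store.checkout()          # the publisher goes offline too
print(store.upload(rev_writer, "chapter 1 final"))    # ok
print(store.upload(rev_pub, "chapter 1 reviewed"))    # conflict
```

Detecting the conflict is the easy part; deciding what to do with it is where the process agreements below come in.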
The everlasting question: "Do I currently work with the most up-to-date data..?" The publisher wants to know this because he doesn't want to review deprecated chapters, and I want to know whether he already reviewed a scene where I want to change something. This is a challenging problem from a technological point of view and can only be solved with arrangements about the state of documents. For example, I finish a chapter and set its state to "ready for review"; the publisher gets a notification, reviews that chapter and sets its state to "reviewed".
- Process Flow, documents should have a state
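That process-flow capability can be sketched as a tiny state machine plus notifications. The states and allowed transitions below are hypothetical, just following the "ready for review" example:

```python
# Sketch of a document process flow: a state per document plus
# notifications to subscribers. States/transitions are illustrative.

TRANSITIONS = {
    "draft": {"ready for review"},
    "ready for review": {"reviewed", "draft"},
    "reviewed": {"draft"},        # the writer picks it up again after review
}

class Document:
    def __init__(self, name):
        self.name = name
        self.state = "draft"
        self.subscribers = []     # e.g. the writer and the publisher

    def set_state(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state!r} to {new_state!r}")
        self.state = new_state
        for notify in self.subscribers:
            notify(self.name, new_state)

doc = Document("chapter-3")
doc.subscribers.append(lambda name, state: print(f"{name} is now '{state}'"))
doc.set_state("ready for review")   # the publisher gets a notification
doc.set_state("reviewed")           # the writer gets a notification
```

Rejecting invalid transitions is what makes the state meaningful: the publisher can't mark a chapter "reviewed" that was never submitted for review.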
We can go on with these capabilities for a while, but I think we've got the most important ones. Let's focus on some other things.
I didn't focus on writing-tool support and the features such tooling needs. In my opinion it's a personal choice, a personal flavor, what works best for you. I often use Notepad at the start of an article and switch to Live Writer when I'm almost finished. Other people use Word for it… so the features of the word-processing tools aren't that important for this scenario.
Actually, one feature every word-processing tool needs to support, besides typing support, is the ability to copy and paste data from within the tool and from outside the tool. I search for information and save it, I type some notes at different locations. I don't want to recreate those.
Writing tool capability:
- copy and paste support
For tool use we should also make a distinction with a productivity index, just like we did with the devices. But I can't think of a tool which would be the 100% reference tool. So, let's forget that one.
Capability Need vs. Concept Implementation.
Let’s take a look at the different implementations of the concepts [S+S, SaaS, Offline SaaS] and the capabilities I need as a writer together with the collaboration with my publisher.
I made this table with some notes in it... I think there are a lot of discussion points, and I had to guess some features for Live Mesh because I'm not invited to use it yet.
I just used a very simple scenario [business pattern] to discuss the basic capabilities my publisher and I need to do our work. It would get more and more interesting if we started looking at custom enterprise applications and mashups with bigger and richer collaboration needs… but let's keep it at this list for now.
Actually, we can conclude that none of the "currently implemented" concepts [SaaS, Offline SaaS and S+S] for word processing and a little bit of collaboration offers all the features we need. Sharepoint is the best; with Groove attached to it, it even offers synchronization. But we can discuss whether that is an S+S solution [I don't think so].
Anyway, with the typewriter and paper we could only mark the first two capabilities green [typing support and accessible storage]. So we're making progress…
UML, the Most Wanted Feature in Team Architect...
One of the problems I have found in VSTS 2005 is the lack of modeling tools for the Software Architect.
[Modeling and Tools Forum, UML/Modeling Tools you are using? Replies: 20 ]
I am in a looot of problem..I want to make sequence diagram from an existing C# code..
[Modeling and Tools Forum, how to create sequence diagram from c#, Replies: 15]
First, I agree with your observation, Tad, about the lack of support for the Software Architect in VSTS 2005. We did indeed focus on the System Architect not the Software Architect. We had limited resources to allocate to architecture tools, and felt it was more important to support Microsoft’s drive toward connected systems for the 2005 release.
[Jack Greenfield's Blog]
Those quotes aren't overwhelming proof that this is the most wanted feature set. But, you know, everybody wants it.
The current Rosario CTP release of Team Architect [download] supports UML diagrams..!
I attended the Rosario Architecture Edition preview yesterday. Frankly, I'm two orders of magnitude more excited about this than I am about anything else I've seen here at the summit yet.
[Rosario rocks (Architecture edition).]
In the 2005 release there was already the ability to use class diagrams. These are tightly coupled to source files and provide round-tripping. Useful, but not for UML sketching. A Logical Class Diagram has been added to Team Architect, which provides the ability to make your design and "upgrade" classes to the physical layer.
Another capability of the Logical Class Diagram is creating lifelines for those classes in a sequence diagram. Sequence diagrams were already added in the November CTP release of Rosario. In that release there was the ability to draw the sequence between distributed applications from the application diagram; that feature didn't make it into this release [I'll probably write something about that later on...].
Just as you can create lifelines from the logical class diagram, you can create classes from lifelines.
Other diagrams currently supported by Team Architect:
Use Case Diagram
And the Activity Diagram
What everybody immediately thinks... at least I did...
One: this UML tooling isn't as mature as other UML tooling that has been available for ages and has evolved over time. What is the added value? In my opinion the real added value is TFS: create work items, trace design artifacts down [or up] to requirements or code, and for sure "Enable ALM by Automation". With these diagrams we can stitch everything together. For example, we already managed to generate test cases from activity diagrams and import them into Camano. [Rob is going to blog about that soon, and I wrote something about it a few weeks ago.]
All the diagrams are based on the DSL tools, so customization is easy, and with the designer bus [backplane] underneath all the diagrams, collaboration between them is guaranteed.
Two: what happened to the distributed designers? They are still there... but I haven't noticed any further development on them. So, are they still useful? Yes, I think so... not sure, but last week I wrote something about OSLO / DSI and how to get prepared; use those diagrams to get prepared! Probably something will end up in some kind of Oslo tooling (together with the service factory).
Three: do the functional designers have to use Visual Studio? They won't do that. Hopefully there will be something available like Camano, the standalone test tool for testers [also in the April release]. It should be possible with the Visual Studio Shell.
Four: what about Domain Specific Languages and Software Factories? That's a hard one; let's say for now [I'll get back to this topic later on...] that all the UML diagrams are "logical", so they are for sketching, whiteboarding, documentation and conceptual drawings that do not directly relate to code... [see Team System Modeling Strategy and FAQ, What About UML?].
Still enough questions, but first let's start playing with it, and so far... great job [I didn't know you could create this kind of functionality with the DSL tools]!
< more to come >
UPDATE: I forgot to mention... you can create sequence diagrams from existing code [I already posted that on the forum].
"why we need offline capabilities" deprecated
When reasoning about "why we need offline capabilities", I often used the "middle of nowhere" example. As of today, that one is deprecated.
I got a message from friends, who emigrated to Zambia to start working in Kasanka National Park, that they uploaded this video to YouTube [video] while they were counting 30,000 animals in the middle of nowhere...
Now that I know this... it's time to pay them a visit.
DSI, OSLO and Models in the Lifecycle. Get Prepared..!
OSLO vs DSI vNext.
There is a lot of buzz around Oslo, and it looks like all the ideas around this concept are completely new. But when you take a look at the vision behind Oslo, it's not that new; it's actually the next step in the maturing of Microsoft's Dynamic Systems Initiative [DSI] from a few years ago.
What is OSLO?
Making a new class of model-driven and service-enabled applications mainstream.
Deliver a world class and mainstream modeling platform that helps the roles of IT collaborate and enables better integration between IT and the business. The modeling platform enables higher level descriptions, so called declarative descriptions, of the application.
Ron Jacobs talks about Oslo in this video...
[always interesting to listen to Ron Jacobs but from minute 14 it gets interesting]
Key points of Oslo are:
- Models (Making models a mainstream part)
- Services (Extending services from the client to the cloud -- S+S).
- Integration (Limit the boundaries between Business and IT and within IT departments)
An important part of the vision is that the models exist throughout the whole lifecycle. So, they don't only exist during analysis and design.
Another idea is that all the different models are connected, so all the different viewpoints [operations, security, application, environment, etc.] stay in sync and enable integration. "Enable ALM by Automation" [I talked about this in some previous posts].
Besides this application lifecycle management support with models, there is a focus on S+S application types. Products launched with this concept in mind are also counted under the Oslo umbrella and should also support this modeling vision, for example BizTalk Services [a must-visit link: BizTalk Labs] and the Internet Service Bus.
Some Oslo links:
What is DSI?
DSI is a vision from around 2003, ages ago...
Microsoft has established the Dynamic Systems Initiative (DSI) to build software solutions that facilitate the movement to the Dynamic stage. DSI describes a vision where IT systems become self-aware and self-managing. From a core technology perspective, DSI is about building software that enables knowledge of an IT system to be created, modified, transferred, and operated on throughout the life cycle of that system. These core principles—knowledge, models, and life cycle—are the keys in addressing the complexity and manageability challenges that IT organizations face today.
Key points of DSI are:
- Knowledge: building software that enables knowledge of an IT system to be created, modified, transferred, and operated on throughout the life cycle of that system.
- Models: the System Definition Model (SDM) provides a common language, or meta-model, that is used to create models that capture the organizational knowledge relevant to entire distributed systems.
- Life cycle: connecting Business, Development and Operations by providing integration between the various tools used and activities performed within each of these capabilities.
Some DSI links:
What do OSLO and DSI have in common?
Models in the Lifecycle..!
The Products... [DSI]
Visual Studio 2005 Team Edition for Architects was the first product with designers / models. The VSTA designers [application diagram, logical datacenter diagram, deployment diagram] were the first implementation of DSLs [Domain Specific Languages] with SDM as the language.
The Dynamic Systems Initiative (DSI) is a commitment from Microsoft and its partners to help IT teams capture and use knowledge to design more manageable systems and automate ongoing operations, resulting in reduced costs and more time to proactively focus on what is most important to the organization. The System Definition Model (SDM) is a key technology component of the DSI product roadmap that provides a common language, or meta-model, that is used to create models that capture the organizational knowledge relevant to entire distributed systems.
Quote from System Definition Model Overview White Paper.
SMS and MOM are the other products which supported SDM. SDM later evolved to SML [SML Insight blog].
The model-based management functionality in Windows Server 2008 is based on Microsoft's System Definition Model (SDM) version 3, which provided the basis for the Service Modeling Language (SML) proposal and submission to the World Wide Web Consortium SML Working Group.
SCCM 2007, SCOM 2007 and Windows Server 2008 are also based on SML.
And there are a lot more products with models in them nowadays, all of them standalone. During the past Orcas TAP program I worked on some ideas to connect those models.
The products... [Oslo]
Visual Studio "10" is one of the products that's going to support the Oslo vision [see image from Ron Jacobs video].
What can we see about this in the recently released Rosario April CTP? Not much... although we can see a shift in focus in the Architecture Edition [more models] and an investment in the modeling tools [designer bus]. We have to wait for other releases to get more implementation details... A visit to BizTalk Labs is interesting to get some ideas around the "cloud" services [Identity Services, Connectivity Services and the BizTalk Labs SDK].
Prepare for Oslo.
Not much news around "Oslo" products, but this doesn't mean we have to sit down and wait. The mind-switch and the internal culture are more challenging than the adoption of new development products.
First, developers, operational managers, the business and everybody else involved in software development must start working together, for example developers and testers [collaboration between test and dev: Rob Kuijt talks about this; collaboration between developers, security managers, architects and operational managers: Creating Secure Services, with Visual Studio Team Architect and the Web Service Software Factory]. Seems like an open door, but it's more challenging than it looks, because it's mostly about the internal culture: how people collaborate. This collaboration should first be supported by processes; within Oslo it's going to be supported by models.
Second, start working with models; there are already a bunch of models available. The mind-switch it takes is big. Architects can start working with Team Architect, developers with the Web Service Software Factory Modeling Edition, analysts with the CTP 12 UML diagrams, operational managers with SCOM, etc etc etc... Get experience. Give some control away to the models...
Third, start taking a look at Software + Services / SaaS. Take services from the cloud and look at what you need. For example, SLAs are interesting, and what kind of capabilities do you need from those cloud services? [Software plus Services [S+S] vs Software as a Service [SaaS]: The Battle]
Anyway, it's an interesting time...
Just before my traveling time they made this small video about our case study in Belgium.
Strange to look at this while I'm already deep-diving into Rosario... and also very strange to see myself on screen talking.
Anyway, take a good look... you don't see me with a tie that often.