ProSymmetry Recognized in the 2022 Gartner Magic Quadrant for Adaptive Project Management and Reporting

[On Demand Webinar] Portfolio Risk Mitigation Using Resource Driven Predictive Analytics

Thu, Aug 5, 2021

Join us for a discussion with Sean Pales, Founder and President of ProSymmetry, where you will:

• Understand how to identify near-, mid-, and long-term risks to your project portfolio and related strategies through the lens of resource management
• Learn techniques to better forecast and manage resources regardless of organizational maturity
• Walk through real-world examples of resource-driven predictive analytics
• Utilize advanced visualization to become a better communicator to the C-suite

Transcript: Portfolio Risk Mitigation Using Resource Driven Predictive Analytics

Greg:
Good morning/afternoon wherever you are. I’d like to introduce Sean Pales, who will be our presenter today. If any questions come up during this session, go ahead and type them in the questions box in GoToMeeting and we’ll make sure we get to them at the end of the session or as we go. We’ll also send a copy of the presentation and the video after the session, and if you have any questions at the end, we’ll tell you how to get ahold of us. With that, I’d like to introduce Sean. Take it away!

Sean:
My name is Sean Pales and I’ll be taking you through today’s presentation. Our focus today will be leveraging resource portfolio management data for risk management. More specifically, how to utilize resource management data to identify risks to your project portfolio before they arise; hence the “risk avoidance” in the title. As part of today’s presentation I’ll be using a series of slides along with our product, Tempus Resource, as well as our upcoming Tempus Inside Plus product, which includes a range of advanced reporting, report delivery, report alerting, annotation, and some data science features. So let’s get started.

One more point to make everybody aware of before we get started: the methods I’ll show today should sit at the front of your portfolio management and strategy execution processes, and ideally in front of leadership. They do not replace the risk mitigation and risk management processes you have in place today, which are typically run by the folks handling execution, the project managers and others tasked with those activities. So this sits on the front end, to help us avoid many of those risks we’re unfortunately so accustomed to running into.

We’ll start with a few slides and then get into some specifics on implementation. To be very clear about the purpose of the presentation: our goal is to put in place an early warning system that identifies risks to the portfolio, and really the business, before they come about, using tools, techniques, and some sophisticated algorithmic approaches, machine learning, etc., applied to both static and dynamic resource management data. We want to identify portfolio risk early so we can avoid it. And we’ll talk about these things in much more detail. What is a static predictive analytic? What is a dynamic one? What makes up those analytics? What do we consider as part of our models when evaluating those levels of risk? We’ll get into all of that.

Just by being here, odds are you’re pretty interested in this topic, but in any case I always like to start these things by discussing why anyone should care. Also I expect most practitioners to say either out loud or in their heads, “I’m already doing risk management.” Or “My organization already has a risk management process.” So again, why should I, why should you care about this? There’s lots of reasons, obviously, but I’ve got a handful of them here that I think are quite poignant.

First, today’s approaches are essentially reactive. The risk has already occurred and you’re not working to avoid it; your goal is to mitigate. Mitigation is the act of reducing the severity, seriousness, or painfulness of something; at least that’s what the Google definition says. You’re going to experience pain; it’s just a question of how much. Your focus really should be placed on building a foundation that avoids the risks in the first place, and resource data can be especially useful to that end.

Second, your critical resources have a disproportionate effect on strategy execution. You know who they are: which roles, which skills, which people. We’ll cover this in more detail, and there’s been a really good test of it running for the past 18 months, which we’ll get to in a second as well. From a risk management perspective, this means those key resources, roles, skills, and positions may also represent a greater risk to the portfolio. Think about factors like sourcing complexity, upskilling complexity, attrition rates, and others, especially in today’s market. And getting back to that 18-month test that just doesn’t want to end: I know everyone goes to the Covid topic, but for resource management, is anything more applicable? Think about those who have been away from the office for whatever reason due to Covid. We won’t get into the gory details; obviously it’s affected everybody. But there are certainly some resources whose absence has had a disproportionate effect on our ability to actually deliver. If anything has proven how important resource management data is in predicting success in strategy, delivery, and project execution, Covid has illuminated it. It’s been a phenomenal test for identifying the disproportionate effect of certain resources on delivery.

Third, most current mitigation methods are cost heavy. The impact-times-probability calculations we see all the time are pretty common, but they’re typically devoid of any resource management related costs. The financial impact of resource management based risk can be substantially higher, and predictive analytics using resource management data can and should make use of these data. Again, think of Covid. Think of those folks you had to replace, not on the gory side, but just those who decided, “I don’t want to be in an office any longer. I no longer want to dedicate time to this profession.” Whatever the reason, replacing those critical resources is absolutely painful. We’ve experienced this, and I know anyone on this call has likely experienced it as well. It also brings to the fore work by my colleague, Donna Fitzgerald, who has done a number of presentations touching on other aspects of this. A lot of people think of these things as touchy-feely, not part of the execution process, but they are absolutely critical items with real, painful consequences if we don’t pay attention to them. These are just a handful of the reasons why you should care.

So, how do organizations view and manage risks today? Clearly the overwhelming majority are not using the predictive potential of their resource management data. We generally see the following. First, risks are most commonly tracked and managed by those tasked with executing the projects or the audits or the programs, whatever you’re delivering. If risks are tracked only by the project manager or the person responsible for executing and managing the actual project, then your risk management really is 100 percent reactive. There’s no proactive, radar-type system of the kind we’re talking about here today. It’s not the fault of the PMs or those folks managing the projects; it’s not in their purview, it’s not their job. Risk avoidance and implementing an early warning system happens at a higher level and ideally involves greater organizational collaboration.

Second (and this is moving off to the right here, to the little shark image), we saw this image in the prior slide, but I’m repeating it here because it really is important. I also just really like the picture. What it brings to mind is how many key decision makers and participants responsible for more proactive risk avoidance efforts are simply disconnected from the project execution process itself.

Third, most measurements of risk reflect probability times impact, which is usually an exercise in futility. I’m sure I’ll get plenty of push-back on this, but why use the probability of occurrence for something that has already happened or is going to happen? And the impact is rarely linear, especially when managing a project portfolio that may consist of products, hundreds of projects, etc., and certainly many hundreds or thousands of resources. Resource management data offers large volumes of well-formed quantitative data with which we can better predict and avoid risk.
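To make the nonlinearity point concrete, here is a toy sketch in Python; all the numbers and the convexity exponent are hypothetical choices for illustration, not drawn from any model or client data:

```python
# A classic probability-x-impact score treats risk as linear, but losing
# critical resources rarely scales linearly.

def linear_risk_score(probability: float, impact: float) -> float:
    """Conventional risk score: probability of occurrence times impact."""
    return probability * impact

def nonlinear_impact(resources_lost: int, cost_per_resource: float) -> float:
    """Hypothetical convex impact: each additional critical resource lost
    hurts more than the last (replacement cost, knowledge loss, slippage)."""
    return cost_per_resource * resources_lost ** 1.8

# Linear scoring says losing 4 people is 4x as bad as losing 1...
print(linear_risk_score(0.25, 4 * 50_000))   # 50000.0
# ...while a convex impact model suggests it can be far worse.
print(nonlinear_impact(1, 50_000))           # 50000.0
print(nonlinear_impact(4, 50_000))           # ~606,000
```

The point is not the specific curve; it is that a score built for single, linear events understates compounding resource losses across a portfolio.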

Why does resource management data offer so much value for risk management and predictive analytics? Beyond the fact that you have a lot of it, here are a handful of reasons that we think are really important.

First, resource management is structural. It’s systemic. It’s the foundation on which you build. If the foundation is shaky or crumbling, the structures built on top will fail; it limits what you can build. The stronger the foundation, the bigger and more complex the structures that can be built on top of it: in effect, more projects, more complex projects, and outsized returns on strategy. This is one of the key reasons resource management data is so useful for predictive analytics. You can also think of resource management data as potential energy, and we’ll talk about that more as we get to the specifics: it tells us what can actually be converted into kinetic action.

Second, and this one is pretty interesting: resource management data, when used for predictive analytics, offers an outsized side benefit of organizational change and improved alignment. Using resource management data requires a top-down view of the portfolio and the integration of teams and organizational groups who mutually benefit from it. Often these are groups that are not talking, and that’s another one of the issues behind it. Many of the emerging methods leverage these philosophies; if you look into emerging techniques that leverage entrepreneurial thinking, it’s the same idea. Smaller entrepreneurial ventures do this intrinsically because all these functions are typically interconnected or intertwined. In larger businesses they’re often split out and scaled, which causes additional challenges, and this approach can certainly help unwind some of that.

And third, this image is one we’ve used for years to preach the benefits and importance of resource management for strategy execution. What’s the point of it? Without resources, nothing gets done. You’ll notice the last funnel there, and right above it is where we select projects: project ranking and sorting, project prioritization, and the funding for the project, which I can borrow, raise, or steal from elsewhere. But ultimately nothing gets done without the people. All the work flows through them, regardless of your methodology: waterfall, Scrum, Agile, SAFe, all of the above, mixed mode, multi-modal, whatever comes next. Regardless of the delivery method, resource management is the funnel through which all the work has to go. Without the resource assignments nothing happens, and that’s why it’s important for predictive analytics.

So, moving on to the actual implementation of these things. As we get to it, you should think of resource management predictive analytics as belonging to two distinct classes: static and dynamic. Of course there are also opportunities to merge the two, and you could do some really clever stuff there, but to keep things clear for today’s presentation we’ll focus on static versus dynamic. And we’re not just going to stay high level and then jump to the demos; we’ll talk through specific inputs and effective routines to gauge and manage risk.

On the static data: this really represents (I’ve said it a few times and I’ll keep saying it) the potential energy of your organization. It’s akin to your favorite sports team. For me, that’s the Cleveland Browns; if you don’t know them, they’re an American football team. They were good in the 60s, but it’s been a number of years since, and for many, many years they really stunk. They were absolutely terrible, and this was mostly predictable before each season. The team has a variety of positions, and each position’s quality and depth can be qualitatively and quantitatively graded. From those data, along with historical factors and some external factors we can mostly quantify, it’s possible to form a prediction on the number of wins and the general level of success the team can have. This is similar to our aim of using static resource management data to gauge the potential energy of our resource portfolio.

Analytics around static resource management data is where most organizations don’t even look, and this one is confounding. We’re not sure why. It could be organizational and software silos and a lack of communication, for example between HR or workforce planning and the delivery or execution related functions. That’s pretty common; we often don’t see those groups talking. Maybe it’s different in your business, which would be great for this type of work. Static data are so valuable because they can tell us, before we apply our tools, our resources, to solve problems: Do we have the right tools? Are our tools of high or low quality? What is the likelihood that our tools break? Do we have intragroup distributions that aren’t beneficial? Again, the potential energy of your resource portfolio lies here. So we’ll spend some time on static data and on how we can use static measures to predict, using a range of tools: machine learning, statistics, and others. How many projects of different types can we get done next year? What is the intrinsic risk in the various headcount types, skills, or roles we’re using? How many stories can we complete? Epics? Whatever you’re delivering, whatever you’re measuring, there are opportunities to predict what can actually be achieved and take action accordingly.
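As a flavor of what predicting from static measures can look like, here is a minimal sketch; the data, the single-feature model, and the “average skill rating” measure are all assumptions for illustration, not ProSymmetry’s method. It requires Python 3.10+ for statistics.linear_regression:

```python
# Fit a simple regression of historical project throughput against a static
# "potential energy" measure of the resource pool, then predict next year.
from statistics import linear_regression

# Hypothetical history: (average skill rating of pool, projects delivered)
history = [(2.1, 38), (2.4, 44), (2.8, 52), (3.0, 55), (3.3, 61)]

ratings = [r for r, _ in history]
delivered = [d for _, d in history]

slope, intercept = linear_regression(ratings, delivered)

current_pool_rating = 2.9  # today's static measurement of the pool
forecast = slope * current_pool_rating + intercept
print(f"Predicted deliverable projects next year: {forecast:.0f}")
```

A real model would bring in more features (attrition, sourcing complexity, external factors), but the principle is the same: the static state of the pool predicts what it can convert into delivery.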

And then we have dynamics. This is where most folks want to focus; it’s what they’re used to looking at. This is like game day: What’s the weather? What does the field look like? How are people running? How did we perform on the last play? These are things many of you are probably measuring to some extent, maybe just not putting together to gain insight into potential risk avoidance strategies. The common questions, which we’ll get to in more detail in a moment: when assigning resources to work, are we planning more than we can deliver? That’s your typical supply/demand analytic. Do we have half years around delivery? Looking year after year in terms of effort, on an adjusted scale, are we planning above or below what we actually delivered historically? Are we replacing generic resources with named ones? These are common things people look at, but one that’s not looked at as heavily (we’ll talk more about it) is impulses: are there huge month-to-month jumps that can stress the team to the point where it can’t deliver and there are longer term repercussions? Do we have resources split among high volumes of projects, programs, or tasks? Is context switching a major problem across your portfolio? These are the things we consider as part of dynamic analytics.

So, moving on, what we’ll do next is jump between the slides and some live demos. We’ll look at capturing some of this information in our product, Tempus, and then at some of the possible outputs achievable using our predictive analytics. For that we have some dashboards and various controls or cards built out to give us measurements. Before we jump in, the key factors we’ve built into our model to determine the risk in our static data are those that flag resources deserving extra attention: sourcing complexity, upskilling challenges, and historical attrition rates. And one of the most important factors that all of you should be thinking about measuring and managing, and you may have the data today, though odds are you don’t, is skills data. Our product makes it very simple to capture this, either as a manager, through integration, or by allowing end users to input their own skills data. Simply measuring skills is absolutely critical, so the process of listing out and enumerating the skills we capture and use for forecasting, planning, management, analytics, etc. is important. Beyond the capture itself is understanding the distribution disparity. If we’re ranking people on a scale of one to five, are they tightly clustered on the ones, or tightly clustered on the fives? Do they cluster tightly at one and two with a handful of fives around a resource that is absolutely critical to delivery? That intragroup distribution becomes really important when looking at the overall risk and potential energy of your underlying resource portfolio.
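To make “distribution disparity” concrete, here is a minimal sketch; the one-to-five scale matches the discussion above, but the specific metrics and field names are illustrative assumptions, not the product’s internal math:

```python
# Quantify intragroup skill distribution. A group tightly clustered at low
# ratings with a handful of 5s carries concentrated key-person risk.
from collections import Counter
from statistics import mean, pstdev

def distribution_disparity(ratings: list[int]) -> dict:
    counts = Counter(ratings)
    experts = counts[4] + counts[5]          # people who can carry the work
    novices = counts[1] + counts[2]
    return {
        "mean": round(mean(ratings), 2),
        "spread": round(pstdev(ratings), 2),  # how unevenly skill is spread
        "expert_share": round(experts / len(ratings), 2),
        "novice_to_expert": round(novices / experts, 1) if experts else float("inf"),
    }

# Hypothetical group: clustered at 1-2 with two critical 5s.
print(distribution_disparity([1, 2, 2, 1, 2, 1, 5, 5, 2, 1]))
```

A high novice-to-expert ratio with a wide spread is exactly the “handful of fives” pattern described above, and it can be computed directly from a captured skills matrix.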

Another important factor, one that some organizations look at more heavily than others and some simply set aside, is pay and wage information. Clustering of highly paid versus low paid resources (and vice versa) can yield additional information about the riskiness of your portfolio at rest and even during execution. It can also be used in place of skills data in the near term; in some cases it may not be as tightly correlated as you believe, but it can work. For example, if you are still building out your skills inventory, and for big businesses that takes time because there can be debate about the level of granularity and which skills to track, pay and wage information might be much more easily attained and used for these initial analytical purposes. And there are other things that can be uncovered there beyond risk management.

Another factor to consider is sourcing complexity. Here we start to get into items where there should be more interconnection between workforce planning, HR, and the delivery groups, whether that’s product, IT, audit, or other groups in the business. Even where that connection exists, these data may not be there; it’s surprising how much is not captured. But sourcing complexity absolutely matters, especially now. The markets have gotten so tight for skill sourcing, especially for the highly skilled, that what used to be a local hire has become a regional, extra-regional, national or, in some cases, global hiring challenge, and it’s gotten that way for a huge volume of businesses. That has driven up the complexity of sourcing a number of key roles, so this is important.

Also understand your upskilling complexity. Know, whether by group, role, or position, whether you have a strategy for upskilling, and if you do, some level of quantitative measure for moving people across those ranges. It doesn’t have to be perfect; any level of detail here can be useful in forming better risk metrics and predictive analytics. Clearly these are not linear jumps: if we grade resources on a scale of one to five, going from one to two has a certain cost, going from two to three another, and they’re certainly not the same; a jump from four to five may be exponentially harder than the earlier jumps. So these are things to measure and consider.

Also look at historical attrition rates. Covid has certainly thrown a wrench into that calculation and adjusted it pretty significantly for a lot of the businesses we interact with, including our own, but it is certainly another factor to take into account.

So with that said, let’s exit the slides and jump to an instance of Tempus, which I may have to refresh. What I wanted to show quickly is where this information is captured and how we set these things up in Tempus, especially on the skills side. For those of you who have Tempus, and those of you considering it, we capture two kinds of data on every resource. There are attributes, which could be skills, locations, departments, things of that nature, and you can create an unlimited number of them. We also capture a skills matrix, which is completely configurable and can be set by the user or by a manager. So if we head into our resource pool and pick any particular resource — we’ll jump into Larry here — we see Larry has a list of attributes specifying department, employment basis, position, etc. These can be set with Excel, directly through the UI, through the API, you name it. Underneath each resource there is also a skills matrix feature which allows us to grade resources along various continua, according to various measures, etc. You’ll see in this case we have service skills, solution-related skills, and industry experience. This is all completely configurable, so once you have some idea of what these attributes and matrices should look like, it’s very simple to get them added to the application. Once they’re in, they’re available for a broad range of purposes, and we’ll eventually get to the predictive analytics side.

Just to show you quickly, these measures can be used for a host of purposes, one of them being the actual execution of projects. For example, if we drill into any one of these projects and go to our allocations grid, check this out: whether we’re building a team, running workforce request workflows, or using the simple resource-replace functionality inside projects, these skills data and other resource information are usable for all of these purposes. So beyond predictive analytics, they’re all available for use at time of execution. Skills, attributes, and all these other items are very easily added to the application, and they also surface in a host of built-in reports as well as customizable reports. This one in particular is a global summary of skills and proficiencies in those skills across the organization.

Now, with these data in the system, along with our historical information on delivery across various projects, our future forecasts, and a host of other factors like attrition rate and sourcing complexity, we are able to pull together much more sophisticated capabilities. I’m going to take us to a dashboard, our data analysis. Taking those items into consideration, along with a host of other factors, we can use our proprietary risk models on both static portfolio data and dynamic data to identify not only the key resources across the organization, using a number of factors, but also the riskiness of any of those particular positions. On top of a resource being critical to delivery across your project portfolio, there are the additional risks I mentioned earlier: attrition rate, sourcing complexity, and intragroup distribution. What I have in this static resource portfolio management risk analysis is, at the top, my key resources based on the setup (I have a sample dataset), and you’ll see, based on data from across the system: product engineers, business development, management, manufacturing engineers, business analysts, project managers, developers. We’re able to derive a risk score based on those key factors, shown as high, medium, and low.

Further below (and I haven’t shown all the controls; we’ll do more of them in our dynamic predictive risk analytic), this scatter shows me a few things. I’m plotting distribution disparity, which tells us how widely, or inefficiently, resources are distributed within the group. The farther we go to the right, the more unevenly things are distributed, meaning we’re heavily weighted toward a small number of highly skilled people and lots of unskilled. Along the Y axis we’re looking at attrition rate, and the size of the bubble is drawn from sourcing complexity. As I get to the upper right, you can see that my product engineering group is a massive risk to the business: we have an unbalanced pool of resources, a high attrition rate because people leave pretty frequently, and they’re really difficult to source. So combining a range of factors that are in our resource pool, there’s very little we have to do to gauge that, in this case, product engineers, and it looks like business analysts, are both going to be ranked high on our risk score. We haven’t really done anything besides capturing our resources, building out our skills inventory, and giving the product a few additional pieces of information.

What we can take forward from here is more predictive analytics around delivery capabilities given these risk levels and historical delivery capabilities. That’s where, below, we’ve added two additional predictive analytics based on our static data to help us forecast. Given these risk levels, historical attrition rates, and the other factors, we can predict going forward, both cumulatively and monthly, our project execution forecast. You can see here we go from executing two projects in January out to around 70 projects in total by the end of the year. You can see the variability in delivery along with the bands: we have our forecast as well as upper and lower boundaries, all using this proprietary risk model and predictive analytic. So the value of using your static resource data is massive, and it should allow for better communication and collaboration between the folks in charge of sourcing these resources and those tasked with managing the portfolio and delivering the projects. Hence the need to break down certain organizational boundaries, which I think are starting to come down in some cases given the reaction to Covid, but also just because of methodology and process. Driving that boundary out can substantially help communications, and of course leveraging these predictive analytics helps better inform that discussion.
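For intuition about how factors like these might roll up into a high/medium/low ranking, here is a minimal sketch; the weights, thresholds, and role data are assumptions for illustration, not the proprietary risk model:

```python
# Combine distribution disparity, attrition rate, and sourcing complexity
# into a single static risk score per role, then band it high/medium/low.

def static_risk_score(disparity: float, attrition: float,
                      sourcing_complexity: float,
                      weights=(0.4, 0.35, 0.25)) -> float:
    """All inputs normalized to 0..1; returns a weighted score in 0..1."""
    w_d, w_a, w_s = weights
    return w_d * disparity + w_a * attrition + w_s * sourcing_complexity

def risk_band(score: float) -> str:
    return "high" if score >= 0.66 else "medium" if score >= 0.33 else "low"

roles = {
    # role: (disparity, attrition, sourcing complexity) -- hypothetical values
    "Product Engineer": (0.85, 0.70, 0.90),
    "Business Analyst": (0.60, 0.55, 0.65),
    "Project Manager":  (0.30, 0.20, 0.35),
}
for role, inputs in roles.items():
    score = static_risk_score(*inputs)
    print(f"{role:18s} score={score:.2f} -> {risk_band(score)}")
```

The scatter described above is essentially the same three inputs plotted directly: disparity on X, attrition on Y, sourcing complexity as bubble size.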

Moving back to our slides: that’s the static side of things, so moving forward we now have our dynamic resource predictive analytics. The factors we include in our model are outlined below, and we’ll drill into them in more detail as we move forward. There are a number of factors that play a role in creating a predictive analytic around dynamic data, and the ones we consider are listed here. First is concurrent assignment data. Research from a range of sources, academic and otherwise, shows that the context switching challenge is real. It’s a major problem, and any of us who have had to move across a large number of projects at any given point in time know it has an effect on productivity. So concurrent assignment data, or concurrent project allocations, is something we factor into our model, and we can very easily capture it out of Tempus. If you know how to do forecasting in Tempus, we’ll jump in and take a look: we have single project forecasting as well as bulk project forecasting that lets us very easily build those out.

We also have the factor of unallocated assignment data. Many forecasting processes start by allocating to placeholder resources: we don’t know the name of the person who will be on the project, but we do know it is a developer, network engineer, product manager, verification engineer, design engineer, whatever it may be. We know the quantities we need, and these may be passed to resource managers or others for review and approval, or simply updated as more information becomes available. But eventually the generic resources must be replaced with named ones; otherwise there’s a gap in delivery, or at least in the accounting of delivery. So this is another important factor we can look at, with various levels of impact for the near term, midterm, and long term. Longer term, we expect this to trend toward 100 percent, where everything is allocated to generic rather than named resources; in the near term we expect it to flatten out to zero, so that everything is allocated to named resources or considered canceled work.
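A minimal sketch of that generic-versus-named measurement; the data layout and the 25 percent near-term threshold are illustrative assumptions:

```python
# Compute the share of forecast hours still assigned to generic placeholder
# resources, by month. Near-term months should trend toward 0% generic;
# far-out months naturally sit near 100%.
from datetime import date

def generic_share(forecast: list[tuple[date, str, float]]) -> dict[date, float]:
    """forecast rows: (month, resource_kind, hours) where resource_kind is
    'generic' or 'named'. Returns the generic share of hours per month."""
    totals: dict[date, float] = {}
    generic: dict[date, float] = {}
    for month, kind, hours in forecast:
        totals[month] = totals.get(month, 0.0) + hours
        if kind == "generic":
            generic[month] = generic.get(month, 0.0) + hours
    return {m: generic.get(m, 0.0) / t for m, t in sorted(totals.items())}

rows = [  # hypothetical forecast data
    (date(2021, 9, 1), "named", 900), (date(2021, 9, 1), "generic", 100),
    (date(2021, 10, 1), "named", 500), (date(2021, 10, 1), "generic", 500),
    (date(2022, 3, 1), "named", 50),  (date(2022, 3, 1), "generic", 950),
]
for month, share in generic_share(rows).items():
    near_term = month < date(2021, 12, 1)  # assumed near-term window
    flag = "  <-- risk: too generic this close in" if share > 0.25 and near_term else ""
    print(month.strftime("%Y-%m"), f"{share:.0%}{flag}")
```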

We also have the fundamental supply/demand portion of this, which, if you’re familiar with Tempus, is the RAR2 type reporting: what are we planning versus what is our capacity, absent other factors? At a very high level, can we deliver what we’re planning, even on our wish list, based on what our capacity looks like?
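The check itself is simple arithmetic; here is a minimal sketch with hypothetical monthly numbers:

```python
# Basic supply/demand: planned hours per month versus capacity.
capacity = {"2021-09": 5200, "2021-10": 5200, "2021-11": 4900}
planned  = {"2021-09": 4800, "2021-10": 5600, "2021-11": 6400}

for month in sorted(capacity):
    gap = planned[month] - capacity[month]
    status = f"OVER by {gap}h" if gap > 0 else f"ok ({-gap}h slack)"
    print(month, status)
```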

We then have historical delivery versus projected delivery. This one is very typical, and it’s amazing how many clients and customers don’t look at these data, especially if they’re capturing actuals directly in an application like Tempus or others. The question is: what did we deliver historically versus what are we attempting to deliver going forward? This can of course be adjusted; you can form metrics based on resources, even something as simple as resource counts, or as simple as year-over-year delivery. If, all else being equal, we’re planning to deliver 10,000 hours’ worth of effort in a month, and year over year, adjusting for headcount changes, we’ve delivered 8,000, how do you expect to deliver the additional 2,000? It seems fairly simple, and it’s still something many don’t address, but it is a key input into driving predictive analytics on the delivery side.
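Here is that worked example as a minimal sketch, with the headcount adjustment made explicit; the figures are hypothetical, matching the paragraph above:

```python
# Compare planned delivery against demonstrated historical delivery,
# adjusted for headcount change.
last_year_delivered_hours = 8_000   # monthly actuals, historical
last_year_headcount = 40
planned_hours = 10_000              # monthly forecast, going forward
current_headcount = 42

# Scale history to today's headcount before comparing.
adjusted_baseline = last_year_delivered_hours * (current_headcount / last_year_headcount)
shortfall = planned_hours - adjusted_baseline
print(f"Adjusted baseline: {adjusted_baseline:.0f}h; "
      f"plan exceeds demonstrated delivery by {shortfall:.0f}h")
```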

We also have something that’s maybe a bit trickier to measure, but it’s all in Tempus: flow rate. Think of it like a traffic pattern. If traffic is jammed bumper to bumper, slowdowns have a much bigger effect downstream; someone at the front brakes and that can create a huge bottleneck many cars back. Whereas if there’s enough space between the cars, projects move smoothly, you can merge on and off the highway more easily, and things get accomplished in a timelier fashion. That’s a clean flow rate. When we have impulses or adjustments where things are so highly and tightly allocated, we run into delivery challenges, and that pattern is fairly well established.

We also have, as I mentioned a minute ago, measures of impulse: the rate of change month over month is another important one to look at. It’s much like running pressure through pipes: if there’s a constant flow rate and then suddenly a massive escalation in pressure, that can expose vulnerabilities in the system, just as it can in your product delivery. There are more sophisticated measures, but a simplistic view is just the month-over-month change: what effect does it have on the resource pool and especially on project delivery? This also ties into some of those softer items, burnout and other resource related issues, if you’re interested in them. The softer items have a pretty outsized effect now, given the power workers hold in the current state of the market.
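A minimal sketch of the month-over-month impulse measure; the series and the 20 percent flag threshold are illustrative assumptions:

```python
# Flag months where allocated hours jump sharply month over month -- the
# "impulse" that stresses a team even if the annual totals look fine.
hours = {"Jan": 4000, "Feb": 4200, "Mar": 4100, "Apr": 5500, "May": 4150}

months = list(hours)
for prev, cur in zip(months, months[1:]):
    change = (hours[cur] - hours[prev]) / hours[prev]
    flag = "  <-- impulse" if abs(change) > 0.20 else ""
    print(f"{prev}->{cur}: {change:+.0%}{flag}")
```

On this hypothetical series, April spikes roughly 34 percent and May corrects back down, the same shape as the demo data discussed later.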

Then we have others, like extended periods of over-allocation (how long is that sustainable?) and extended periods of context switching.

And then of course we have things like preference bias: within the pool itself, what does preference for specific resources look like? How does it tie into that intragroup distribution? We can glean additional details from that information.

We use these details to form a machine learning model, and that model then generates, essentially, an expectation of control. We have a prediction of what our expected level of control looks like when the business is running and we’re operating efficiently — roughly, the mean expected control level — and then upper and lower boundaries designed to tell us when we’re deviating from control, basically when we’re getting out of control. We use that model to determine where we have upcoming risks.
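The mechanics resemble a statistical process control chart. Here is a minimal sketch using standard three-sigma limits and a hypothetical monthly risk index; the real model described above is proprietary and more sophisticated:

```python
# Establish an expected level of control from history, then flag months that
# drift toward or past the upper/lower control limits.
from statistics import mean, pstdev

risk_index = [0.42, 0.45, 0.40, 0.44, 0.43, 0.47, 0.41, 0.78, 0.46, 0.44]

center = mean(risk_index)
sigma = pstdev(risk_index)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # upper/lower control limits

print(f"center={center:.2f} UCL={ucl:.2f} LCL={lcl:.2f}")
for i, value in enumerate(risk_index, start=1):
    if not lcl <= value <= ucl:
        print(f"month {i}: {value:.2f} OUT OF CONTROL")
    elif value > center + 2 * sigma:
        print(f"month {i}: {value:.2f} warning: approaching upper limit")
```

On this data, month 8 triggers the warning: a substantial deviation that approaches, without quite crossing, the upper limit — the same pattern the dashboard below highlights.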

If I take you out of this screen and we jump into a different dashboard, let’s go into dynamic analysis. What we’re doing here is using a model to form a predictive analytic around control, and we’re using a statistical process control type chart to help identify, over time, where we get out of bounds. Where are things out of whack? We generate a consolidated risk score using the process control diagram at the very top, and it consolidates the risk scores generated for three distinct skills. For the sake of the demo I’ve obfuscated some of the values; you’ll see them as Skill A, Skill B, and Skill C. So again, we’re looking at data through the lens of resources, but in this case dynamically: how they’re allocated, how they’re deployed. You can see from the sub-measures, which roll up into the consolidated risk score, that things here are generally fairly close to control, but we go out of bounds in April, where things really tick up. There’s a substantial deviation, nearly at the upper limit threshold of going completely out of control. We can drill in to look at more detail, and you can see, for example, just how close we are to the upper limits. So I can very quickly identify that I’ve got something major coming in the April timeframe. For the demo we’re looking at an entire year, but you could look at this going out as many years as you have data. The assumption here is that you might be looking at this at the end of 2020 or early 2021; hence this becomes part of that radar, identifying that in April we’re going to deviate substantially out of control, nearly to our upper limit, based on something happening in one of these skill groupings.

Down below you’ll see the three constituent analytics, which use the same model; they roll up into the one at the top, and I can see by skill where we begin to deviate. There is a deviation here for Skill A. Skill B, nothing; this one’s fairly in control, relatively speaking. But Skill C is where we have this massive spike. There’s a huge jump in Skill C, so this gives me a first indication that I need to look at Skill C: let’s say it’s the end of 2020, and four months away there’s something cataclysmic coming.

Now, I made some adjustments in the underlying data for the sake of the demonstration, but underneath this are the supporting controls. Inside our dashboard I have the constituent members (I didn’t do this in the static report; in retrospect I should have, and in the next demo I will). You’ll see I’ve added a slicer at the top so I can look at Skill A, Skill B, and Skill C and quickly highlight them or filter them out of reports. For this discussion I’ll just highlight Skill C, and we can look at these various inputs, most of them pulled from the slide I showed a moment ago. The first and probably most dramatic, mostly because it corrects in May, is the rate of change month over month. For most months there’s a small incremental change, shown here as decimal values: five percent, four percent, three percent, seven percent, some negative percentages. But in one month, April in particular, there’s a giant spike in additional work for Skill C: it jumps 34 percent. Immediately in the month after, there’s a huge downtick that corrects it back essentially to the mean, or near it. So this is one of the key inputs; we’re just pulling these data out of Tempus into our reporting environment to identify the rate of change.

We also look at “concurrency.” What we’ve done inside this particular control is look only for cases where, on average (and we’re not using a pure average; we’re using some other math), there are high levels of concurrency within particular skills. It’s not as simple as an average, because you can have a one and then a six, and you get the idea; it gets a bit more granular and a little more complex. If we drill in, we’ve added a slicer here to look only at levels greater than or equal to four. This now shows me concurrency levels by month for my various skills, and again, for Skill C in the month of April we jump to a concurrency level of nine across projects, which is very clearly a major problem. If needed, we can drill in to see the specific people who are concurrently allocated across projects or tasks, whatever it might be. Then you’ll see a drop from nine to eight to seven, down to four, and then we’re off the chart entirely as soon as September. I can use this slicer to tune the report even more finely and look at more detail. Again, there’s our Skill C as the major culprit. This measure of concurrency is a critical input into our overall predictive analytic.
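Here is a minimal sketch of a concurrency measure that avoids the plain-average trap described above by using a high percentile instead; the data, the percentile choice, and the threshold of four are illustrative assumptions:

```python
# Measure concurrent project assignments per resource, rolled up by skill.
# A high percentile surfaces the one resource on nine projects that a plain
# mean would hide behind a colleague on one project.
from collections import defaultdict

# (month, skill, resource, project) -- hypothetical assignment rows
assignments = [
    ("2021-04", "Skill C", "res1", p)
    for p in ("P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9")
] + [
    ("2021-04", "Skill C", "res2", p) for p in ("P1", "P2")
] + [
    ("2021-04", "Skill B", "res3", p) for p in ("P1", "P2", "P3")
]

per_resource = defaultdict(set)
for month, skill, resource, project in assignments:
    per_resource[(month, skill, resource)].add(project)

by_skill = defaultdict(list)
for (month, skill, _), projects in per_resource.items():
    by_skill[(month, skill)].append(len(projects))

for (month, skill), levels in sorted(by_skill.items()):
    levels.sort()
    p90 = levels[min(len(levels) - 1, int(0.9 * len(levels)))]  # crude 90th percentile
    flag = "  <-- context-switching risk" if p90 >= 4 else ""
    print(month, skill, f"p90 concurrency={p90}{flag}")
```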

We also have our conventional supply/demand; we’ll jump to that in a moment inside Tempus, in our legacy reporting as well. Again, you can see where we start to jump beyond capacity levels: 6,400 hours planned versus 4,900 hours of capacity.

We’ve also drawn a component, or chart, that shows us our unallocated percentages. Farther out, you’ll notice this returns to basically 100 percent, where everything in the future is planned against generics and has not been allocated to named resources. But in the near term, look at Skill C: it also has a spike higher than we would expect in April, where we still have around three quarters of our allocated work assigned to generic resources. It has not yet moved to named individuals. So that’s another input into our predictive analytic.

Below that I’ve exposed two more controls, one of them being flow rate, which I mentioned earlier. This is really our traffic pattern over time. These are all fairly well managed until the end of the year, when everything goes a bit sideways, but again it’s the April timeframe when we have these big spikes in our allocation levels, and hence our flow pattern is interrupted. So we’re painting a picture that’s very simple to understand, and ideally predictive analytics should make it that way. It should be very clear where the issue exists once we’ve catalogued the skills and have our forecast and execution data input into Tempus.

And lastly, this is the simpler comparison: what did we do last year versus what are we planning this year. Instead of just drawing the delta, which we could, I wanted to show prior year actuals side by side with the current year plan. In this case we’re looking to deliver 6,400 hours of work, and last year we delivered slightly over 5,000. So again, a misalignment between capability and expectation. We use these factors, along with a few other inputs, to drive the predictive analytics, and of course a summary level as well. This is just another example of an implemented, dynamically focused resource portfolio management predictive risk analytic.

For those of you who are unfamiliar, these data are all simple and easy to track inside Tempus. They can be captured on individual projects: if you look at any individual project here, it’s exceptionally simple to capture forecasts either at the project level or at the task level, by hours, cost, FTE, or FTE percentage; there’s also person days. We also offer two separate datasets, allocation and demand, which might be used at different points in time, and there are flags in the application to tell the reports which dataset is active. Beyond the forecast we can also capture actuals, which can be input manually, uploaded from Excel, or captured in the integrated timesheets that are part of Tempus. You have the ability to look at what you delivered and, as the data in the system grow, to better predict where you’re headed based on those data.

In addition to single project capture, the product also includes bulk forecasting capabilities. This is quite unique and a differentiator: we can forecast against multiple projects and multiple resources simultaneously inside one screen, with concurrent editing, real-time heat mapping, etc. All these data feed the predictive analytics you saw a moment ago. They also feed a range of built-in reports, supply/demand analytics, other report writers, and even things like what-if analysis. This is another resource allocation reporting screen that ties into our what-if analysis engine, so on top of our reports we can jump directly to our what-if analysis capabilities and look, in real time, at the effect of making changes.

So, an extensive array of techniques to capture these data and adjust this information live or as part of a what-if analysis, all feeding our variety of advanced reporting capabilities as well. And this is just a quick overview of the range of capabilities I’ve assembled for today’s presentation and several others. It lets us track things ranging from strategy execution to financials, revenue and cost information, as well as our predictive analytics around resourcing. It’s a wide array of capabilities built into the product.

Jumping back as we wrap things up: we’re going to be at a number of events, assuming they all happen (knock on wood). In September we’ll be at the Society for Human Resource Management event in Las Vegas; come visit us there. In October we’ll be at the Project Challenge show in London as well as the Gartner IT Symposium in Orlando. And in November we’ll be at the PMO Conference in London and the Gartner ReimagineHR Conference in San Diego. We’d love to do more, but we’ll see what things look like. With that, thank you very much for attending; I hope this was beneficial. This is the first time delivering this specific presentation, so your feedback will be quite useful in improving the delivery and modifying how much we show in the demos. To continue the conversation, get a free demonstration, or get a free trial of the system with your data, you can reach out to me or to Greg, who started the presentation. Email is always preferred (spales@prosymmetry.com), or you can visit our website, https://prosymmetry.com, for more information, videos, and content, or of course to submit a request for a demo or a trial. With that, Greg, I’ll wrap things up and hand it back over to you.
_____________________________________________________________________________________
Greg:
Excellent! Thank you so much, Sean. We’ve got a couple of questions that came through, so I’m going to address some of them here.

Q: What is the average size company that has adopted this tool?
A: I would say it really varies. You could be a company with a department that’s managing just one particular group. Generally, think about how many people you have on a team that you’re assigning to work. If you’ve got one, two, three, or four, you’re probably not going to need to leverage this. But we’ve seen groups with 25 people they’re managing who want a better handle on how they’re allocating those resources to projects, all the way up to large enterprise scale, tens of thousands of people at a semiconductor company. So it really depends on the projects and how complex they are, and when I say complex, it could just be that you’ve got a lot of people working on different projects. So I see it from small all the way up to large.

Q: What about contingency resources?
A: So there could be another line saying, here are the issues you’ve got with your current resources, but also layering in what contingency resources you could hire. That is absolutely something to consider. The model we’re currently using doesn’t place a lot of weight on that particular topic because it assumes your critical resources are defined, and we make some assumptions about how we identify what’s considered critical. There are elements of this whole model that do have to be tweaked per client; there’s no one-size-fits-all machine learning model. There are generic characteristics that can be tremendously helpful, but there are also specifics to each business that require additional analysis. So we recognize there will be certain businesses where things like this and other factors play an outsized role versus our core model. That’s a sort of service we provide for clients that engage us to deploy this type of solution. No question about it: contingency resources would have to be quantified and factored into the machine learning, or whatever other dynamic programming might be part of the model.

Q: How is attrition calculated? Is it based on when resources have been added and removed from Tempus or manually inputted?
A: There are a few options. We’ve done a few things using term dates, for example; often our clients will capture a start date and a term date. We’ve also recently introduced the ability to capture negatives inside certain forecasts, so some clients are starting to input attrition essentially as a project, where they can build out what the attrition rate looks like for roles or skills literally on a monthly basis. Some folks in professional services, and others in contract research, already have well-formed information on this and need a place to put it. So that’s one option in Tempus; otherwise we lean pretty heavily on date information across the board, literally attributes against resources. And with that negatives capability we can get much more sophisticated in how we look at it, if you have those data already.
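As an illustration of the term-date approach, here is a minimal sketch; the date fields and the simple leavers-over-headcount formula are assumptions, not Tempus’s internal calculation:

```python
# Derive a yearly attrition rate per role from resource start and term dates.
from datetime import date

resources = [
    # (role, start_date, term_date or None if still employed) -- hypothetical
    ("Developer", date(2018, 3, 1), date(2020, 9, 30)),
    ("Developer", date(2019, 1, 15), None),
    ("Developer", date(2020, 6, 1), date(2021, 5, 31)),
    ("Developer", date(2020, 8, 1), None),
]

year = 2020
# Headcount: anyone employed at any point during the year.
headcount = sum(1 for _, start, term in resources
                if start <= date(year, 12, 31) and (term is None or term >= date(year, 1, 1)))
leavers = sum(1 for _, _, term in resources if term and term.year == year)
print(f"{year} attrition: {leavers}/{headcount} = {leavers / headcount:.0%}")
```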

Greg:
Fantastic! That looks like the final question. Again, everybody, thank you so much for attending. Feel free to send Sean or me an email. You’ll get a follow-up with a copy of the presentation, and once the recording is finished and set up, we’ll send you that as well. Feel free to visit our website. Hope everybody has a great day and a great afternoon.
