Case Study

Webinar - how to leverage the cloud to advance your scientific initiatives

Turbot Guardrails automates regulatory and compliance controls for HCLS customers

Turbot Team
3 min. read - Jul 08, 2020

Disclaimer: Automated Transcript

All right, welcome, and thank you all for joining. We are excited to work with our partners at AWS and Turbot today. We have lots of good content, so let's dispense with introductions. My name is Michael Reiner, and I'm president of RCH Solutions. My background in life sciences includes experience in manufacturing, regulatory, and clinical areas, and I've spent most of my career, and most of RCH's work, in the early discovery and development areas. David, how about you? Hi, I'm David.

I'm CTO and I lead the services organization for Turbot. I've been in IT for over 25 years and bring a great deal of experience leading highly regulated companies through cloud transformation efforts, both from within those companies and by helping them along the way. And I'm Andrew Bostrom, a partner solutions architect here at AWS, focused on healthcare and life sciences, really helping those organizations navigate the cloud journey, compliance, architecting for security, and so on, with a background in healthcare IT. Pleasure talking with you.

All right. So if we take a look at this first slide, you'll see this is a much different landscape than we had several years ago. If we look at the big trends in the industry, they're all forcing us to reevaluate how we look at our current systems, our standards, and our workflows. These things are all disruptive, but they're exciting as well, especially if we look at them as opportunities. They are all agents of change, for lack of a better term, that can truly transform business and make a real impact. Starting in the upper left, we have medical innovations: targeted drug solutions, gene-based therapies, and other patient-centered approaches. Moving over to the right: other than people, data is the most important asset of our companies, right? We have lots and lots of data, and it's coming from many sources, so much so that the leadership of our companies now includes roles like the chief data officer, and that's important. These transformative technologies are opening up new doors. So what transformative technologies are we talking about? We're talking about artificial intelligence, GPU computing, and our primary topic today, which is the cloud. Not only are new businesses emerging because of what I just mentioned, but the services they need are changing as well. All of these things, together and individually, are threatening the traditional pharma culture;
however, we can see them as opportunities: opportunities to innovate, to get better results much faster and much more accurately than ever before, to advance the life sciences. So what are we going to talk about today? We're going to cover trends, possible solutions, and customer successes, and finally we'll leave you with some takeaways you might apply to your own work. More specifically, Andrew is going to talk about the depth and breadth of AWS in life sciences, David will discuss Turbot and the cloud configurations and controls it enables, and finally I'll talk about transforming efficiencies from one environment to the other, and we'll leave you with some takeaways for accelerating research. Thanks. Andrew, you want to kick it off?

Yeah, absolutely. I wanted to give a brief background on where AWS is today and how that can help the life sciences industry. AWS is proven and established across millions of customers in pretty much every industry and geography, ranging from startups to enterprises and everything in between, and they're running every imaginable workload on AWS. In life sciences, advanced technology has really become the norm in pursuit of data-driven decision-making, acceleration of precision medicine, and other things that require that high-tech edge. So it's really time to merge these benefits and allow both organizations to grow. We've seen some key trends. The first is a real explosion of data: the amount of data generated, analyzed, and collected is growing astronomically, and when it scales by an order of magnitude in approximately ten years, organizations really have to change the way they analyze and visualize data in a cost-efficient manner. Second, pharmaceutical companies are facing increased pressure to get to market quickly; the boost to market share associated with being first to market helps recoup that ever-increasing drug discovery and development cost.

Traditional IT models and internal systems can lack the agility to support that pace, so we see organizations needing agile infrastructures and the ability to leverage ever-improving technologies. And with new technologies and new therapeutics coming out, potentially biopharmaceuticals aimed at more specific conditions, gathering the evidence to support efficacy and get reimbursement for those technologies is a growing challenge. Capturing in-market, real-world data requires new approaches, and the explosion of data available via IoT sensors, natural language processing, and so on means that these approaches must be scalable to meet big-data demands. Last up is the fact that we're seeing research and development happen across the globe: collaboration between numerous countries and numerous stakeholders. That distributed research means that data sources and data ingestion face some challenges, security being one of them, so being able to leverage a network of worldwide data centers that provides redundant power and secure connectivity is one critical answer. So some of the AWS

benefits that we see fall into four key areas. The first is that we can help you decrease the time to get real business value from your research data. We also provide pay-as-you-go pricing, so instead of expensive over-provisioning and then having your infrastructure sit idle, the cloud allows you to create a high-performance cluster for the one hour a month that you actually need it. There's also room in artificial intelligence and machine learning; we're seeing a lot in drug discovery and manufacturing, where organizations predict success rates based on biological functions and model absorption and safety scenarios, and also in clinical trial research, where organizations are able to more easily identify candidates based on potentially unstructured text in their medical records or genetic information. AWS capacity is also increasing at an unparalleled rate: we're adding the compute capacity of a Fortune 500 company pretty much every day. That increase in capacity drives increasing automation around processes and practices, so companies that rely on AWS don't go down as often. Companies can also spin up resources as they need them, deploying hundreds or even thousands of servers in minutes; sometimes that scale reaches tens of thousands or even millions of virtual servers or containers that are brought online just as they're needed and taken back offline once the task is done. AWS is also infrastructure custom-built for the cloud, so all elements are designed to intercommunicate well and present the smallest attack surface possible. In addition to the software design, the physical security controls in our data centers are designed to be the most stringent in the world. That has led AWS to be fully trusted by governments, military organizations, global banks, healthcare institutions, and other high-sensitivity regulated industries. Finally, our security team is monitoring the infrastructure all day, every day, and is connected with all the major security watchdog groups and vendors to ensure that potential threats are identified immediately. They're doing that at a massive scale, which is something that really sets AWS apart: we can look across a million active customer accounts running every conceivable workload, we can see issues that may only occur once in a billion operations, and when we remedy those issues, it's for the entire platform. That level of scale and response simply isn't achievable on your own. Some of the specific compliance and security controls on AWS include protections for life sciences organizations in the pharmaceuticals, biotherapeutics, medical device, and medical application space. AWS allows IT teams to set up fine-grained access control, auditability, and automated guardrails to create these environments and run regulated workloads following GxP. The consistent and controllable infrastructure means that you can create templates that allow you to use your infrastructure throughout your organization with a high degree of consistency.
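The consistency idea above, a baseline template that every deployed environment is expected to match, can be sketched in a few lines. This is a minimal illustration in plain Python, not an AWS or Turbot API; the `find_drift` function, the field names, and the values are all hypothetical.

```python
# Illustrative only: a toy drift check between a baseline "template" and a
# deployed configuration, both expressed as plain dicts. The field names
# (encryption, logging, public_access) are hypothetical examples.

def find_drift(template: dict, deployed: dict) -> dict:
    """Return the settings whose deployed value differs from the template."""
    drift = {}
    for key, expected in template.items():
        actual = deployed.get(key)
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    return drift

baseline = {"encryption": "aes-256", "logging": True, "public_access": False}
environment = {"encryption": "aes-256", "logging": False, "public_access": False}

print(find_drift(baseline, environment))
# Only the "logging" setting has drifted from the template.
```

The value of the template approach is exactly that the comparison is mechanical: the same check can run against every environment, every time, instead of relying on a point-in-time manual review.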

This is something like infrastructure as code, where you know what template you're running. AWS also gives you deep control over who can affect elements of your infrastructure and when, where, and how they do it. You're also able to repeatedly test your environment: when you build and validate your applications on AWS, you're using infrastructure software products instead of physical hardware, so you can repeatedly test and monitor your environment for compliance rather than manually doing the point-in-time activities you might be doing on premises today. AWS also allows for highly repeatable testing, so you can test infrastructure via API and do that much more frequently than you would ever be able to when you manage your own infrastructure. Along with that comes automated traceability: when you're deploying to AWS software infrastructure, you also use tools to automatically log a wide range of activities in your environment, including how your infrastructure is deployed and how information is accessed and configured. That gives you increased traceability, making it easier to support audit requests. Specific tools like CloudTrail give you a high degree of traceability: you can see who interacted with your environment, when, and what they did. Lastly, AWS has excellent data privacy and encryption tools. AWS does not use or access customer content for any purpose other than as legally required for maintaining AWS services or providing them to customers, and AWS customers have the ability to encrypt their data in transit and at rest, with fine-grained access controls and rapid access to those audit trails for forensic analysis. We also have a concept called the shared responsibility model, where AWS delivers the security of the cloud, plus expert guidelines and resources to help customers with compliant application development, but the customer is responsible for security in the cloud, meaning they develop, validate, and secure their applications based on due diligence and expert consultation. We also have many resources available to support customers, including a GxP white paper, HITRUST certification, a HITRUST white paper and reference architecture, and other tools like that. And for those that also want a helping hand, there are tons of partner solutions out there, like Turbot.
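The traceability question a CloudTrail-style log answers, who interacted with the environment, when, and what they did, amounts to filtering an event stream. Here is a minimal sketch in plain Python; the event shape (`time`, `user`, `action`) and the example data are simplified and hypothetical, not CloudTrail's actual record format.

```python
# Illustrative only: answering "what did this user do?" from a list of
# audit events. Real CloudTrail records are richer JSON documents; this
# simplified shape just shows the query pattern.

def events_by_user(events: list, user: str) -> list:
    """Return the action names a given user performed, in log order."""
    return [e["action"] for e in events if e["user"] == user]

trail = [
    {"time": "2020-07-08T10:00:00Z", "user": "alice", "action": "CreateBucket"},
    {"time": "2020-07-08T10:05:00Z", "user": "bob",   "action": "PutObject"},
    {"time": "2020-07-08T10:09:00Z", "user": "alice", "action": "DeleteObject"},
]

print(events_by_user(trail, "alice"))  # ['CreateBucket', 'DeleteObject']
```

Because every action is logged automatically, an audit request becomes a query like this rather than a manual reconstruction of what happened.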

So David's going to talk a little bit about that.

Great, thank you so much. That was a great intro, and I loved hearing a little bit about all the capabilities that Amazon brings to life sciences in this space; it's an incredible feature set that is available to you. What we were just talking about is the part of security that Amazon handles for you: they do a tremendous job of automating the software development life cycle of the code that they deploy, and they have great tooling in place to monitor what's going on within their environment. There's a similar set of capabilities that, as a customer of AWS, are the responsibility of your security team, and Turbot is a software tool that helps you through all the best practices of configuring the Amazon capabilities that exist out there to provide security for your environment, as well as to accelerate your journey in this space. As you see on the slide in front of you, we're an Advanced Technology Partner with Amazon. We've been working in this space as partners with Amazon for five years, and we're one of the only software companies that holds both the security competency and the life sciences competency. Turbot is trusted because of this experience: trusted by a large number of life sciences organizations across the space. These are some of our public use cases, organizations that have done presentations at re:Invent or other shows with us, and they work with us for a number of different reasons. The first is to accelerate cloud adoption. If you're moving into the cloud and this is new, you want to put your best foot forward; you want to make sure that you're aligned with all the best practices that Amazon and other organizations publish around the security, the compliance, and the control of your environment. How do you do this? We can accelerate cloud adoption by giving you, out of the box, a set of automations, of guardrails, that you can apply within your environment, and that really drives the strategy forward: it lets you start on second base instead of being up at the plate. One of the key mechanisms by which we do this, and I'll talk about it in more detail, is helping isolate workloads and automate across those isolated workloads within your environment, ensuring compliance and security, especially for regulated industries. We talked earlier about how this is used by governments, by the financial sector, and of course by the life sciences sector.

Right, and so companies operating in this space have unique regulatory requirements and internal requirements, and special needs in the design and implementation of their cloud strategy, to ensure that they're getting the most out of it. Finally, the other thing that we find is that life sciences companies like to merge and join with each other. There are a lot of mergers and acquisitions, but also a lot of joint ventures, where industry works together with, for example, higher education on research projects, where data needs to come together for a short period of time under joint ownership. The rules by which you manage that joint ownership are important: the ability to specify them in advance and have software tooling check and say, look, all parties' data is being protected in the same way, by the same agreed-to rules, across all those parties. So those are some of the use cases where Turbot shines in the life sciences industry, in conjunction with partners like RCH. The cloud presents unique governance challenges to life sciences organizations because of three things. The first is agility. The business, whether that's a data scientist or an application team, wants to move applications into the cloud, or bring some data to the cloud to do some analysis on. They're moving to the cloud and taking advantage of these capabilities because of the agility that environment brings: the business has a need, and the cloud provides a capability on demand that helps them achieve that goal. At the same time, the expectation that you, as an organization, whether that's IT or the business, are able to protect your data, that you're holding up your end of the bargain in terms of security in the cloud, is very real, and when you make mistakes it can mean a headline in a way that in the past it wouldn't have. So those expectations around what you should be doing, what you can be doing, and all the new capabilities that you now have to protect your environment,

making sure you configure those correctly: those are all things that are important for you to know, understand, and act on all the time. And then finally there's the control aspect. In the past, in large enterprise organizations, we had processes in place to deploy infrastructure, because of the way capital funding occurred and because of the actual physical security of the data center; it wasn't possible to do things out of process. Today, with the cloud, infrastructure is defined, if that's the right word: we're defining our network, our software application stack, our infrastructure spec, as code and deploying it, and that is the best practice. But when you're able to leverage these technologies, it puts a huge burden on your ability to govern them and review their architecture. How do you do an architecture review of an environment that is dynamically provisioned? These three things really come together and create a unique challenge, especially for life sciences, which is used to having very rigid processes for review, architectural review boards, et cetera, and that's really where Turbot shines; it's our focus. So what I'm going to do in the next section is talk about a couple of those use cases: how we see typical organizations moving into the cloud and going through their cloud adoption, some of the pitfalls they hit, and then a slightly different approach that we think will accelerate what you're doing. In this first model, the organization has a need to use the cloud, but they haven't really decided on a governance model for their deployments yet, so they're stuck in the bottom left-hand corner, stuck in viability, trying to decide how to approach this: how much freedom should we give to development teams and data scientists versus how much control should we keep? One approach we see is that organizations go into a model where they create a lot of control: they move a lot of their current processes from on premises out to the cloud, taking the same queueing, the same central-organization ideas, and the same control ideas, and moving them over. What we see is that they start to have some success in the cloud, but the agility is really constrained by those processes. If you always have to open a ticket with someone to create a VPC, you can see how building VPCs as code becomes somewhat constrained. We have a customer working on an IoT application in the life sciences space; they're collecting a huge amount of data, and that application is actually creating and destroying VPCs on an hour-by-hour basis to meet the demands of the different IoT devices out there. So when you have this heavily controlled model, what tends to occur is that a few early adopters get frustrated by the lack of agility and start doing their own thing. You start to see a few applications pop up in this Wild West zone, where they've gone off the reservation: they ignore the IT organization's model, they swipe their own credit card, and they start doing their own thing. Eventually, because they're not IT experts, because they don't know how to effectively do the security-in-the-cloud piece that we were talking about earlier, they will do something wrong, and that will create risk for the organization, and the organization will move farther down into the control scheme, reducing the ability to move forward even further. Then there's the other model that we oftentimes see. Early on, life sciences was a huge early adopter of cloud computing, because of the size of the data, because of the workloads, and because of the intelligence of the data scientists and application teams working in the space. Many of those teams were early adopters in the cloud, and they didn't have insight or guidance from central IT within their organization on which way they should go, so they started building things, and just kind of built them. They would read best-practice white papers and do their best, but they essentially had a huge amount of self-service and self-service control, and they kept working that way. Once those things got big enough, or the visibility of those applications got big enough, the central IT organization came in and started to pull away some of the capabilities that they had. And it doesn't matter, even

if the organization's risk profile has been reduced: to those application teams it doesn't feel good, because they had something and now that something was taken away. The reality is that the things that were taken away can be better managed now, but the feeling left with those teams is that they had a really high degree of control and now they've lost it. So what we want to do is avoid both of those situations, and we think that with a combination of the software that Turbot provides and the services that RCH provides, we can move you quickly to the sweet spot, where you have a really high degree of agility and self-service for the application teams and data science teams, but your enterprise organization feels one hundred percent in control: they know exactly what's going on in the environment, and they can monitor and affect what can be done within it. With that model you have the best of both worlds, and you move quickly; this is what accelerates your movement to the cloud. A huge portion of that ability to move quickly is tied to how you manage accounts within AWS, and there are a lot of different models for managing accounts. Most organizations, when they're getting started, have one or two POCs going on, so someone will work with AWS, get an account set up, maybe connect the networking, but it's essentially a single-tenant AWS account that they're deploying their applications into. Then another project will come along and ask, hey, we've got something we'd like to do as well; can we share that house with you? And in the beginning everything's hunky-dory and everyone's getting along.

Maybe there are one to three different applications all running in the same account, but over time, as more and more workloads come on board, the house starts to look a little messy. That shared environment, that shared house, went from very neat to crowded; maybe the original person who designed the governance within that shared account moves on, and then it's left to the team to manage. Things get messy, someone steps on someone else's toes, something stops working, and then IT comes in and says, hey, we need to create some services around this. So they start building a service model: rather than you doing these things yourselves, why don't we put a process in place to provide that as a service to you? This is what commonly happens, and organizations that get into this space and adopt this model tend to start replicating their on-premises capabilities in the cloud. We at Turbot feel there's a better model, and that's the multi-tenant model: building out core infrastructure so that everyone is lifted, so that application teams and data scientists don't have to know how to build networks or how to implement all the best practices necessary for the security and compliance of their individual accounts. They have all the core services available to them, but they each get their own space to play. They don't have to worry about tripping over anybody else; if they're building some Lambda functions, they don't have to make sure the naming is different from the Lambda functions another team is building, et cetera. This multi-tenant model brings a huge number of benefits, mainly around reducing the blast radius for anything that goes wrong: any individual application in the environment that is compromised, or has a bad change applied to it, is not going to affect everybody else, because there's a logical boundary in place between the different applications. This also helps with AWS account limits: as you grow larger and larger, you have certain limits on what you can do within a single AWS account, and while many of those limits can be increased over time, this model allows you to operate within the initial limits without constraint. It also helps with segregating user access: if you've got different application teams working on their own applications, you don't necessarily want application team A to be able to see, read, or change application B's code. And it really helps with the use of platform services, as I mentioned before: things like Lambda and Fargate, areas where having multiple teams operating in the same space takes a large amount of coordination. By removing the need to coordinate between those teams and giving each team its own separate account, you've removed a lot of that overhead. Finally, in the life sciences space, this gives you an incredible benefit in terms of audit. We've all seen cases in life sciences where an internal or external auditor comes in looking at system A, that system is shared with another system, and through log analysis, or by following a change made within the system, they're able to expand the scope of their audit because the systems share the same account. Having separate accounts, with complete isolation between them, makes sure that your audit files, your log files, your CloudTrail files, everything in the one account being audited, belong to that account, and you never have to explain to an auditor, well, that particular application is not GxP, so it didn't go through this change-control process. Making sure that all of those things are separated and controlled is really key to your success. So what Turbot is designed to do is provide an automation platform that you run within your environment, inside AWS, that actually monitors the security and configuration of your entire cloud and helps you manage those large multi-account models. The design goals for the Turbot platform were really the three key things we've talked about during this talk. The first is that the cloud team itself, the centralized cloud center of excellence or cloud operations team, has visibility and control over the entire environment. Whether you have five AWS accounts or five hundred, that central team can see across all of them, they understand what's going on, and more importantly, they can set rules for what's acceptable and what's not acceptable in your environment. Those rules can be anything from which services you can use (for a GxP application, making sure you're only using AWS services that the organization has approved for GxP use) to making sure that logging is enabled and immutable, so it can't be deleted within those accounts. All of those things ensure that your environment is configured in a way that makes it auditable, secure, and compliant, and that you have change records across all of it. The cloud team gets that capability by implementing Turbot. The application teams, on the other hand, whether that's a software development team or a data scientist, get self-service. They get to use the native cloud tools that they know and love, the ones they want to use every day, and all those great capabilities we heard about from AWS earlier, directly. They can use the AWS console, they can use the CLI, they can use the APIs, they can even use third-party tools like Terraform or Ansible; Turbot is agnostic to all of that. We allow the application teams to have direct access to those cloud services and to configure them directly, and what we do is monitor, on the side, everything they're doing.
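A guardrail loop of that shape, watch each configuration as it changes, evaluate it against centrally defined rules, then alert or repair, can be sketched very simply. This is an illustration of the pattern only, not Turbot's implementation: the rule names, the resource fields, and the `enforce` function are all hypothetical.

```python
# Illustrative only: the shape of a guardrail loop. Each rule inspects a
# resource configuration; a violation either raises an alert or, when
# remediation is enabled, is repaired in place.

RULES = {
    "encryption_enabled": lambda r: r.get("encrypted") is True,
    "not_public":         lambda r: r.get("public") is False,
}

def enforce(resource: dict, remediate: bool = False) -> list:
    """Evaluate a resource against every rule; return the alerts raised."""
    alerts = []
    for name, passes in RULES.items():
        if not passes(resource):
            if remediate and name == "not_public":
                resource["public"] = False  # auto-repair this violation
            else:
                alerts.append(name)
    return alerts

bucket = {"name": "trial-data", "encrypted": True, "public": True}
print(enforce(bucket, remediate=True))  # no alerts: the public flag was repaired
```

The key design point is that the rules live with the central cloud team while the evaluation runs continuously against whatever the application teams build, which is what lets self-service and control coexist.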

What are they doing? What capabilities have they used, what security groups have they created, what S3 buckets have they created? We monitor all of that activity and compare it against the rule set that the centralized cloud team has made, and then a rules engine essentially makes a judgment, in real time, as to whether or not that configuration is allowed within the environment, either raising an alert or actually fixing and repairing it in real time. Because of that, the enterprise fundamentally gets best practices put in place. Having read all of those great AWS white papers about how to configure the environment in a secure and safe way, and bringing in third-party expertise from organizations like NIST, PCI, and CIS that have control-objective schemes that can be applied to your environment to ensure best-practice configuration: all of that is automated by Turbot, so every time you deploy a new account, you're getting all of those best practices implemented in a safe and secure way. Fundamentally, having this infrastructure in place, with a really sound cloud operations team, means you can attack six key areas of compliance within your organization. The first is identity and access management: Turbot provides full federated identity and access management, making sure that your on-premises federated identity, whether that's SAML or AD, is connected to the cloud, so all of the actions your users take in the cloud are mapped back to those federated identities. The second piece is data protection: the core controls around restricting public access to data, making sure that encryption is on for all data, and then, especially important to life sciences, making sure that your backup and data retention are configured correctly within the environment. Third, in your architecture, we make sure that enterprise best practices are created out of the box. This would be things like ensuring that CloudTrail logging is enabled, and ensuring that the AMIs and the custom models for your organization are embedded: basically building that automated architecture immediately on account creation, which is a huge accelerator for each and every project that deploys within the environment. Turbot also has operating-system-level controls, enabling OS hardening and patching, making sure that the individual instances deployed within the cloud are safe and secure themselves. In Turbot we also provide operational and cost controls that give you visibility into budget and cost, ensure that you're tagging all of your resources, and let you set limits on the types of resources people can use within their accounts. And finally, network automation, the biggest piece of the security you need to manage within your environment: we can automate the creation of VPCs in a way that protects the enterprise, connects to your on-premises networks, and follows AWS best practices in that space. All of this works best when you have a great partner like RCH using this software, bringing their own best practices and capabilities, and helping you along that journey to a governed environment. So I'll turn it over to Michael to talk a little bit about how RCH can help you pull all of this together.

Thank you, Dave. Thank you, Andrew. What we're really touching on here is that these technologies are moving everybody to a better place. So how is the cloud helping with this transformation to a better place? Well, why don't we first level-set that we're all experiencing the same challenges, then we'll discuss ways that we can move the needle, and finally I'll share some customer examples and specifics of how we got there. So let's first talk about the landscape. Remember when our world was simpler?
Remember when Bio-IT was simpler? Remember when the business was left to its own devices? You had someone, for example, sitting in a lab. They had a high-end modeling workstation, their own supercomputer at their feet, and resources to help them: people with sysadmin skills, a Linux or UNIX person, people with application experience. Then what happened? As Dave pointed to earlier, IT put standards in place, for a lot of good reasons, and drew those people in, and things got more challenging when they decided to outsource a lot of the support models. Well, the way it used to look when it was simpler doesn't look that way anymore; in fact, the gap is getting wider. This is what business, science, and IT look like now. The question is, why does it look this way? For a variety of reasons, but primarily because the industry changed. New business models were built, those models were driven by new technologies, and new policies and new standards were put in place. Everyone loves that term, right? Standards. Well, there's a reason IT has standards: most of the business has a well-defined set of applications, processes, and workflows, and therefore IT can predict, adjust, and properly manage the business based on those standards. But much of the business and science side doesn't. These groups have a great number of applications, commercial off-the-shelf, open source, and sometimes they write their own code, and their processes change all the time, as do their workflows.

I'm sure you all know the example: Dr. So-and-So in a lab somewhere finds out that a colleague at another life sciences company, or somewhere in academia, has a new process, a new application, or a new tool in place, and she wants it in production tomorrow because it can immediately impact what she's doing. Well, it doesn't work that way, not with the way the IT model is set up. And on top of that, now there's this push to go to the cloud, for logical reasons and some undefined reasons. I think we're aware of all the whys; the question is, what can we do to close the gap? How do we get there safely and securely while keeping everybody happy? How do we embrace all these transformative technologies, close the gap, and, more importantly, enjoy improved collaboration between business and IT? Well, you make informed choices, informed choices based on tested, proven practices and experience with the technology and the people. Next slide, please. Take a look at this quote. The point here is that what was working in the past is not working now, especially with cloud adoption. I love this quote because it applies to not only our company but any group: organizations need to be agile, move fast, and scale with better results. Speed is king when it comes to results, especially in life sciences. Unfortunately, most big companies are not agile, nimble, or flexible, but they can be if they make small, fundamental changes. So how do you become fast and flexible? Well, obviously, as Andrew and Dave were saying, you work with people and partners that have experience and proven results in a very specific area. I'll give an example. If you went to your general practitioner and they diagnosed you and told you that you need to go see a specialist, you would not insist that they perform the specialized treatment themselves. Of course not; you would go visit someone with real, hands-on, practical experience in that very specific area, a specialist, an expert, right? It's like that TV commercial where they say "just okay is not okay": you need to see a specialist. So, for what it's worth, a little background and context. The RCH cloud journey began over ten years ago; we were fortunate enough to get hands-on work with one of the early life sciences adopters of the cloud, Johnson & Johnson. What they did has been well documented. The point is, like Dave and Andrew, we have a lot of experience working in the cloud. So whether you're just exploring or you're trying to create a plan, let's look at ways to optimize the journey. There are several ways to move the needle. Number one, we can be more efficient. The number one challenge for life sciences is data management.

As we all know, there's much more data, in many more formats, than there was a few years ago. I'll give you an example of some exposure we had recently. We were called to a meeting with the CEO of a major pharma company after doing some highly visible and successful work with them. Candidly, we don't get invited to those meetings right out of the chute; we're hands-on, we're in the trenches, working with the scientists and with the IT people who support them. After I explained what we did, and he understood the value, the first question he asked was, "Well, what's your take on analytics?" I thought there were two ways to answer a question like that. One was to discuss at length our experience with analytics and maybe throw out some possible solutions. The other was to admit I didn't know enough about what he meant and ask for clarity. So we chose to ask: "Tell us what you're thinking about analytics; tell us about your data." And that got a smile on his face, which was reassuring. But this is not unique; many organizations are challenged with data. Some are challenged just to locate their data, some have issues categorizing and organizing it, and others have limited insight into the amount of data they'll be consuming: where is it going to come from, inside the organization or outside, and what's the impact on IT to support it? So first, to become more efficient, we need to make it easier for people to get to and work with the data; we need to think more in terms of an ecosystem approach instead of the traditional functional departments. Next, we need to improve efficiencies. We all know that technology costs, especially on-prem costs, are rising, so it's now mandatory to reduce the overhead and improve our utilization of that investment. With the cloud we have much better visibility into those costs, which means we can predict and adjust faster. The cloud is making computing simpler: as Andrew said, you can run anywhere, you can isolate workloads, and you can establish consistency across the business. And remember, science demands size and speed, but it also demands the ability to scale. I'm also here to share with you that some costs during cloud adoption will actually increase over time. For those of you with the experience, you know what I'm talking about: often you're running parallel systems, on-premise and in the cloud, and you're going to see an increase in costs until you achieve that crossover. However, with some specific businesses and workflows, you can actually decrease the adoption time and the subsequent costs very quickly. We've seen success with companies that take a unique approach to the way they traditionally operate, especially with respect to, as we pointed out earlier, scientific

computing needs. We need to think, like Dave touched on, more in terms of an ecosystem approach instead of the traditional functional departments. If you begin by establishing a platform model like Dave recommended with Turbot, one that can easily be applied to many areas, it will make the process much, much simpler. The final way to improve efficiency is to change to something that's repeatable, transportable, and scalable; an example is adoption of a DevOps model, not just for software development but for others in the business, which has proven the most efficient. Which brings us to the second way you can move the needle. The traditional model was that we'd build technology and then adjust the service around it; now services are being designed for the business, which actually makes it easier for IT to manage. Finally, people need to adapt their skills and partner up with those that can help. You need to determine whether that's internal support or external support, and by external support I mean partnering, which is obviously best. As an example: although RCH is a company of technical people with backgrounds in traditional sysadmin capabilities, traditionally hardware and application experience, with few exceptions all of our folks now have cloud skills and/or some science background. The model is changing, and people need to do what they do best. As we touched on earlier, not every process applies to all areas of the business, especially in R&D; we need to identify which ones are necessary and which ones are not. The only asset more important than your data is the people in your company, and we need to make it as easy as possible for them to do the job they're skilled for. So you have to ask: do you develop this internal talent, or do you go outside the organization? Whichever you choose, the decision has to first be made to let people do what they do best: let the scientists do research and the IT people do IT. It rarely works, and I'm sure you've experienced this, it rarely works when those roles are done redundantly, especially on the business side. So, as Andrew alluded to and Dave pointed out directly, the first recommendation we make is to have a dedicated cloud team. It sounds simple, but we don't always see it in practice; this is not a part-time job. You have to have a dedicated cloud team, so if you don't have one, or can't invest in internal resources to do it full time, or you just need to augment the experience you have, utilize partners like AWS, Turbot, or RCH. Again, back to letting people do what they do best. There are amazing new technologies now available, especially in the cloud, so we need to be flexible and we need to empower our users. Between AWS and partners like Turbot, there are great analytics, compute, tool-development, and artificial intelligence tools, storage, and of course security, compliance, and control products. They aren't just shiny new objects to play with but real, functional products, products that are consistently refined and updated, and they can be used now without any wait for them to be engineered for a specific environment. The bottom line is that because of access to these products and proven use cases, they can have an immediate impact on your work. So let's talk about a specific customer. As Dave mentioned, Takeda had embraced AWS and Turbot, to begin with in their R&D areas, and following implementation of AWS and Turbot the customer needed some help to execute on their cloud strategy. Specifically, they needed help maximizing business continuity while demonstrating value beyond just the daily operational work. So the questions were several: how do they improve collaboration while improving operational efficiencies? How do they supplement their IT teams with experienced professionals in a specific area? How do they keep daily operations running and support innovation at the same time? This was the challenge. They then came
to the solution. Well, the solution was to start off the way you should: first, you gain insight from all the stakeholders, IT, the business, management, and even some of the existing vendors. Next was to do just as AWS and Turbot did, by delivering a foundation, this all-important platform from which you can build and demonstrate incremental success. And next, we decided upon projects, projects that were not only visible but had the highest chance of success. So what did we do? We designed and implemented an IT platform that served as a small-molecule registration system and also included a related data warehouse. Next, we performed a database migration, including design, development, and testing of integrated services in AWS for the disparate databases, which were MySQL, Postgres, and Oracle, and this allowed us to move to a support-and-operations model on the Turbot platform. Finally, after demonstrating success, RCH was fortunate to become a go-to partner, not only for the operations work but for engineering, implementing, and then supporting computing at scale, such as HPC, big data, and analytics projects. The results? Well, both IT and the business deemed it a success. Why was it a success? It was a win-win; everybody was happy. The business uses the platform and it continues to evolve. Think about that: how often in our IT experience does a project wrap up where the business continues to use the platform, supported by IT? So now they continue to use a platform that continues to evolve. IT found it a simpler, much more elegant platform to support than a traditional on-premise solution, and the process has become much easier to replicate. So together we achieved the goal of total transformation, and the results were what we all wanted: faster, better, scalable, supported, and truly innovative for both IT and the business. All right, so what's our secret sauce, what's our secret process? A lot of it's not really a
secret. I'm going to share what we do. Number one is thorough discovery and objective recommendations. Think about that: we're proud of the fact that we can provide objective recommendations, because with the exception of a few technologies we're vendor-agnostic, so our customers find they can select the best tool to fit the problem rather than the other way around. Second, we align. We all have different needs, but if we take the time to gather stakeholders, determine needs, and align on goals, we're going to have a much better chance of success. Third, we build for the present and future successes of the company; this isn't a one-off project, and in addition to scalable technologies you need scalable partners that grow with you. And finally, once you achieve success and optimize, you continually refine: wash, rinse, repeat. This may not seem unique to some of you, but the difference is less about the process and more about the people in the company. Again, I can't emphasize enough the need for a group experienced in your specific area of life sciences; remember, there are generalists and there are specialists, regardless of the size of the company. So let's wrap up with some critical takeaways. Number one is to help you realize the big company and individual goals: lower total cost, flexibility, security, better management, and cost controls. To get that total transformation you need to adopt the new operating model, and that new operating model is what we've talked about in the cloud. As Dave shared, you need to implement the proper platform for controls and predictability; that will ensure better management and best practices. And then finally, again, it's like building a house: you need buy-in from the stakeholders, the owners, and then you can put in the proper foundation. Once you have this in place, the proper model and the right guardrails, the entire workforce can adjust and easily become much more predictable. And finally, remember Psychology
101: the key to achieving something big, or getting someone or a team to do something really big, is to start small and get an early success. Well, in our world you can begin by determining which applications or which workflows can most easily be bettered in the cloud, and it just so happens that a lot of the life sciences early discovery and development workflows fit that model. So which applications and which workflows have the least impact on operations and the greatest impact on the business? It's not just about moving from multiple on-premise silos of data and systems; it's about making the business more effective. Your ability to adopt cloud best practices lets you get more from your data and make people more productive, and finally, you can soon realize the potential of all these cloud solutions: the reality of a completely, fully integrated, collaborative environment. So I'd like to thank you on behalf of Andrew and Dave today. If you have any questions, you can contact us at the information below. Thank you all.

If you need any assistance, let us know in our Slack community #guardrails channel. If you are new to Turbot, connect with us to learn more!