1. Different ways to reduce EBS Cost | AWS Cost Optimization | Cloud Cost | EBS Optimization | EBS Type
2. AWS Pricing Calculator & Cost Estimate Tutorial
3. Different Ways to Reduce AWS cost | Cost Optimization | EBS | NAT | EC2 | RDS | RedShift | RI | S3
4. AWS re:Invent 2020: Cost-optimize your enterprise workloads with Amazon EBS
5. AWS re:Invent 2022 – Optimize price and performance with Amazon EBS (STG204)
6. How to resize a volume in AWS EC2
7. How to increase your AWS EC2 Instance Storage (EBS Volumes) with no downtime?

Different ways to reduce EBS Cost | AWS Cost Optimization | Cloud Cost | EBS Optimization | EBS Type

Hey guys, welcome back to Cloud Deep Dive. In today's video we will discuss how you can reduce the EBS cost on your AWS monthly bills. Before we start talking about how you can reduce EBS cost, let's first understand what EBS is, its different types, and its pricing.

EBS is cloud-based block storage provided by AWS that is best used for storing persistent data. EBS volumes are attached to EC2 instances and let you keep data persistently on a file system even after you shut down your EC2 instance: even if you stop the instance, the data will still be saved on the EBS volume.

Now that we understand what EBS is, let's talk about the different types of EBS volumes. There are two families: hard disk drive (HDD) and solid state drive (SSD). Within HDD there are two categories, Throughput Optimized and Cold HDD, and within SSD you get General Purpose SSD and Provisioned IOPS SSD. Each of these EBS volume types has its own use case; based on your needs and requirements, you can choose which type of EBS volume you would like to use with your EC2 instance.

Now let's talk about how pricing works for EBS. To understand EBS pricing, I opened the pricing and calculator pages from the AWS documentation. AWS charges for provisioned storage, IOPS, and throughput based on the EBS type, and these prices may vary by region. As you can see from the AWS pricing page, for gp3 volumes AWS charges for storage, IOPS, and throughput. For gp2 volumes you are charged only for the storage, not for IOPS or throughput. For io2 volumes you are charged for storage and IOPS, and there are three tiers within the IOPS pricing: the first 32,000 IOPS are charged at $0.065 per provisioned IOPS-month, the next 32,000 at $0.046, and everything above 64,000 at $0.032 per provisioned IOPS-month. io1 is billed on two factors: storage and provisioned IOPS per month. Throughput Optimized HDD and Cold HDD are both charged only on provisioned storage.

So let's go to the calculator and see how the pricing works. I am in the Ohio region, and you can see I have selected one instance; AWS assumes that one instance runs 730 hours in a month, which is why we have the value 730 hours per month here. Now let's calculate the price for gp2 with 30 GB of storage, and I don't want any snapshots. The calculation is straightforward: at 10 cents per GB-month, for 30 GB of storage you will be paying $3 per month. If I instead select an io1 volume, you also have to provide how many IOPS you will be using, because io1 is charged on both storage and IOPS. Let's say I will be using 1,000 IOPS. You can see in the calculation that you are charged for the storage, which is $3.75 per month for 30 GB, and on top of that you are charged for the IOPS: 1,000 IOPS for one month at $0.065 per IOPS comes to $65. So you'll be paying $65 for the IOPS, and a total of $68.75 USD per month for a 30 GB EBS volume with 1,000 provisioned IOPS. That's how the price is calculated for EBS volumes.

Now that we understand how pricing works for EBS volumes, let's see the different ways you can reduce the cost. The first thing you should look for is to get rid of orphaned EBS volumes. What are orphaned EBS volumes? These are unused EBS volumes which are not attached to any EC2 instance and are costing you money. A great first step to save money is to get rid of these volumes.

So what approaches can you take to get rid of these orphaned EBS volumes? One way is to straight away go ahead and delete these volumes, if you are sure that the data stored on them is no longer required. Or you can take a safer approach, where you first take a snapshot and then delete the volume; snapshots are cheaper compared to EBS volumes.

Now the question comes: in big organizations there are many accounts, with workloads executing in different regions. How can we identify all these volumes across all these accounts and regions? One suggestion I can make here is to use AWS Config. AWS Config has a rule that can list all the unused (available) volumes, that is, all volumes which are not attached to an EC2 instance. You can then use an AWS Config aggregator to collect that list from all accounts in all regions, and take remediation action based on the two approaches we just discussed.

So the next question is: why do we end up with these orphaned EBS volumes? To demonstrate, I logged into my AWS console, went to Services and then EC2, and tried to launch a new EC2 instance. Let's select this AMI, choose t2.micro, which is free tier, and leave the defaults on this page. By default AWS will add one root volume of 8 GB (or sized based on your instance), and there is a checkbox called "Delete on termination". What does that mean? It means that when you terminate this EC2 instance, this EBS volume will be deleted automatically. If you uncheck the box, then when you terminate the instance the volume won't be deleted or cleaned up automatically; you have to go and delete those volumes yourself. So let's keep that box checked and add another volume. Whenever you add a new volume in addition to your root volume, the "Delete on termination" checkbox is not checked by default. That means that whenever you terminate this EC2 instance, only the root volume will be deleted automatically; the other two volumes attached to it won't be deleted, and they will be lying in your account as orphaned volumes. So you have to make sure that either you enforce a policy in your organization that everyone checks "Delete on termination" (based on your use case), or you use one of the approaches we just discussed to get rid of these orphaned EBS volumes.

The next way of saving money is by finding all the stopped EC2 instances. Now you might be wondering why we are talking about stopped EC2 instances when the topic here is EBS volumes. The answer is that AWS does charge you for EBS volumes even when your EC2 instances are in the stopped state. Whenever you launch an EC2 instance, you get a root volume attached to it, and when you stop the instance, AWS stops charging you for the instance but keeps charging you for the EBS volume, because the volume is still attached to your EC2 instance and is not deleted. It will be deleted only when you terminate the instance, so these volumes will keep being charged until you terminate those EC2 instances. If you have such stopped instances in your organization that you no longer need, you can go ahead and terminate them and reduce the wasted spend on EBS volumes for those instances. It happens in organizations that developers launch EC2 instances and then forget to terminate them; because they're not running, we sometimes forget that EBS volumes are still attached to them and still costing money. So that's another way you can find EC2 instances and start saving some money.

Now the question comes: how can I identify
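The pricing arithmetic walked through above can be sketched as a small function. The rates below are the Ohio (us-east-2) figures quoted in the video and will vary by region, so treat them as illustrative constants, not current prices.

```python
# Sketch of the EBS pricing arithmetic from the video.
# Rates are the us-east-2 figures quoted above and may differ
# in your region; check the AWS pricing page for current values.

GB_MONTH_IO1 = 0.125     # io1: storage component, $/GB-month
IOPS_MONTH_IO1 = 0.065   # io1: flat per-IOPS rate, $/IOPS-month

# io2 tiered IOPS rates: (tier ceiling in IOPS, $/IOPS-month)
IO2_TIERS = [(32_000, 0.065), (64_000, 0.046), (float("inf"), 0.032)]

def io1_monthly_cost(size_gb: float, iops: int) -> float:
    """io1 is billed on storage plus a flat per-IOPS charge."""
    return size_gb * GB_MONTH_IO1 + iops * IOPS_MONTH_IO1

def io2_iops_monthly_cost(iops: int) -> float:
    """Sum the per-IOPS charge across the three io2 tiers."""
    cost, floor = 0.0, 0
    for ceiling, rate in IO2_TIERS:
        in_tier = max(0, min(iops, ceiling) - floor)
        cost += in_tier * rate
        floor = ceiling
    return cost

# The worked example from the video: 30 GB io1 with 1,000 IOPS.
print(round(io1_monthly_cost(30, 1000), 2))  # 68.75 ($3.75 storage + $65 IOPS)
```

The tier loop clips the provisioned IOPS to each tier's ceiling, charges only the slice that falls inside the tier, and moves the floor up, which is the same "first 32,000, next 32,000, remainder" breakdown described above.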

AWS Pricing Calculator & Cost Estimate Tutorial

Calculating costs for your AWS infrastructure can be a hairy ordeal, but guess what, we have a calculator to help you out. I'm Justin Dennison and I'm going to show you what the pros know. With the plethora of services offered by AWS and the many ways you can combine them, sometimes you have to be careful, because that money will run out before the end of the month. Luckily we have the AWS calculator to help you understand where your money's going, and maybe even build a proposal. Let me just show you the calculator.

If you take a look at my screen and head on over to calculator.aws (pretty easy to remember), we end up at the AWS Pricing Calculator. This will allow you to combine multiple services into a price estimate based on what you think your usage patterns are. Now remember, AWS is a variable-expense model because you pay for what you use, so if you don't have a good estimate, or you're way off base, your price could be higher or lower; but this at least gets you in the ballpark.

The way to go about this is: I just want to create an estimate. We come to the next page, and it's funny because it says step one, step two; there are two steps to this. They can get a little complicated, but let's set the scene. Let's say I need to host some static images, some static files, for website hosting. I tend to turn toward S3 for that, at least initially, if it's just a very simple website; maybe we need to upgrade later, but let's start there.

Now I'm going to search for Amazon S3. When you do, though, be careful, because there are many services, such as Amazon Athena, FSx, and Redshift, that utilize the S3 service but are not S3 itself, so make sure you pick the right one. In this case it's fairly self-explanatory (it can get a little iffy if you're not careful), but right here I have Amazon Simple Storage Service, and I'm just going to click Configure.

When you get to this page, what you're seeing is dictated by the service, because it's going to ask you questions about how you're using the service and what components affect its price. For Amazon S3, we're going to pick the region; sometimes there are price differences based on region for storage. I'm going to use US East (Northern Virginia). I could pick any of this multitude, and notice even GovCloud, which is kind of its own little segmented thing, is represented here, so if you're doing a government proposal or something of that nature you can get an idea. But I'm going to stay with US East.

Then it asks which services or features of S3 you're going to use; many of these services have a multitude of features. Here I'll say I want storage classes: I'm going to use S3 Standard. Let's just go with that initially; it's fairly inexpensive and provides nice performance. I'm also going to have data transfer, because people are going to be receiving those files: "hey, can you give me that image file, can you give me that other image file, some text". So those are my two. If I wanted to do some archiving, I might use S3 Glacier and add that on; if I wanted deep archival (I need to keep this around, but I need to keep it inexpensive), how expensive would that be? I can just click that button, and it adjusts the fields I fill in for the pricing calculator.

Going down through here, because we selected S3 Standard, it asks how much storage per month I'm going to use. Let's just say I have a lot of videos and files, say 30 gigabytes. They're not going to be updated too awfully much initially. I have some LIST requests; let's go with a thousand. What about GET, SELECT, and all other requests? (SELECT being "hey, give me a portion of that information", which is a subfeature of S3.) Let's just go with a million. Is that the right number? Yep, a million, there we go. Then be careful here, because sometimes you may misread what it's asking: data returned by S3 Select, data scanned by S3 Select. I'm not using Select here, so I'm going to enter zero and zero. For the most part, fill in all these fields and don't let anything default, because you may get "hey, that's not what I expected, I forgot about that", or it may adjust pricing.

Notice it says it's going to be about a dollar. The actual storage isn't that much: 30 gigabytes is a little over a dollar, and that's even with a million GET requests. Where you get iffy, though, is data transfer. What's interesting about data transfer is that you have inbound data going into AWS infrastructure and data coming out of AWS infrastructure. For data transferred in from the internet or all the regions, notice it says free. I'll say, all right, I'm going to upload those, and careful there: it defaults to terabytes. I'm going to change that to gigabytes, because it's going to be roughly a gigabyte per month, maybe. And don't do what I just did and hit Back, because now you have to fill in all of these again; but we can do that very quickly: 30, a thousand, one million, zero, zero (the trackpad will get you every now and again), and then from the internet we'll say one gigabyte.

Scrolling down: if the data is going out to the internet, then okay, we have to pay some money. If it's going to CloudFront, which is a CDN service, it's free. If it's going to other regions, except for Ohio, there are some charges for transferring between regions from Northern Virginia. So I'll say it's going out to the internet, and because of how many people are going to interact with this (a million requests), you would really sit down and ask: what's the average request size, how many requests do I have, and get a ballpark there, at least an estimate. Let's say three terabytes per month.

Zoom out, and you can get the individual calculations here, and you can see a data transfer estimate: a monthly cost of $276.39. Okay, data transfer is where they got us. A total of $277.48.

If you're going to use other services, though, you don't stop there: you go "add that to my estimate", and it says, okay, here is your built-up estimate. You're using S3; here are the costs you started with. What if I wanted to add an EC2 instance? Why add EC2? Well, what if I have to add a database? We'll add a database to the estimate, and not only does it give you monthly cost: if you're doing reservations, like Reserved Instances for EC2, you get the upfront cost as well as your first 12 months of total costs. $3,400 seems like a lot, but if you have some kind of monetization (it's for your business), it may be a worthwhile kind of thing. If you want an additional service, you can add a service as such, and we'll go back to my estimate. You can add support (how do I want AWS to support me, because some of those plans are paid for), and I'm just going to go down here and cancel. I can group my services, so I can group these based on the respective pieces where they belong. Then under Actions I can edit, but I can also export. If I export, notice it says it provides only an estimate of AWS fees, and then it asks what you want: it will give me a CSV of these estimates, which will open up in your browser. I can also Save and Share (you have to agree to that type of thing), and it will give you a URL to send to someone, and they get this exact same estimate.

I tell you what, sometimes it gets a little iffy; I'm like, how much money am I going to spend? But the AWS calculator takes away some of that stress. You do have some kind of guessing to do, but it'll give you an overall feel. You can put this as part of your proposal, you can keep it for your own records, and you'll be good to go. And that's why we have the AWS calculator.
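The numbers the calculator produced can be reproduced by hand. Here is a minimal sketch; all rates are assumptions based on the us-east-1 S3 Standard prices quoted in the walkthrough ($0.023/GB-month storage, $0.005 per 1,000 LIST requests, $0.0004 per 1,000 GET requests, $0.09/GB internet egress with the first GB free), so check the calculator itself for current figures.

```python
# Rough reproduction of the S3 estimate from the walkthrough.
# All rates are assumed us-east-1 S3 Standard prices at the time
# of recording; the AWS Pricing Calculator has current ones.

STORAGE_GB_MONTH = 0.023   # S3 Standard storage, $/GB-month
PER_1K_LIST = 0.005        # LIST (and PUT/COPY/POST) requests
PER_1K_GET = 0.0004        # GET and all other requests
EGRESS_PER_GB = 0.09       # data transfer out to the internet
FREE_EGRESS_GB = 1         # first GB out per month is free

def s3_monthly_estimate(storage_gb, list_reqs, get_reqs, egress_gb):
    storage = storage_gb * STORAGE_GB_MONTH
    requests = (list_reqs / 1000) * PER_1K_LIST + (get_reqs / 1000) * PER_1K_GET
    egress = max(0, egress_gb - FREE_EGRESS_GB) * EGRESS_PER_GB
    return storage + requests + egress

# 30 GB stored, 1,000 LISTs, 1,000,000 GETs, 3 TB (3 * 1024 GB) out per month.
total = s3_monthly_estimate(30, 1_000, 1_000_000, 3 * 1024)
print(round(total, 2))  # egress alone is 3,071 GB * $0.09 = $276.39 of this
```

Storage plus requests come to roughly $1.10, which matches the "a little over a dollar" observation; the 3 TB of egress dominates, exactly as the video points out.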


Different Ways to Reduce AWS cost | Cost Optimization | EBS | NAT | EC2 | RDS | RedShift | RI | S3

Hey guys, welcome to another video of Cloud Deep Dive. In today's topic we will discuss how you can reduce your AWS spend. You know, today almost all organizations are facing challenges with spikes in their bills, and it's not because of high AWS product cost; AWS is doing its best to keep its prices competitive. But with the rapid growth of our organizations, and more and more workloads migrating to the cloud, we often forget to work on cost optimization and end up spending more money. So in today's video we will discuss different tools and ways you can use to reduce your AWS cost.

Before we start talking about these different ways to reduce cost, let's first talk about how you can find out the cost of the AWS services you're consuming in your organization. One of the tools AWS provides is AWS Cost Explorer. It helps you view and analyze your AWS cost and usage. To understand it better, let's log in to the AWS console and see how it can help you.

So guys, I have logged in to my AWS account, and we will go to AWS Cost Explorer, which you can find under All Services > AWS Cost Management > AWS Cost Explorer. On that screen you go to Cost Explorer, and you can see it shows your spend month by month across all the services you are paying for. I like their stacked-bar option, so I'll choose that one to get a better view. Here you can filter how many months you want to see; right now it's showing only six months, you can go for one year, and you can also view the current year, but we'll keep it at six months. By default it groups all your spend by service, and you can see that for the month of June I spent something on AppStream and on Route 53, paid some tax, spent something on Config, and so on. Similarly, for each month it will tell you how much you are spending on a particular service. If you would like, you can choose a particular service from the filters and see how much you're spending on that particular service in a given month.

They have other options as well. You can group by account: if your organization has 15 different accounts, you can go to your master payer account and group your spend by linked account, and it will show you which account is spending how much money. And if you want to drill down further and see what services a particular account is consuming, you can set the linked-account filter for that account and then group by service, and find out which particular service in that account is costing you more money. So now you understand how to leverage Cost Explorer to find out which service is costing you more money and under which account; based on these findings, you can take further action and reduce your cost.

Now let's discuss the different ways you can use to reduce your costs. So guys, I have listed here ten different ways you can use to reduce your AWS cost.

The first one: you can reduce cost by not paying for idle instances, meaning your EC2 and RDS instances. What happens in our organizations is that whenever we work on a project, we launch instances in our lower environments, like development or QA, and we forget to terminate or stop these instances when they are not required: when we go home in the evening, over the weekend, or during holidays, those instances are still running, even though we only required them for testing purposes. Because we forget to stop or terminate them, they are still running and still costing you money, and in a big organization you may find many such instances running in your non-production environments, like development, where they just keep running during off-office hours and you are paying extra money for all of them. So it is advisable to terminate or stop these instances when they are not in use. And when I say terminate or stop: it is based on your requirement and use case. If you know the underlying data is not required, you are done with it, you can just go ahead and terminate those resources, so that you won't pay extra for the EBS volumes. But if you need that underlying data, at least stop those instances during that time; you will still keep your EBS volume attached, and whenever you need it you can just restart that particular instance. That way you will at least save paying for the compute for that EC2 instance. So that is one way you can reduce your cost, by either stopping or terminating your EC2 instances.

Now the question comes: how can you implement this in your organization? One way I can think of is to use AWS Instance Scheduler for this purpose. What AWS Instance Scheduler does is give you the capability to create custom start and stop schedules for your EC2 and RDS instances. That means you can configure, say, "I want to start these EC2 instances on Monday morning at 6:00 AM, and I want to stop these instances at 6:00 PM in the evening." During office hours the instances keep running, but the moment it's off hours, like after 6 PM, when nobody is working on those instances, you can bring them down, so that during the night you are not paying that cost. Similarly, you can create these schedules for your different time zones and different timings and save money. So that is one of the approaches you can take.

Another way to reduce cost is to stop paying for idle Redshift clusters. Like we discussed for EC2 and RDS, you might have Redshift clusters provisioned in your lower environments, like development or your QA environment, which you launched for testing purposes and keep running during evenings, weekends, and holidays, and you end up paying extra money for those for no reason. What you can do is save some money by pausing these clusters during off hours. Similar to what we discussed for EC2 and RDS with AWS Instance Scheduler, for Redshift we have a feature called "pause cluster". When you use it, it will pause your cluster so that you won't be paying any money for the compute; while it is paused, you pay only for the underlying data warehouse storage. It's like EC2: when we stop an instance, we pay only for the EBS volume, not for the compute. Similarly with Redshift clusters, when you pause a cluster you pay only for the underlying storage, not for the compute. And if you feel that a cluster is no longer required, you can go ahead and terminate it as well, and that way you will save the storage money too. Another thing: similar to AWS Instance Scheduler, Redshift also provides pause scheduling, so you can schedule at what time you want to pause your clusters and at what time you want to resume them. It has that capability as well; you can create schedules there by which you pause after your office hours, maybe 5 PM or whatever suits you.

The next way to reduce your cost is by enabling S3 Intelligent-Tiering. As you know, S3 offers a range of storage classes which are designed for different use cases. We have the Standard storage class, which you can use for storing data that is frequently accessed. Then we have the infrequent-access storage classes, like Standard-IA or One Zone-IA, where you can store data that is less frequently accessed. And then you have Glacier and Glacier Deep Archive, where you can store data for the longer term that you know you're not going to access, or might require only once in a while. For those kinds of data
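The Instance Scheduler idea described earlier in this section boils down to a running/stopped decision per instance per point in time. The sketch below is illustrative only, not the AWS solution's actual implementation; the 6 AM to 6 PM, Monday to Friday window is the example given in the video, and the real Instance Scheduler drives EC2/RDS start and stop calls from schedules it stores, while here we only model the decision.

```python
from datetime import datetime

# Illustrative office-hours predicate in the spirit of AWS Instance
# Scheduler: instances run 06:00-18:00 Monday through Friday and are
# stopped otherwise. Window values are the example from the video.

START_HOUR, STOP_HOUR = 6, 18   # 6 AM .. 6 PM
WORKDAYS = range(0, 5)          # Monday=0 .. Friday=4

def should_be_running(now: datetime) -> bool:
    """True when the schedule says the instance should be up."""
    return now.weekday() in WORKDAYS and START_HOUR <= now.hour < STOP_HOUR

print(should_be_running(datetime(2024, 1, 8, 10, 0)))  # Monday 10 AM -> True
print(should_be_running(datetime(2024, 1, 8, 20, 0)))  # Monday 8 PM  -> False
print(should_be_running(datetime(2024, 1, 6, 10, 0)))  # Saturday     -> False
```

A scheduler job would evaluate this predicate periodically (for example from a cron-triggered Lambda) and issue the corresponding stop or start call when the desired state differs from the current one.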


AWS re:Invent 2020: Cost-optimize your enterprise workloads with Amazon EBS

my name is ashish palleker,im a product manager with the ebs team,and i,take care of our snapshots business uh,welcome to this reinvent session,where we will be covering how to cost,optimize your enterprise workloads with,amazon ebs,heres our agenda for this session a,very brief overview of ebs,then well talk about changes in our,stream volumes how to manage costs for,ebs snapshots and armies with data,lifecycle manager,cover a new instance type r5b,talk about tiered iops pricing with io2,and io2 block express,use of elastic volumes to reduce your,costs,and and finally the integration of ebs,with the aws compute optimizer,lets get started so when we think of,the block storage portfolio,uh within aws we really look at four,different things,we look at the ebs volumes,and and what our volume types are we,think about snapshots,which are copies of ebs volumes that are,stored on amazon s3,we think about instant storage which is,your temporary block storage that is,attached to the ec2 instance,and then we think about data services,for this session,we will focus on the first two,specifically,the volumes and the and the snapshots,pieces,when we talk to customers about why they,choose ebs,uh it really boils down to a few key,things,customers love the performance that ebs,volumes give them for any workload,they get reliability in terms of,availability and durability,and they get ease of use in terms of,adding capacity,changing volume types and in general,being flexible,and they get this on the core foundation,of,virtually unlimited scale without,trading of security,or cost effectiveness,and this is the ebs volume portfolio we,have gp2 gp3,we have io2 io1 we have st1,and sc1 and as you can see across these,volume types,we support a wide array of workloads,from nosql databases,think cassandra mongodb couchdb to,relational databases,think mysql sql server uh sap,postgres oracle uh to big data analytics,like kafka splunk data warehousing uh,and all the way to file and media uh 
for,sips and nfs uh file storage,for transcoding for encoding for,rendering,but this session is all about saving you,money,and so there are changes across each of,these volume types,that we have introduced that reduce your,overall tco,so lets get to it start with our stream,volumes,so when we think about stream volumes,lets start with,sc1 which is our cold hdd volume,it is designed really for,sequential throughput workloads such as,logging backup copies,as a retention tier really designed for,a baseline throughput of up to 192,megabytes per second,with a burst that gets you up to 250,megabytes per second,uh its capacity ranges from 500 to 16,terabytes,uh it is our lowest cost volume type,offering,so customers came to us about cold hdd,and,said can you make it more cost effective,because uh it allows it would allow us,to store data for longer,uh on on on ebs and thats precisely,what weve done,so as c1 used to be priced at 2.5 cents,a gigabyte month,just in november we have dropped prices,by,40 to 1.5 cents right so if you were you,were paying,x youre now paying 40 less,and that makes us ask the question have,you considered,uh tearing your colder workloads to sc1,because if you arent,this might be a fantastic way for you to,optimize,some of your workloads,so now lets take a look at our other,http based volume which is st1,we call it throughput optimize it comes,with a baseline iops,and burst high ups that take it up to,500 megabytes per second,from a capacity standpoint uh it also,goes much like a c1,from 500 gb to 16 terabytes and it is,really designed,for large block high throughput,sequential workloads,so when we looked at st1 and sc1 one of,the other areas that customers gave us,feedback on,was we love what theyre doing but man,we have workloads,that need far less than 500 gb can you,look at reducing,the minimum size of these volumes,because that would save us money,and thats exactly what weve done so,new at reinvent,what weve done is started with the,minimum 
size of 500 gb which which is,what it used to be,and both for sd1 and sc1 we have reduced,that size,to 125 gb thats a 75,reduction in the minimum size,how does it make a difference to you so,heres an example,lets say in this case a customer needs,100 gb,of cold hdd volume for their workload,prior to these changes they would,provision,500 gb at 0.025,dollars per gb month which meant that,within that month,for that volume theyd be paying 12 and,a half dollars,with this change what they can do is,now provision 125 gb since thats the,new minimum size,and they would be paying a cent and a,half per,gb month which means that their total,cost,is 1.875 dollars that is 85,lower the combination of these two,changes,mean that we we would see more customers,looking at these volumes as ways to,tier their storage within ebs and and we,are excited,uh how customers will use our st1 and,sc1 volumes,next we take a look at how you can,manage your snapshots and armies with,amazon data lifecycle manager,lets start with the core problem,statement customers used to take,snapshots all the time,and and they came to us and said we,really dont have a means,to take snapshots and and define how,long to keep those snapshots,and the net result was customers uh,snapshots would proliferate,they would have copies and really didnt,have a good way to manage their costs,so about two years ago we launched,amazon data lifecycle manager,and what it does is automates the,snapshot lifecycle management,you can set policies that allow you,to either at the volume level or at the,ec2 instance,level take regular snapshots and retain,copies of those snapshots,and fully integrates with cloud,formation,so if you are looking at snapshots and,snapshot costs,you can set policies on when snapshots,are taken,and how many snapshots to keep in a,lineage and we use cost allocation tag,to keep tracks of snapshots,heres heres the screenshot of how you,would do it youd create a lifecycle,policy,you would then set a policy 
schedule, in this case one every two hours, and then we keep 24 copies around, and that defines how many snapshots are retained. Customers have used Data Lifecycle Manager for a lot of different use cases, and we've been busy making further improvements to DLM. We initially started with 12-hour and 24-hour cycles; we've now gone down to one hour, and customers can now use cron expressions. Policy creation is simpler, and customers can store multiple schedules within a single policy. We now allow time-based retention, so customers can do daily, weekly, and monthly schedules on their snapshots, and you can now also copy snapshots across regions with DLM policies. All of this gives you a rich tapestry of options for storing your snapshots and managing your costs.

But we didn't stop there. Customers came to us and said, "I love that you're managing snapshots, but what about Amazon Machine Images?" It turns out that Amazon Machine Images (think of them as the images used to boot EC2 instances; if you're booting an EC2 instance, you'd use an AMI) are backed by snapshots, and customers were creating those AMIs much like they were creating snapshots. They were keeping copies of those AMIs, especially as their build images changed: they would create new AMIs, and so they had proliferation challenges with AMIs as well. So the team took a look at that problem and essentially incorporated AMI lifecycle management within DLM too. You can now identify the EC2 instances that you need to back up, automate the retention and cleanup of AMIs, control costs by deleting the snapshots of deregistered AMIs, and retain backups for compliance and auditing, and all of this is free to use. So again, if managing your snapshot and AMI costs is a challenge, think about using Amazon Data Lifecycle Manager for your workloads. But we didn't stop there; now let's take a holistic look from the instan
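The sc1 arithmetic in the example above can be checked with a quick shell calculation. The prices are the ones quoted in the talk; actual prices vary by region, so check the AWS pricing page for current rates:

```shell
# Recompute the sc1 example: old 500 GB minimum at $0.025/GB-month
# versus new 125 GB minimum at $0.015/GB-month (prices as quoted in the talk).
old_cost=$(awk 'BEGIN { printf "%.3f", 500 * 0.025 }')
new_cost=$(awk 'BEGIN { printf "%.3f", 125 * 0.015 }')
savings=$(awk -v o="$old_cost" -v n="$new_cost" 'BEGIN { printf "%.0f", (o - n) / o * 100 }')
echo "before: \$$old_cost/month  after: \$$new_cost/month  (${savings}% lower)"
```

This reproduces the numbers from the talk: $12.50 per month before, $1.875 after, an 85% reduction.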

AWS re:Invent 2022 – Optimize price and performance with Amazon EBS (STG204)

– All right, hello. How's everyone doing today? Excellent. I'll take "excellent." Welcome to re:Invent 22. This is actually my first time here, which is super exciting; there's so much energy. Anyone else, first time here? A few, wow, okay. (Prarthana chuckles) Anyone else here for the second time? Okay. Third time? Okay, wow. All right, welcome. So my name's Andrew, I'm a Product Manager here at AWS, and I'm joined today by Prarthana.

– I'm Prarthana, I am a Software Development Manager at AWS.

– Yeah, and we're here to talk about optimizing price and performance with Amazon EBS, so I hope you're in the right place. We're going to start with an overview of storage options in AWS; there are quite a few. Then we'll move on to EBS specifically: we'll talk about the portfolio, what options are there, and how you can think about optimizing. We'll also talk about some customer examples where they've taken a workload, an app, and found ways to save costs or improve performance. Then I'll hand it off to Prarthana, and she'll talk in detail about different workloads and apps.

So, in AWS we have object storage, we have file storage, and there's block. Those are the three primary storage types. Around those we have data services, and we have hybrid and edge, but we're not going to talk about those today. With object storage and file, you access the data through a protocol that uses metadata. When you access block, the difference is that you go directly to the data, the ones and zeros. It's very fast, it's very performant. Object storage is a great place to put data that you're going to use for things like mobile apps, data that is not going to change very often, or to build data lakes. File storage has a lot of options; it's everything from, you know, scale-out, where if we wanted everyone here to have access to the same files, that'd be a great place to start. But we're going to talk about block today. And if you're not
familiar, block is everywhere. In fact, it's in your pocket if you have a phone there; it's in my watch. Block is really a ubiquitous storage type; in fact, file and object are built using block. So it's pretty cool: it's the center of the slide, it's the center of the storage universe, right?

So, we're going to go into detail here on what Amazon EBS is. We know block storage is performant, so what is EBS? EBS is an easy-to-use, secure, high-performance block storage service. Instead of managing the hardware, with EBS you think of things like volumes: a logical device instead of a physical device. And you use it with EC2; in fact, if you use EC2, you already use EBS. So, is anyone here familiar with EC2, use EC2? A few, okay. Yeah, so you already use EBS, it's great.

There are two main types of EBS storage that we should talk about. The first is instance store, and this is just like your phone storage: it's physical storage attached to the EC2 instance, so it's not separable. If the instance goes away, the storage goes away; if you terminate the instance, the storage is gone. Then there's network-attached EBS, and that's what we're going to talk about here. Network-attached EBS is built on a massively distributed storage system, and this lets you get a lot of performance (the block storage performance you expect), but your storage also lives outside of the instance lifecycle. So you can move the storage around, you can switch instance types, and you still have access to your block storage.

So we're going to focus on network-attached EBS today, the managed volumes. In there we have two main families that we're going to talk about. One is SSD-backed, so you can expect the performance you would get from an SSD device, and the other is HDD, which gives you the price-performance of HDD hardware. Both of these are flexible: you can use Elastic Volumes, a data service, to switch between types, switch
performance characteristics. You can create these via the API, the SDK, or the Management Console. And there are backups available: you can take snapshots that go to S3, if you have data retention policies you need to meet or you want to have the data available later.

So, I'm going to take a second here. There's a ton of stuff we've done since we launched in 2008, but here we're going to talk about a few key milestones that relate to optimizing price and performance. In 2008 we launched with Standard and snapshots. Standard was our first volume type, an HDD volume type. It offered 100 input/output operations per second; that's what we call IOPS, and that really is a measure of performance. And they were best-effort IOPS, so sometimes you might not get them. So that was where we started.

In 2012 we launched our first SSD volume type, and this was called Provisioned IOPS. We chose the name to be very descriptive: you can provision IOPS separately from storage, and you're not getting best-effort IOPS, you're getting provisioned IOPS, so you're going to get the performance you'd expect from those IOPS. Along with that we actually updated EC2. So keep in mind, while we're thinking about optimizing EBS, you also want to think about EC2, because the relationship there is tightly coupled. With EBS-optimized instances, what we did is give EBS a dedicated network attachment, so you can separate your storage network from your EC2 front-end network; that's something Prarthana and I will talk about a little later.

So now we have Standard, spinning disk, with the price-performance of spinning disk, and we have Provisioned IOPS SSD, which is SSD where you can provision performance; it's really reliable, it's a great volume type. In 2014, we did a General Purpose type, and again with the naming we were trying to be very descriptive. It works for the majority of workloads, and we see the majority of customers start
here. This is where you should start: it gives you the balanced performance of the SSD portfolio, the low-latency, single-digit-millisecond response of SSD, at a price point between the HDD and Provisioned IOPS types. So now we have the SSD family with two volume types, so you can choose which one works for your workload.

Then in 2016 we had a new generation of HDD, and this was really focused on streaming and throughput workloads. If SSDs are really good at IOPS, HDD volumes are really good at throughput. There are two types here: sc1, which is for colder, infrequently accessed storage, and st1, which is great for streaming workloads, transcoding workloads, things like that. So now if we step back, we have two families, SSD and HDD, each with two types, so you can really find the performance and the volume type you need for your workload.

What we heard from customers is, you know, "We want a simple way to move between those types, or even scale those volumes if we need more storage; we don't want to have to recreate our instance," right? So we came up with Elastic Volumes, and this lets you do just that. I've heard it described as the feature that gives you your weekends back. When I was an engineer, before I came into product, if we had to get a new storage device or new compute in our closet, we had to take the weekend off, right? You had to have downtime. Elastic Volumes gets rid of all of that. You can increase the size, you can increase or decrease the performance, and you can switch between types seamlessly. One use case for this: if you have, say, end-of-month reporting that needs to be done in a short amount of time, you can switch from, say, the General Purpose type that works for the majority of the time to Provisioned IOPS, and get the higher performance that you need for the end of the month. And so, that's the state of EBS in 2017. And in 2018, we launched Data
Lifecycle Manager, and this simpli
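The kind of DLM policy described in these talks, one snapshot every two hours with the 24 most recent copies retained, can be created from the AWS CLI. This is a sketch, not the presenter's exact setup: the execution role ARN, account ID, and target tag below are placeholders you'd replace with your own.

```shell
# Sketch: DLM policy that snapshots volumes tagged Backup=true every 2 hours
# and keeps the 24 most recent copies. Role ARN and tag are placeholders.
aws dlm create-lifecycle-policy \
  --description "Snapshot every 2 hours, keep 24 copies" \
  --state ENABLED \
  --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
  --policy-details '{
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "true"}],
    "Schedules": [{
      "Name": "Every2Hours",
      "CreateRule": {"Interval": 2, "IntervalUnit": "HOURS"},
      "RetainRule": {"Count": 24}
    }]
  }'
```

The same policy-details document is what the console builds for you when you click through the lifecycle policy screens mentioned in the talk.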

How to resize a volume in AWS EC2

Today I want to show you how to resize a volume on AWS. So this is an EC2 instance, and it doesn't have enough hard drive space. Let's say there's something like a database on it that's growing, or I need to add more applications and I didn't provision the space for it. I'm going to show you how to expand the existing volume on the server. First of all, I'm going to show you how much space is on the server by logging in and checking it. So here's the IP address, and we're going to SSH into that box.

Okay, so df -h, and as you can see, here's the space available on the attached hard drive. How can we find this in the AWS console? It's easy: you take the instance ID, you go to Volumes, and you can look for the attached volume by searching for it in the filter here. And as you can see, an 8-gigabyte volume is attached to "resize volume example", just like we saw on the Instances page. Next we're going to provision more space: we're going to change 8 gigabytes to 16 gigabytes. So go here, Modify Volume, yes. Now, it may take a few minutes for it to change. I've found that if you're just waiting on this screen you might have to refresh it, so let's keep refreshing until... okay, it moved. All right, so now it's 16 gigabytes.

Now we're logged in to this box, and if I do df -h, it doesn't recognize the change. Let's try logging in again: new session, df -h, still about 7.8 gigabytes on the hard drive. The reason for that is that you need to run another command to resize the filesystem. But first we want to check what kind of filesystem this is, because that will tell us which resize command to use, and this should be an ext4 system. Yes, it is: the filesystem on xvda1 is ext4, which means we can use the resize2fs command. So we sudo resize2fs the volume that we want. When I've done it I've gotten a weird error message, but it still works. Then df -h... no, it's still showing 7.8 gigabytes, so I exit and SSH back in. Yeah, it still says that. Amazon says that you don't have to restart these
instances, but in my experience here you do, so we're just going to go back and reboot this instance. Okay, so I'm going to go back to my instance, "resize volume example", and reboot it. Then SSH back in when it comes back up; give it a second. Okay, so we're finally back in now: df -h, and look, we've got our 16 gigabytes. Thanks for watching.
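As a side note, the console's Modify Volume step shown in this walkthrough has a CLI equivalent. This is a sketch with a placeholder volume ID:

```shell
# Grow the volume from the CLI instead of the console.
# vol-0123456789abcdef0 is a placeholder; use your own volume ID.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16

# The resize happens in the background; check the modification state until it
# reports "optimizing" or "completed" before resizing the filesystem.
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0 \
  --query 'VolumesModifications[0].ModificationState'
```

Polling describe-volumes-modifications replaces the repeated refreshing of the console screen that the video mentions.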

How to increase your AWS EC2 Instance Storage (EBS Volumes) with no downtime?

Welcome back. In this tutorial I'm going to show you how to increase your AWS EC2 Linux instance's EBS volumes with zero downtime. Your EBS volume is basically your storage. If your EC2 instance is running low on storage, you can easily increase the storage without having any downtime on your server. It's a very easy process, so let's get started.

The first thing we want to do is go to our Amazon homepage, and from here, at the top, let's search for EC2. From here we want to go to our EC2 instances, so we click on Instances, and here we want to find the instance that we want to resize. In my case it is this one, "demo instance". The first step is to go to Storage here, and this is our volume. We want to resize this volume, so we click on it, and from here we're inside Volumes. This is the volume that we want to resize, so we click Actions and Modify Volume. From here, let's go ahead and resize our volume: right now we have 8 gigabytes, let's make it 20 gigabytes, click Modify, and click Yes. And that's done.

So far we've increased the size of our EC2 volume. Now we need to SSH into our instance and extend the volume so that we can actually use the increased storage; we basically want to extend the actual partition. So let's go to our terminal, and you want to SSH into your EC2 instance.

The first thing you want to do is list your block devices, so you run lsblk. In here, if you take a look at the bottom, this xvda1 is your original partition; right now it's 8 gigabytes, and right here we have the 20 gigabytes that we just added. So we want to grow the partition. To do that, we run sudo growpart /dev/xvda 1, where xvda is the device name and the 1 represents the partition. Let's go ahead and do that. Now let's check what happened to our partition: we run lsblk again, and you can see that our partition is now 20 gigabytes. Our last step is to extend our file
system. We do that by running sudo resize2fs /dev/xvda1. Now let's check our filesystem: we run df -h, and here we should be able to see that our volume got extended. Thank you for watching, don't forget to like and subscribe to my channel, and we'll see you in the next video.
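For reference, the whole flow from this tutorial condenses to a few commands. This is a sketch assuming the device is /dev/xvda with an ext4 filesystem on partition 1, as in the video; on Nitro-based instances the device may appear as /dev/nvme0n1 instead, and an XFS filesystem would need xfs_growfs rather than resize2fs.

```shell
# Run on the instance after increasing the EBS volume size in the console/CLI.
lsblk                       # confirm the disk (xvda) is now larger than the partition (xvda1)
sudo growpart /dev/xvda 1   # grow partition 1 to fill the disk
sudo resize2fs /dev/xvda1   # grow the ext4 filesystem to fill the partition
df -h                       # verify the mounted filesystem shows the new size
```

Because growpart and resize2fs both work on mounted filesystems, there is no downtime in this flow.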
