Shrink database: how long?
It depends. I know that answer blows, but it really does depend. Using Management Studio, logged on to the server. No blocks. Only GB remaining on the server. Do you have lots of large data types or images on this database?
No images. I believe bigint, decimal(18,0), and varchar(max) are the biggest data types in the DB. I have finite space available and a database with all tables and indexes partitioned by month. Each month's data files end up being about GB. After 6 months, the older partitioned tables are detached from the database and moved to an archive database.
If I leave those recently emptied partitions at their size and do not shrink them, then I run out of drive space over the next few months. Your recommendation? What are you gaining from different filegroups?
Admittedly, the why of it may be a little over my head, as some of this process is controlled by the 3rd party's built-in database management piece. My running assumption has been that this is done to facilitate the quick detachment of old data, as well as all the other benefits of having partitioned tables and indexes.
Those tables are then moved to the archive database, but the data file for that partition is reused the next time that month of the year rolls around. It sounds like you just need to switch to one filegroup for the data. After digging around some and talking to people more involved in the setup of this system, it sounds like they went with different filegroups for HA reasons. Sometimes the architecture design can outgrow the initial requirements, and a re-think is required.
Many a time we have planted small acorns which have developed into massive forests of data. Yes, shrinking can be bad, but asking that the whole project be re-architected, because it has morphed into something that was never anticipated, can be even more expensive. Which then brings up a new discussion: what to do when the project design was completely off the mark? As soon as you realize that the design was off the mark, you need to review it, pretty much from scratch, to see what impact this has and what can or should be done about it.
Your review may determine that you do not need to make any changes but, if you do need to make any, it is almost always easier to do this sooner rather than later. Adding some formal flexibility rather than re-using fields informally to provide this flexibility may be all that is needed. Is it that the design did not anticipate the growth of the system and so it is now too slow or cumbersome or needs too much active management?
It is also possible that you will see that the design is wrong, but not be sure how best to fix it because the use of the tool is still evolving. That is fine… at least you know to have another review in a few months or a year when the system is more stable. This may also prompt you to add some monitoring tools to track further developments or uses so that, when you do finally make changes, you have a clearer idea of the impact they will have.
It impugns the character of the person saying it. Doran, I guess I lack imagination. You certainly sound creative, though. Thanks for stopping by! Hello, Brent. The DB was about 2TB. So I dropped those tables, and have a 2TB database with 1. I decided to try shrinking on a test environment. Is there any smarter way? It seems to me that it is simpler to move everything from this DB to a new one, except for those two tables.
Yeah, shrinking a database that large can be pretty tough. I would just do it gradually over the next couple of months, doing as much as you can during weekends or low-usage windows. You mean shrink it in smaller steps, like, for example, 10GB at a time? Sounds reasonable. Thanks, Brent! And double thanks for the lightning-fast response. This has been an interesting read, I must say.
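For anyone wanting to try that incremental approach, a rough sketch follows. The database and logical file names (MyBigDB, MyBigDB_Data) and the sizes are placeholders, not from the discussion above; DBCC SHRINKFILE takes its target in MB.

```sql
-- Check current size first (size is stored in 8 KB pages):
SELECT name, size / 128 AS size_mb
FROM sys.database_files;

-- Then shrink roughly 10 GB at a time: if the file is currently
-- 1,510,000 MB, target 1,500,000 MB, wait for it to finish, and
-- repeat during the next low-usage window.
DBCC SHRINKFILE (N'MyBigDB_Data', 1500000);
```

Spacing the steps out also gives regular index maintenance a chance to clean up the fragmentation each step introduces.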
I have a database that has 16 data files, not including t-log files. It appears that all of them were created at 1 GB, but only three were ever set to grow. All of the files are in the same filegroup, on the same drive, in the same folder. The question is: would there be a similar performance impact as discussed here if I were to empty and then delete these files?
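If the goal is to consolidate down to fewer files, the usual sequence, sketched here with placeholder names, is to empty each surplus file into the remaining ones and then drop it:

```sql
-- EMPTYFILE migrates every page out of the named file into the other
-- files in the same filegroup, and stops new allocations to it.
USE MyDB;
DBCC SHRINKFILE (N'MyDB_File07', EMPTYFILE);

-- Once the file is empty, it can be removed from the database.
ALTER DATABASE MyDB REMOVE FILE MyDB_File07;
```

Because EMPTYFILE moves pages, it can cause the same fragmentation discussed in the post, so an index rebuild afterwards is worth considering.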
If so, would reindexing resolve the impact? BizTalk has 3 databases and is quite the black box, one of which grew really huge (43 GB) for the kind of activity we have. Something I had to research. I feel very ashamed, because I want to reclaim that space with a shrinkfile, but the idea of having to do this makes me shudder.
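On the reindexing question: a rebuild does repair the fragmentation a shrink leaves behind, though it needs free space to work in, which partly regrows the file. A hedged sketch, with illustrative names and thresholds:

```sql
-- Find the most fragmented indexes after a one-time shrink.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then rebuild the worst offenders, e.g.:
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
```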
By all means, link to the articles so that readers can get a more in-depth understanding, but at least have a basic gist or outline. Kay, I am sorry for not repeating what the other articles said. I did you a grave disservice by not giving you a brief, concise, repeated version of what they said. I like spoons. Spoons are lovely, great shape to them, and way easier to use than chopsticks. Ah, yes, the traditional fall-back tactic of the blog bully. Not reading any more of this crap.
Welcome to the blog readership! Good to have you around. Where only 1 user is active instead of many? Do you know how much cloud-based computing power and disk space cost? So are you saying you shrink your database daily to keep the read-only test systems small? You had enough space to restore it. That means your shrink operations are slowing down end-user activity. Exactly how many sports tickets, steak dinners, and holiday presents is your SAN admin buying you?
Dude, seriously, put down the database. Time to start doing some reading. You end up with a lot of space lying around unused anyway. The ability to maintain useful disk storage capacity in a system costs far, far more than the raw bytes of storage. Solid state storage is literally included for free with most AWS instance types, and hard drives are included with most of the rest. You would have to rent that for five straight years to match the price of a single core of SQL Server Enterprise Edition.
Aaaaannnd my point flew right over your head. Ignoring ancillary costs, opportunity costs, and situational costs, and caring only about price-per-byte costs. Which is exactly what I was NOT talking about. By focusing on the price of storage, I missed the opportunity to spend my time shrinking data files.
I forgot how many people passionately enjoy that pastime. Many users are working on that, and we pro-actively grow the data file and leave a bit of air for the users, e.g. You never make any distinctions between types of environments, which is THE distinction to make here.
I agree with Bas for test environments. Is your opinion the same for a non-production environment with limited space that receives many databases from the production server, where unnecessary free space could be used for other database restores? As he noted earlier and will probably confirm, the post is largely regarding production. Non-prod environments often require truncating some data and recovering the space.
My team is genuinely curious as to your thoughts on our specific situation. This DB grew uncontrollably a few years ago due to a configuration issue read: oversight related to the application that relies on it. We expect the DB to grow, but it will likely never reach this size again. Shrinking this DB will hopefully be a one-time occurrence. Then migrate and do the same thing regarding performance metrics and user operability.
I tested, and retested, and retested before letting it loose. I am definitely not a DBA but am temporarily serving in that capacity. I am currently dealing with a database that is taking up all of the 2TB that have been allocated to it. The file autogrowth has been set to unlimited. Any ideas how to proceed? Seems like some shrinkage is going to be needed.
And perhaps limiting the autogrowth? So, not one of you ruder jerks ever deletes data, ever, or has disk space issues at an organization that has a budget?
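For the 2TB situation a few comments up, one hedged first step (names here are placeholders) is to see how much of the space is actually free inside the files, and to swap unlimited autogrowth for a fixed increment:

```sql
-- How much of each file is really used vs. empty?
SELECT name,
       size / 128 AS size_mb,
       size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
FROM sys.database_files;

-- Replace unlimited/percentage growth with a predictable fixed step.
ALTER DATABASE MyDB
MODIFY FILE (NAME = N'MyDB_Data', FILEGROWTH = 1024MB);
```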
Good to know. Chillax, bro. If you need to shrink your DB, shrink it. But if you are doing it to the point of creating a daily process or automating the task, you have a bigger problem.
You realise you have a bigger problem? You now know something you may not have paid attention to before. We run scanner server software which logs jobs all throughout our office. This audit table has very little useful information in it for historical purposes or reasons of keeping, and yet, because it logs some XML stuff AND has 4 years of data, that single table has grown to 56 GB while the entire database is only 62 GB. In our case, no maintenance has been written for this database.
In the end, I should never inflate this database too much, and also keep it nice and clean. Mind you, this server is new to my responsibility. Sometimes being right counts for nothing in the professional world. We have a COTS product that was configured by a predecessor and has dumped loads of historical data into a single DB that has grown to over 1. I am working on the config issue to drastically reduce the data dumps, but would like to get all this space back. Are there better ways?
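If most of that space is old audit rows, one common pattern is to purge them in small batches before any one-time shrink, so the delete neither bloats the log nor blocks the application for long. Table and column names here are assumptions, not from the comment above:

```sql
-- Purge audit rows older than one year, 5,000 rows at a time.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.ScanJobAudit                    -- assumed table name
    WHERE LoggedAt < DATEADD(YEAR, -1, GETDATE());

    SET @rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:01';  -- let log backups keep up between batches
END;
```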
I just find it kind of weird that you have this over-the-top rant about how horrible it is to shrink database files due to the fragmentation it causes. Todd — great question! Because people overly care about both: they care about shrinking, AND they care about fragmentation, which makes it really bad.
Guess what SQL Server needs to do? I originally posted back on December. I should also mention that I am a different Tim S. Finally, we were advised to split the data file into 8 files. Adding those files was easy, but the only way to get SQL Server to USE those extra files was to shrink the original file to an eighth of its original size… and that took forever, about two months!
I did run it in single-user mode overnight, but that did not seem to improve the performance. I suspect that it would have been much quicker to create a second database and copy the data from the first database to the second one using an automated script, preserving things like database triggers, constraints, indices, identity field seeds and increments, etc., and then, finally, to back up the new database and restore it over the original one.
This may have needed some downtime, but taking the system down for a weekend and getting improved performance afterwards would have been much better than having to wait 2 months for that improvement.
This task was a one-off, and so, while it would have been nice to have a quicker solution, I might never need it on production again. The backup time, even for compressed backups, seems to relate to the total database size, even if the database is mostly empty, so it would be nice to be able to shrink the test databases, but there is no point if it takes 2 months to run. Again, an automated script to copy a database, SQL object by SQL object, would probably be the best way to do this.
And, if not, could you produce one? Brent Ozar, I have seen a lot of comments from you, but I have not seen any reason except fragmentation, and that can be easily handled in offline mode. Peter, so how did it go? Can you describe what happens when you try to shrink an offline database?
Yeah, you are saying things to experts here that are simply incorrect. After reading this post and all of the comments, I am left to conclude precisely what I have always suspected might be true: a large percentage of DBAs have no idea what they are doing. My case is the following one: I have a . I have tried to import it into SQL Server with the following command:
The generated files A. It can be running for hours. Which would be the best way of restoring this? All you can do is wait. How long have you waited? We have a table that only keeps 2 years' worth of data, so it can delete thousands of rows every night that are more than 2 years old.
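When a restore (or a shrink) has been running for hours, SQL Server does expose a progress estimate for some commands; a hedged way to check:

```sql
-- percent_complete is populated for backups, restores, and shrinks
-- (a shrink shows up as the 'DbccFilesCompact' command).
SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.estimated_completion_time / 60000 AS est_minutes_left,
       r.total_elapsed_time / 60000        AS elapsed_minutes
FROM sys.dm_exec_requests AS r
WHERE r.percent_complete > 0;
```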
What about shrinking the log file? What are the opinions here? I have a gig DB with a gig log. The DB has a nightly full backup, with incrementals and transaction logs throughout the 24 hours. Yeah, a GB log file sounds problematic. It is a large vendor product. Backup times are quick. I have run a health check, and so has my product support vendor.
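For a one-off log shrink, the usual sequence is a log backup first (so the active portion of the log clears), then a shrink to a size that still leaves working room. The names and the 20 GB target below are placeholders:

```sql
BACKUP LOG MyDB TO DISK = N'X:\Backups\MyDB_log.trn';

USE MyDB;
-- Shrink to ~20 GB rather than to zero, so the log does not have to
-- immediately autogrow (and zero out new space) during the next big load.
DBCC SHRINKFILE (N'MyDB_log', 20480);
```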
Reorgs and rebuilds run on the weekends. Other ideas? I really put a lot of work into that so that it could give you personalized advice for your own system, way beyond what I can do in a blog post comment.
Give that a shot. I hate everything Microsoft in the server space, and this fast-tracked me in the right direction as to why our MSSQL server was running out of space every four months.
I just dropped some unused tables and purged some old data. This database is copied to six other environments, and that takes time, network resources, and drive space. Am I not supposed to shrink it? And even if we did, we would still benefit from the savings in maintenance window time, network traffic, and drive space now. Most of us have to deal with rapidly growing databases, especially in this hyped-up age of Big Data.
No, I do not regularly shrink databases. My concern is that when a recognized authority such as yourself advises never to shrink databases, that presents potential problems when I suggest that we do so. So a small caveat explaining the rare times when it is appropriate to shrink a database would be a welcome addition to the article.
Yes: if you have to shrink your databases frequently, then you have an underlying issue that should be addressed, and you are just putting a band-aid on it. But I believe there are acceptable applications of the shrink feature.
One-time shrinks seem to be OK when they are being used to resolve a rare failure. Hi Brent, I just have a question. We recently purged 1.5 million records from a table. How can we do it? Thanks in advance! Now how can I reclaim disk space? Any option apart from data file shrinking? That way, when you archive an old partition, you can simply drop the old filegroup. Presto: instant space reclamation with zero fragmentation.
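That filegroup-per-period idea can be sketched like this; all names are illustrative, and the switch target table must already exist on the same filegroup as the partition being switched out:

```sql
-- Switch the oldest partition out to a staging table for archiving.
ALTER TABLE dbo.Sales
SWITCH PARTITION 1 TO dbo.Sales_Archive_Staging;

-- After the archived copy is confirmed elsewhere, drop the now-empty
-- storage outright: no shrink, no fragmentation.
ALTER DATABASE MyDB REMOVE FILE Sales_2019_01_File;
ALTER DATABASE MyDB REMOVE FILEGROUP Sales_2019_01_FG;
```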
Hi Brent, in my case, my backups are taking way too long. Start with conventional backup performance tuning, like using compression, writing to a fast target, striping across 4 files, etc.
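Those tuning steps together look roughly like this (the paths are placeholders; ideally the stripes land on different physical targets):

```sql
BACKUP DATABASE MyDB
TO DISK = N'X:\Backups\MyDB_1.bak',
   DISK = N'X:\Backups\MyDB_2.bak',
   DISK = N'Y:\Backups\MyDB_3.bak',
   DISK = N'Y:\Backups\MyDB_4.bak'
WITH COMPRESSION, CHECKSUM, STATS = 5;
```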
Please keep us up to date like this. These days, we are running VMware ESXi on flash storage, where autogrowing a file takes milliseconds and shrinking a database has FAR fewer repercussions than it used to have.
I run a chassis datacenter. As an Infrastructure Director, we will lose money if these sorts of scare tactics actually affect customers these days. What DOES affect customers, though, is the ever-increasing data footprints they have because they have been scared into NOT shrinking their data files. Unbeknownst to them, some employee who does not have SQL training is running ODBC against this DB and constantly screws up queries that lock up the server, so the log file grows and grows and never gets reclaimed.
Furthermore, this may be something done ONCE, 5 years ago…yet the infra guys have to provision for it? THEN and only then will you get insight into what processes influence it. This simply does not apply today in anything other than the largest datacenters and most inexpensive home-labs.
SAS drives. If autogrowth happens, you have a lot fewer places to look as to why they got so big in the first place. Adjust accordingly. Brent, I have a database that we need to make static and share out to developers. I have a situation with database size.
Then that data goes into production tables. Our staging tables store much more data than we actually import, but we need to store the data as it comes from the client initially. After the data is saved to prod tables, we can remove the staging data.
Unfortunately, we need to keep the original data around for 3 months while we finish the implementation. Once we go live, we can purge the old staging data. During the staging process, the DB can grow to more than 2-3 times its normal running size.
Changing the application at this point is a major undertaking. Once we purge the staging data, we have a database that is huge (2TB) while its normal size would be about GB.
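In a purge-then-shrink cycle like that, TRUNCATEONLY is worth trying first, since it releases only the unused tail of the file and moves no pages (so no fragmentation). Names and targets here are placeholders:

```sql
USE MyDB;
-- Release unused space at the end of the file without moving pages.
DBCC SHRINKFILE (N'MyDB_Data', TRUNCATEONLY);

-- If that is not enough, shrink toward normal size plus growth headroom
-- rather than to the minimum, so the next staging load does not
-- immediately trigger autogrowth.
DBCC SHRINKFILE (N'MyDB_Data', 700000);   -- ~700 GB target, illustrative
```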
We have over dbs. Would this be a situation where shrinking would be an acceptable exception? With autogrow set, our log files grow quite large, especially at the end of the month, when a lot of transactions are done in a short period of time.
I do have a log backup scheduled every 15 minutes, but sometimes it is not enough to catch the overload of data movement. So the log files grow a lot, and even after the log backups and full backups they stay at the same size unless shrunk.
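When a log stays large even after backups, it helps to check what is pinning it before reaching for a shrink:

```sql
-- Anything other than NOTHING here (LOG_BACKUP, ACTIVE_TRANSACTION,
-- REPLICATION, AVAILABILITY_REPLICA, ...) means the log cannot be
-- truncated yet, and shrinking it will not stick.
SELECT name, log_reuse_wait_desc
FROM sys.databases
ORDER BY name;
```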
With over databases on the server, and not all log files growing at the same time, sometimes disk space is low. What's your opinion on this? Now at the end of the month I have another log file that will grow to GB because of major transactions; this brings my drive to GB, close to full, not counting the other log files. And, like the first log file, after the major transactions are done, the log file will go back to 50GB of daily flow.
I am making this quite simple, but in reality I am dealing with 1TB drives for my log files when I could work with GB drives, just because the logs grow, but not at the same time. And this is for only one server. Take care. Performance does matter; I shrink the log file during off-hours.
But I do understand your point that when the log needs to grow back again, it will impact the performance.

Stop Shrinking Your Database Files. Last Updated October 9, Brent Ozar. Production Database Administration. Epic Advice Fail: SQL Server Magazine just tweeted about their latest article, a reader-submitted solution on how to shrink your database files with ease.
Aaron Bertrand: Please stop telling people they should shrink their log files! Brent Ozar. Allen McGuire: How to reduce the transaction log file size without shrinking in SQL Server?
The largest table size is 8GB. Joe Gambill. Steve Mangiameli: You have bigger, more important things to do than reclaim space. Simon Holzman: Any thoughts other than tearing up my DBA card? Coach James: This article makes me angrier than the article author is.
Parvinder Nijjar. Michael Spurlock: When is Microsoft going to address this dog-ass design? Allen M McGuire. Tim Seedorf: Situations dictate. Paul Randal. Dave Schutz: When I checked the status of the task at 1. There is still a ton of space available on the drive.
Asked 3 years, 7 months ago. Viewed 5k times. Running SQL R. It's been running for 2.
Please add your SQL Server version.