<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://tech.muninn.land/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Luca</id>
	<title>tech - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://tech.muninn.land/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Luca"/>
	<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php/Special:Contributions/Luca"/>
	<updated>2026-04-15T18:00:21Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.38.4</generator>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Recompile_ffmpeg,_is_it_worth%3F&amp;diff=41</id>
		<title>Recompile ffmpeg, is it worth?</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Recompile_ffmpeg,_is_it_worth%3F&amp;diff=41"/>
		<updated>2026-02-22T15:14:46Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Recompile FFmpeg: is it worth it? =&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
This article documents a controlled benchmark study aimed at evaluating whether recompiling &#039;&#039;&#039;FFmpeg&#039;&#039;&#039; with different compilers and optimization flags provides measurable performance benefits on a &#039;&#039;&#039;first-generation AMD Ryzen CPU&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The focus is on CPU-bound, real-world video transcoding workloads, not synthetic microbenchmarks.&lt;br /&gt;
All tests were performed on the same machine, using identical inputs and methodology, varying only the compiler and compilation flags.&lt;br /&gt;
&lt;br /&gt;
== Test System Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
* &#039;&#039;&#039;CPU:&#039;&#039;&#039; AMD Ryzen 7 1700 (Zen 1, 8 cores / 16 threads)&lt;br /&gt;
* &#039;&#039;&#039;RAM:&#039;&#039;&#039; 16 GB&lt;br /&gt;
* &#039;&#039;&#039;Storage:&#039;&#039;&#039; SSD&lt;br /&gt;
* &#039;&#039;&#039;Architecture:&#039;&#039;&#039; x86_64&lt;br /&gt;
&lt;br /&gt;
=== Software Environment ===&lt;br /&gt;
* &#039;&#039;&#039;Operating System:&#039;&#039;&#039; Linux Mint 22.3 (Cinnamon, 64-bit)&lt;br /&gt;
* &#039;&#039;&#039;Debian base:&#039;&#039;&#039; trixie/sid&lt;br /&gt;
* &#039;&#039;&#039;Kernel:&#039;&#039;&#039; 6.8.0-90-generic&lt;br /&gt;
* &#039;&#039;&#039;CPU governor:&#039;&#039;&#039; performance (fixed during benchmarks)&lt;br /&gt;
&lt;br /&gt;
=== Toolchain ===&lt;br /&gt;
* &#039;&#039;&#039;FFmpeg (system):&#039;&#039;&#039; 6.1.1-3ubuntu5&lt;br /&gt;
* &#039;&#039;&#039;GCC:&#039;&#039;&#039; 13.3.0&lt;br /&gt;
* &#039;&#039;&#039;Clang/LLVM:&#039;&#039;&#039; 18.1.3&lt;br /&gt;
* &#039;&#039;&#039;Assembler:&#039;&#039;&#039;&lt;br /&gt;
** NASM 2.16.01&lt;br /&gt;
** YASM 1.3.0&lt;br /&gt;
&lt;br /&gt;
=== External Libraries ===&lt;br /&gt;
* &#039;&#039;&#039;x264:&#039;&#039;&#039; built locally from source and linked dynamically  &lt;br /&gt;
  (system x264 package not used)&lt;br /&gt;
&lt;br /&gt;
== Test Material ==&lt;br /&gt;
&lt;br /&gt;
All input files are unmodified Blender Foundation open movies.&lt;br /&gt;
Files were not trimmed, re-encoded, or altered in any way.&lt;br /&gt;
SHA-256 checksums were recorded to ensure bitwise-identical inputs across all tests.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! File&lt;br /&gt;
! Resolution&lt;br /&gt;
! FPS&lt;br /&gt;
! Codec&lt;br /&gt;
|-&lt;br /&gt;
| Sintel.2010.1080p.mkv&lt;br /&gt;
| 1920×818&lt;br /&gt;
| 24&lt;br /&gt;
| H.264 High&lt;br /&gt;
|-&lt;br /&gt;
| Big Buck Bunny 60fps 4K - Official Blender Foundation Short Film.mp4&lt;br /&gt;
| 1920×1080&lt;br /&gt;
| 60&lt;br /&gt;
| H.264 High&lt;br /&gt;
|-&lt;br /&gt;
| Tears of Steel - Blender VFX Open Movie.mp4&lt;br /&gt;
| 1728×720&lt;br /&gt;
| 24&lt;br /&gt;
| H.264 High&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Benchmark Methodology ==&lt;br /&gt;
&lt;br /&gt;
=== Transcoding Scenario ===&lt;br /&gt;
All benchmarks used a realistic end-to-end transcoding workflow:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Video encoder:&#039;&#039;&#039; libx264&lt;br /&gt;
* &#039;&#039;&#039;Preset:&#039;&#039;&#039; slow&lt;br /&gt;
* &#039;&#039;&#039;Rate control:&#039;&#039;&#039; CRF 18&lt;br /&gt;
* &#039;&#039;&#039;Audio:&#039;&#039;&#039; copied bit-exact (-c:a copy)&lt;br /&gt;
* &#039;&#039;&#039;Threads:&#039;&#039;&#039; 16 (matching logical CPU threads)&lt;br /&gt;
&lt;br /&gt;
Example command (simplified):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ffmpeg -i input \&lt;br /&gt;
  -c:v libx264 -preset slow -crf 18 \&lt;br /&gt;
  -c:a copy -threads 16 output.mkv&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Measurement ===&lt;br /&gt;
* Tool: &amp;lt;code&amp;gt;/usr/bin/time -v&amp;lt;/code&amp;gt;&lt;br /&gt;
* Primary metric: &#039;&#039;&#039;wall clock time&#039;&#039;&#039;&lt;br /&gt;
* Secondary metrics:&lt;br /&gt;
** user CPU time&lt;br /&gt;
** peak RSS memory&lt;br /&gt;
* &#039;&#039;&#039;Three runs per test&#039;&#039;&#039;, arithmetic mean reported&lt;br /&gt;
* Locale forced to &amp;lt;code&amp;gt;LC_ALL=C&amp;lt;/code&amp;gt; to ensure numeric consistency&lt;br /&gt;
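The mean-of-three-runs reporting step can be sketched as a small shell helper. This is a hypothetical reconstruction (the article does not publish its harness script); the sample values below are illustrative, while the locale handling and `/usr/bin/time -v` usage follow the methodology above.

```shell
# Hypothetical measurement helper -- the study's actual harness is not
# published. Wall-clock samples would come from GNU time, e.g.:
#   LC_ALL=C /usr/bin/time -v ffmpeg ... 2parg run1.log
export LC_ALL=C   # numeric consistency, as in the methodology

mean_of_runs() {
  # Arithmetic mean of three wall-clock samples, as reported in the tables.
  awk '{ printf "%.2f\n", ($1 + $2 + $3) / 3 }'
}

# Three hypothetical samples for one configuration:
echo "554.10 553.95 553.95" | mean_of_runs   # prints 554.00
```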
&lt;br /&gt;
== Tested Configurations ==&lt;br /&gt;
&lt;br /&gt;
=== Baseline ===&lt;br /&gt;
* System FFmpeg package (distribution build)&lt;br /&gt;
&lt;br /&gt;
=== GCC Builds ===&lt;br /&gt;
* GCC &amp;lt;code&amp;gt;-O2&amp;lt;/code&amp;gt;&lt;br /&gt;
* GCC &amp;lt;code&amp;gt;-O2 -march=znver1 -mtune=znver1&amp;lt;/code&amp;gt;&lt;br /&gt;
* GCC &amp;lt;code&amp;gt;-O3&amp;lt;/code&amp;gt;&lt;br /&gt;
* GCC &amp;lt;code&amp;gt;-O3 -march=znver1 -mtune=znver1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang Builds ===&lt;br /&gt;
* Clang &amp;lt;code&amp;gt;-O2&amp;lt;/code&amp;gt;&lt;br /&gt;
* Clang &amp;lt;code&amp;gt;-O2 -march=znver1 -mtune=znver1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All FFmpeg builds:&lt;br /&gt;
* were linked against the same locally built x264&lt;br /&gt;
* used identical configure options except for compiler flags&lt;br /&gt;
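As a sketch, each build could be produced along these lines. The article lists only the flag sets, not the exact configure invocation, so the configure/make steps below are an assumption based on a standard x264-enabled FFmpeg build.

```shell
# Assumed build recipe (the exact configure options used in the study are
# not recorded here). One compiler/flag pair per build:
CC=gcc
EXTRA_CFLAGS="-O2 -march=znver1 -mtune=znver1"

# ./configure --cc="$CC" --extra-cflags="$EXTRA_CFLAGS" \
#             --enable-gpl --enable-libx264
# make -j"$(nproc)"

echo "configured with $CC $EXTRA_CFLAGS"
```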
&lt;br /&gt;
== Results Summary ==&lt;br /&gt;
&lt;br /&gt;
=== Average Encoding Time (wall clock, seconds) ===&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Video&lt;br /&gt;
! System&lt;br /&gt;
! GCC O2&lt;br /&gt;
! GCC O2 znver1&lt;br /&gt;
! Clang O2&lt;br /&gt;
|-&lt;br /&gt;
| Sintel&lt;br /&gt;
| 554.00&lt;br /&gt;
| 550.68&lt;br /&gt;
| 548.06&lt;br /&gt;
| &#039;&#039;&#039;547.48&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Big Buck Bunny&lt;br /&gt;
| 598.33&lt;br /&gt;
| 595.53&lt;br /&gt;
| 592.65&lt;br /&gt;
| &#039;&#039;&#039;589.88&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
| Tears of Steel&lt;br /&gt;
| 308.67&lt;br /&gt;
| 306.59&lt;br /&gt;
| 306.04&lt;br /&gt;
| &#039;&#039;&#039;305.26&#039;&#039;&#039;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
(Standard deviation across runs was consistently below 1 second.)&lt;br /&gt;
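The relative gains discussed in the analysis can be derived directly from this table; for example, Sintel with the system build (554.00 s) versus Clang -O2 (547.48 s):

```shell
# Percent improvement of one build over the baseline, using the Sintel
# figures from the table above.
improvement() {
  awk '{ printf "%.2f\n", ($1 - $2) / $1 * 100 }'
}

echo "554.00 547.48" | improvement   # prints 1.18 (i.e. about 1.2%)
```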
&lt;br /&gt;
== Analysis ==&lt;br /&gt;
&lt;br /&gt;
=== GCC ===&lt;br /&gt;
* Recompiling FFmpeg with &#039;&#039;&#039;GCC -O2&#039;&#039;&#039; provides a small but consistent improvement (~0.5–0.7%).&lt;br /&gt;
* Adding &#039;&#039;&#039;Zen1-specific tuning&#039;&#039;&#039; (&amp;lt;code&amp;gt;-march=znver1&amp;lt;/code&amp;gt;) yields an additional ~0.3–0.5%.&lt;br /&gt;
* &amp;lt;code&amp;gt;-O3&amp;lt;/code&amp;gt; does &#039;&#039;&#039;not&#039;&#039;&#039; consistently improve performance and may slightly degrade results depending on workload.&lt;br /&gt;
&lt;br /&gt;
Best GCC configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
-O2 -march=znver1 -mtune=znver1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang ===&lt;br /&gt;
* &#039;&#039;&#039;Clang -O2&#039;&#039;&#039; outperforms all GCC configurations tested.&lt;br /&gt;
* CPU-specific tuning (&amp;lt;code&amp;gt;znver1&amp;lt;/code&amp;gt;) does not improve results with Clang and can be marginally negative.&lt;br /&gt;
* Memory usage and stability remain comparable to GCC builds.&lt;br /&gt;
&lt;br /&gt;
Best overall configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Clang -O2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Final Conclusion ==&lt;br /&gt;
&lt;br /&gt;
On a first-generation AMD Zen processor (Ryzen 7 1700):&lt;br /&gt;
&lt;br /&gt;
* Recompiling FFmpeg provides measurable but modest gains.&lt;br /&gt;
* The best results in this study were achieved using &#039;&#039;&#039;Clang -O2&#039;&#039;&#039;, with improvements of approximately &#039;&#039;&#039;1–1.4%&#039;&#039;&#039; over the distribution build.&lt;br /&gt;
* GCC benefits slightly from CPU-specific tuning; Clang does not.&lt;br /&gt;
* No configuration produced dramatic gains, as most performance-critical paths in x264 are already hand-optimized in assembly.&lt;br /&gt;
&lt;br /&gt;
=== Practical Recommendation ===&lt;br /&gt;
Recompiling FFmpeg is worthwhile only if:&lt;br /&gt;
* encoding is CPU-bound&lt;br /&gt;
* workloads are frequent or long-running&lt;br /&gt;
* maintaining a custom build is acceptable&lt;br /&gt;
&lt;br /&gt;
For general desktop usage or GPU-based encoding, the system FFmpeg package remains the most practical choice.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Recompile_ffmpeg,_is_it_worth%3F&amp;diff=40</id>
		<title>Recompile ffmpeg, is it worth?</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Recompile_ffmpeg,_is_it_worth%3F&amp;diff=40"/>
		<updated>2026-02-22T15:10:19Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
&lt;br /&gt;
This article documents a controlled benchmark study aimed at evaluating whether recompiling FFmpeg with different compilers and optimization flags provides measurable performance benefits on a first-generation AMD Ryzen CPU.&lt;br /&gt;
&lt;br /&gt;
The focus is on CPU-bound, real-world video transcoding workloads, not synthetic microbenchmarks.&lt;br /&gt;
All tests were performed on the same machine, using identical inputs and methodology, varying only the compiler and compilation flags.&lt;br /&gt;
&lt;br /&gt;
== Test System Configuration ==&lt;br /&gt;
=== Hardware ===&lt;br /&gt;
* CPU: AMD Ryzen 7 1700 (Zen 1, 8 cores / 16 threads)&lt;br /&gt;
* RAM: 16 GB&lt;br /&gt;
* Storage: SSD&lt;br /&gt;
* Architecture: x86_64&lt;br /&gt;
&lt;br /&gt;
=== Software Environment ===&lt;br /&gt;
* Operating System: Linux Mint 22.3 (Cinnamon, 64-bit)&lt;br /&gt;
* Debian base: trixie/sid&lt;br /&gt;
* Kernel: 6.8.0-90-generic&lt;br /&gt;
* CPU governor: performance (fixed during benchmarks)&lt;br /&gt;
&lt;br /&gt;
=== Toolchain ===&lt;br /&gt;
* FFmpeg (system): 6.1.1-3ubuntu5&lt;br /&gt;
* GCC: 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04)&lt;br /&gt;
* Clang/LLVM: 18.1.3 (Ubuntu build)&lt;br /&gt;
* Assembler:&lt;br /&gt;
** NASM 2.16.01&lt;br /&gt;
** YASM 1.3.0&lt;br /&gt;
* External Libraries&lt;br /&gt;
** x264: built locally from source and linked dynamically (system x264 package not used)&lt;br /&gt;
&lt;br /&gt;
=== Test Material ===&lt;br /&gt;
&lt;br /&gt;
All input files are unmodified Blender Foundation open movies.&lt;br /&gt;
Files were not trimmed, re-encoded, or altered in any way. SHA-256 checksums were recorded to ensure bitwise-identical inputs across all tests.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Recompile_ffmpeg,_is_it_worth%3F&amp;diff=39</id>
		<title>Recompile ffmpeg, is it worth?</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Recompile_ffmpeg,_is_it_worth%3F&amp;diff=39"/>
		<updated>2026-02-22T15:03:04Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;todo&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;todo&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Main_Page&amp;diff=38</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Main_Page&amp;diff=38"/>
		<updated>2024-06-18T08:04:41Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Added few articles:&lt;br /&gt;
* [[Create foreign data wrapper (Postgresql)]]&lt;br /&gt;
* [[Automating Wiki Page Creation from a PostgreSQL Table]]&lt;br /&gt;
* [[Control-M job stuck]]&lt;br /&gt;
* [[Backup and restore huge partitioned database - SQL Server]]&lt;br /&gt;
* [[Install a certificate for a website under Apache on Linux]]&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Create_foreign_data_wrapper_(Postgresql)&amp;diff=37</id>
		<title>Create foreign data wrapper (Postgresql)</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Create_foreign_data_wrapper_(Postgresql)&amp;diff=37"/>
		<updated>2024-06-18T08:04:16Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;Category:postgresql In this guide I&amp;#039;m going to show how to read a remote Postgresql database, from Postgresql.  We will work on database &amp;#039;&amp;#039;&amp;#039;A&amp;#039;&amp;#039;&amp;#039; that will access data of database &amp;#039;&amp;#039;&amp;#039;B&amp;#039;&amp;#039;&amp;#039;.  On database &amp;#039;&amp;#039;&amp;#039;A&amp;#039;&amp;#039;&amp;#039; we need to connect with a user with admin privileges, and install the extension: &amp;lt;syntaxhighlight lang=postgresql&amp;gt;CREATE EXTENSION postgres_fdw;&amp;lt;/syntaxhighlight&amp;gt;  Then we create a &amp;#039;&amp;#039;server&amp;#039;&amp;#039; wich is the link to read remote data. &amp;lt;br&amp;gt;Suppose our database &amp;#039;&amp;#039;&amp;#039;B&amp;#039;&amp;#039;&amp;#039;...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:postgresql]]&lt;br /&gt;
In this guide I&#039;m going to show how to read a remote PostgreSQL database from PostgreSQL.&lt;br /&gt;
&lt;br /&gt;
We will work on database &#039;&#039;&#039;A&#039;&#039;&#039; that will access data of database &#039;&#039;&#039;B&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
On database &#039;&#039;&#039;A&#039;&#039;&#039; we need to connect with a user with admin privileges, and install the extension:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;CREATE EXTENSION postgres_fdw;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then we create a &#039;&#039;server&#039;&#039;, which is the link used to read the remote data.&lt;br /&gt;
&amp;lt;br&amp;gt;Suppose our database &#039;&#039;&#039;B&#039;&#039;&#039; is on a server at address &#039;&#039;&#039;192.168.1.22&#039;&#039;&#039; on port &#039;&#039;&#039;5432&#039;&#039;&#039; and the database is called &#039;&#039;&#039;muninn&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;CREATE SERVER muninn_server&lt;br /&gt;
FOREIGN DATA WRAPPER postgres_fdw&lt;br /&gt;
OPTIONS (host &#039;192.168.1.22&#039;, dbname &#039;muninn&#039;, port &#039;5432&#039;);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have also defined the name &#039;&#039;&#039;muninn_server&#039;&#039;&#039;, which is the link to our remote server &#039;&#039;&#039;B&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Now for security: we need to allow a user on database &#039;&#039;&#039;A&#039;&#039;&#039; to read data from database &#039;&#039;&#039;B&#039;&#039;&#039;, and to do that we must supply credentials valid on database &#039;&#039;&#039;B&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br&amp;gt;Suppose we want user &#039;&#039;&#039;odin&#039;&#039;&#039; on database &#039;&#039;&#039;A&#039;&#039;&#039; to read data from database &#039;&#039;&#039;B&#039;&#039;&#039;, and that the user on database &#039;&#039;&#039;B&#039;&#039;&#039; is &#039;&#039;&#039;saturn&#039;&#039;&#039; with password &#039;&#039;&#039;aPPlep13&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;CREATE USER MAPPING FOR odin&lt;br /&gt;
SERVER muninn_server&lt;br /&gt;
OPTIONS (user &#039;saturn&#039;, password &#039;aPPlep13&#039;);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to grant user &#039;&#039;&#039;odin&#039;&#039;&#039; usage of the foreign server &#039;&#039;&#039;muninn_server&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;GRANT USAGE ON FOREIGN SERVER muninn_server TO odin;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can connect as user &#039;&#039;&#039;odin&#039;&#039;&#039; to database &#039;&#039;&#039;A&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
I strongly suggest creating a dedicated schema in which to make the remote objects visible:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;CREATE SCHEMA rem_muninn;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can expose the remote objects of a specific schema inside our newly created schema:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;IMPORT FOREIGN SCHEMA remote_schema&lt;br /&gt;
FROM SERVER muninn_server INTO rem_muninn;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, through schema rem_muninn, we can query every object in the remote schema.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Category:Postgresql&amp;diff=36</id>
		<title>Category:Postgresql</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Category:Postgresql&amp;diff=36"/>
		<updated>2024-06-18T07:43:28Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;Pages related to Postgresql.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Pages related to Postgresql.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=35</id>
		<title>Automating Wiki Page Creation from a PostgreSQL Table</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=35"/>
		<updated>2024-06-18T07:43:13Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:mediawiki]]&lt;br /&gt;
[[Category:postgresql]]&lt;br /&gt;
Managing large datasets can be daunting. A neat solution is to use MediaWiki to transform data from a PostgreSQL table into organized wiki pages. This guide walks you through an automated process to achieve this, saving time and ensuring accurate, user-friendly data representation. Ideal for database administrators and developers, this tutorial bridges the gap between raw data and accessible wiki pages, making information management a breeze.&lt;br /&gt;
&lt;br /&gt;
== Initial Data ==&lt;br /&gt;
* In my case, the table is called: public.cities&lt;br /&gt;
* The only field of interest to me is friendly_name&lt;br /&gt;
* The mediawiki template in my case is called city_t and has the city name as a parameter. The parameter is called city_name.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* The name of the page should not just be the city name, but should be composed similarly to: cityname_(city).&lt;br /&gt;
* The parameter passed to the template should instead be just the city name.&lt;br /&gt;
== Execution ==&lt;br /&gt;
Here is the query executed on PostgreSQL:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;&lt;br /&gt;
SELECT xmlelement(name mediawiki,&lt;br /&gt;
  xmlagg(&lt;br /&gt;
    xmlelement(name page,&lt;br /&gt;
      xmlelement(name title, friendly_name || &#039;_(city)&#039;),&lt;br /&gt;
      xmlelement(name revision,&lt;br /&gt;
        xmlelement(name text, &#039;{{city_t|city_name=&#039; || friendly_name || &#039;}}&#039;)&lt;br /&gt;
      )&lt;br /&gt;
    )&lt;br /&gt;
  )&lt;br /&gt;
)::text&lt;br /&gt;
FROM public.cities&lt;br /&gt;
where active;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: the XML is converted to text because it was more convenient for me with the client I was using (DBeaver) to then copy and paste into a text file.&lt;br /&gt;
&lt;br /&gt;
Having saved the text file with an .xml extension, it was enough to go to the wiki at: &amp;quot;SpecialPages -&amp;gt; Page tools -&amp;gt; Import pages&amp;quot; and upload the XML from there.&lt;br /&gt;
&lt;br /&gt;
Done, hundreds of pages created quickly!&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=34</id>
		<title>Automating Wiki Page Creation from a PostgreSQL Table</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=34"/>
		<updated>2023-11-07T14:57:33Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:mediawiki]]&lt;br /&gt;
Managing large datasets can be daunting. A neat solution is to use MediaWiki to transform data from a PostgreSQL table into organized wiki pages. This guide walks you through an automated process to achieve this, saving time and ensuring accurate, user-friendly data representation. Ideal for database administrators and developers, this tutorial bridges the gap between raw data and accessible wiki pages, making information management a breeze.&lt;br /&gt;
&lt;br /&gt;
== Initial Data ==&lt;br /&gt;
* In my case, the table is called: public.cities&lt;br /&gt;
* The only field of interest to me is friendly_name&lt;br /&gt;
* The mediawiki template in my case is called city_t and has the city name as a parameter. The parameter is called city_name.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* The name of the page should not just be the city name, but should be composed similarly to: cityname_(city).&lt;br /&gt;
* The parameter passed to the template should instead be just the city name.&lt;br /&gt;
== Execution ==&lt;br /&gt;
Here is the query executed on PostgreSQL:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;&lt;br /&gt;
SELECT xmlelement(name mediawiki,&lt;br /&gt;
  xmlagg(&lt;br /&gt;
    xmlelement(name page,&lt;br /&gt;
      xmlelement(name title, friendly_name || &#039;_(city)&#039;),&lt;br /&gt;
      xmlelement(name revision,&lt;br /&gt;
        xmlelement(name text, &#039;{{city_t|city_name=&#039; || friendly_name || &#039;}}&#039;)&lt;br /&gt;
      )&lt;br /&gt;
    )&lt;br /&gt;
  )&lt;br /&gt;
)::text&lt;br /&gt;
FROM public.cities&lt;br /&gt;
where active;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: the XML is converted to text because it was more convenient for me with the client I was using (DBeaver) to then copy and paste into a text file.&lt;br /&gt;
&lt;br /&gt;
Having saved the text file with an .xml extension, it was enough to go to the wiki at: &amp;quot;SpecialPages -&amp;gt; Page tools -&amp;gt; Import pages&amp;quot; and upload the XML from there.&lt;br /&gt;
&lt;br /&gt;
Done, hundreds of pages created quickly!&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=33</id>
		<title>Automating Wiki Page Creation from a PostgreSQL Table</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=33"/>
		<updated>2023-11-03T15:54:19Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:mediawiki]]&lt;br /&gt;
Managing large datasets can be daunting. A neat solution is to use MediaWiki to transform data from a PostgreSQL table into organized wiki pages. This guide walks you through an automated process to achieve this, saving time and ensuring accurate, user-friendly data representation. Ideal for database administrators and developers, this tutorial bridges the gap between raw data and accessible wiki pages, making information management a breeze.&lt;br /&gt;
&lt;br /&gt;
== Initial Data ==&lt;br /&gt;
* In my case, the table is called: public.cities&lt;br /&gt;
* The only field of interest to me is friendly_name&lt;br /&gt;
* The mediawiki template in my case is called city_t and has the city name as a parameter. The parameter is called city_name.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* The name of the page should not just be the city name, but should be composed similarly to: cityname_(city).&lt;br /&gt;
* The parameter passed to the template should instead be just the city name.&lt;br /&gt;
== Execution ==&lt;br /&gt;
Here is the query executed on PostgreSQL:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;&lt;br /&gt;
SELECT xmlelement(name mediawiki,&lt;br /&gt;
  xmlagg(&lt;br /&gt;
    xmlelement(name page,&lt;br /&gt;
      xmlelement(name title, friendly_name || &#039;_(city)&#039;),&lt;br /&gt;
      xmlelement(name revision,&lt;br /&gt;
        xmlelement(name text, &#039;{{city_t|city_name=&#039; || friendly_name || &#039;}}&#039;)&lt;br /&gt;
      )&lt;br /&gt;
    )&lt;br /&gt;
  )&lt;br /&gt;
)::text&lt;br /&gt;
FROM public.cities&lt;br /&gt;
where active;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: the XML is converted to text because it was more convenient for me with the client I was using (DBeaver) to then copy and paste into a text file.&lt;br /&gt;
&lt;br /&gt;
Having saved the text file with an .xml extension, it was enough to go to the wiki at: &amp;quot;SpecialPages -&amp;gt; Page tools -&amp;gt; Import pages&amp;quot; and upload the XML from there.&lt;br /&gt;
&lt;br /&gt;
Done, hundreds of pages created quickly!&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Main_Page&amp;diff=32</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Main_Page&amp;diff=32"/>
		<updated>2023-11-03T15:51:46Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Added few articles:&lt;br /&gt;
* [[Automating Wiki Page Creation from a PostgreSQL Table]]&lt;br /&gt;
* [[Control-M job stuck]]&lt;br /&gt;
* [[Backup and restore huge partitioned database - SQL Server]]&lt;br /&gt;
* [[Install a certificate for a website under Apache on Linux]]&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=31</id>
		<title>Automating Wiki Page Creation from a PostgreSQL Table</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=31"/>
		<updated>2023-11-03T15:51:27Z</updated>

		<summary type="html">&lt;p&gt;Luca: Luca moved page Create numerous wiki pages from a Postgresql table to Automating Wiki Page Creation from a PostgreSQL Table without leaving a redirect: Titolo migliore&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:mediawiki]]&lt;br /&gt;
This is a rather specific requirement.&lt;br /&gt;
In my case, I found myself having a Postgresql table from which I needed to create a page on an internal mediawiki for each row.&lt;br /&gt;
For each row, I also had to insert a call to a template in the body of the page.&lt;br /&gt;
&lt;br /&gt;
== Initial Data ==&lt;br /&gt;
* In my case, the table is called: public.cities&lt;br /&gt;
* The only field of interest to me is friendly_name&lt;br /&gt;
* The mediawiki template in my case is called city_t and has the city name as a parameter. The parameter is called city_name.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* The name of the page should not just be the city name, but should be composed similarly to: cityname_(city).&lt;br /&gt;
* The parameter passed to the template should instead be just the city name.&lt;br /&gt;
== Execution ==&lt;br /&gt;
Here is the query executed on PostgreSQL:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;&lt;br /&gt;
SELECT xmlelement(name mediawiki,&lt;br /&gt;
  xmlagg(&lt;br /&gt;
    xmlelement(name page,&lt;br /&gt;
      xmlelement(name title, friendly_name || &#039;_(city)&#039;),&lt;br /&gt;
      xmlelement(name revision,&lt;br /&gt;
        xmlelement(name text, &#039;{{city_t|city_name=&#039; || friendly_name || &#039;}}&#039;)&lt;br /&gt;
      )&lt;br /&gt;
    )&lt;br /&gt;
  )&lt;br /&gt;
)::text&lt;br /&gt;
FROM public.cities&lt;br /&gt;
where active;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: the XML is converted to text because it was more convenient for me with the client I was using (DBeaver) to then copy and paste into a text file.&lt;br /&gt;
&lt;br /&gt;
Having saved the text file with an .xml extension, it was enough to go to the wiki at: &amp;quot;SpecialPages -&amp;gt; Page tools -&amp;gt; Import pages&amp;quot; and upload the XML from there.&lt;br /&gt;
&lt;br /&gt;
Done, hundreds of pages created quickly!&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Category:Mediawiki&amp;diff=30</id>
		<title>Category:Mediawiki</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Category:Mediawiki&amp;diff=30"/>
		<updated>2023-11-03T15:47:24Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;Articles relative at mediawiki itself.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Articles related to MediaWiki itself.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=29</id>
		<title>Automating Wiki Page Creation from a PostgreSQL Table</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Automating_Wiki_Page_Creation_from_a_PostgreSQL_Table&amp;diff=29"/>
		<updated>2023-11-03T15:47:09Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;Category:mediawiki This is a rather specific requirement. In my case, I found myself having a Postgresql table from which I needed to create a page on an internal mediawiki for each row. For each row, I also had to insert a call to a template in the body of the page.  == Initial Data == * In my case, the table is called: public.cities * The only field of interest to me is friendly_name * The mediawiki template in my case is called city_t and has the city name as a pa...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:mediawiki]]&lt;br /&gt;
This is a rather specific requirement.&lt;br /&gt;
In my case, I found myself having a Postgresql table from which I needed to create a page on an internal mediawiki for each row.&lt;br /&gt;
For each row, I also had to insert a call to a template in the body of the page.&lt;br /&gt;
&lt;br /&gt;
== Initial Data ==&lt;br /&gt;
* In my case, the table is called: public.cities&lt;br /&gt;
* The only field of interest to me is friendly_name&lt;br /&gt;
* The mediawiki template in my case is called city_t and has the city name as a parameter. The parameter is called city_name.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
* The name of the page should not just be the city name, but should be composed similarly to: cityname_(city).&lt;br /&gt;
* The parameter passed to the template should instead be just the city name.&lt;br /&gt;
== Execution ==&lt;br /&gt;
Here is the query executed on PostgreSQL:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=postgresql&amp;gt;&lt;br /&gt;
SELECT xmlelement(name mediawiki,&lt;br /&gt;
  xmlagg(&lt;br /&gt;
    xmlelement(name page,&lt;br /&gt;
      xmlelement(name title, friendly_name || &#039;_(city)&#039;),&lt;br /&gt;
      xmlelement(name revision,&lt;br /&gt;
        xmlelement(name text, &#039;{{city_t|city_name=&#039; || friendly_name || &#039;}}&#039;)&lt;br /&gt;
      )&lt;br /&gt;
    )&lt;br /&gt;
  )&lt;br /&gt;
)::text&lt;br /&gt;
FROM public.cities&lt;br /&gt;
where active;&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note: the XML is cast to text because it was more convenient, with the client I was using (DBeaver), to copy and paste the result into a text file.&lt;br /&gt;
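If you prefer to build the import file outside the database, the same document can be assembled from the query results in a script. Here is a minimal Python sketch; the hard-coded city names stand in for rows fetched from public.cities, and it mirrors the minimal page structure the query above emits:

```python
from xml.etree import ElementTree as ET

def build_import_xml(friendly_names):
    # One page element per row: title gets "name_(city)",
    # revision/text gets the template call.
    root = ET.Element("mediawiki")
    for name in friendly_names:
        page = ET.SubElement(root, "page")
        ET.SubElement(page, "title").text = name + "_(city)"
        revision = ET.SubElement(page, "revision")
        ET.SubElement(revision, "text").text = "{{city_t|city_name=" + name + "}}"
    return ET.tostring(root, encoding="unicode")

# Two sample cities produce two page elements.
xml = build_import_xml(["Springfield", "Shelbyville"])
```

A real MediaWiki export carries more metadata than this skeleton, but the import page also accepted the minimal form produced by the SQL above.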
&lt;br /&gt;
Having saved the text file with an .xml extension, it was enough to go to the wiki at &amp;quot;Special pages -&amp;gt; Page tools -&amp;gt; Import pages&amp;quot; and upload the XML from there.&lt;br /&gt;
&lt;br /&gt;
Done, hundreds of pages created quickly!&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Test&amp;diff=28</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Test&amp;diff=28"/>
		<updated>2023-10-18T14:36:13Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is just a test page for testing.&lt;br /&gt;
== Autosection ==&lt;br /&gt;
This section is updated by a bot.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Test&amp;diff=27</id>
		<title>Test</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Test&amp;diff=27"/>
		<updated>2023-10-18T14:35:49Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;This is just a test page for testing.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is just a test page for testing.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=18</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=18"/>
		<updated>2023-02-27T16:31:21Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups with old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. You are wasting money.&lt;br /&gt;
&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one for each filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We need to put data in these files. So we need to prepare a partition function. See [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql?view=sql-server-ver16 Official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION FUNCTION PFStuff (datetime)&lt;br /&gt;
AS RANGE RIGHT FOR VALUES (&#039;20200101&#039;, &#039;20210101&#039;, &#039;20220101&#039;, &#039;20230101&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
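With RANGE RIGHT, each boundary value belongs to the partition on its right: partition 1 holds everything before '20200101', and partition 5 everything from '20230101' onward. The mapping can be sketched in Python (just an illustration of the boundary logic, not SQL Server code):

```python
from bisect import bisect_right
from datetime import datetime

# Boundaries of PFStuff; with RANGE RIGHT a value equal to a
# boundary falls into the partition that starts at that boundary.
BOUNDARIES = [datetime(2020, 1, 1), datetime(2021, 1, 1),
              datetime(2022, 1, 1), datetime(2023, 1, 1)]

def partition_number(event_time):
    # Partitions are 1-based: count boundaries at or below
    # event_time, then add one.
    return bisect_right(BOUNDARIES, event_time) + 1
```

So '20180501 12:30' maps to partition 1 (FGVeryOldStuff), while '20200101' exactly maps to partition 2 (FG_2020_Stuff).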
&lt;br /&gt;
Now we need to map this function to a partition scheme. Also see [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-scheme-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION SCHEME PSStuff  &lt;br /&gt;
AS PARTITION PFStuff TO (FGVeryOldStuff, FG_2020_Stuff, FG_2021_Stuff, FG_2022_Stuff, FG_2023_Stuff);  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see scheme and function in SSMS:&lt;br /&gt;
[[image:Partition_scheme_and_function_sql_server_ssms.png|none]]&lt;br /&gt;
&lt;br /&gt;
Now we create a simple table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE dbo.big_one&lt;br /&gt;
(&lt;br /&gt;
	id int NOT NULL IDENTITY (1, 1),&lt;br /&gt;
	event_time datetime NOT NULL,&lt;br /&gt;
	some_text nchar(2000) NULL&lt;br /&gt;
)&lt;br /&gt;
ON PSStuff(event_time);&lt;br /&gt;
ALTER TABLE dbo.big_one&lt;br /&gt;
   ADD CONSTRAINT PK_big_one PRIMARY KEY CLUSTERED (id, event_time);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is not nice; look at the primary key. Not elegant, but not a problem for a demo. It is important to notice this line:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;ON PSStuff(event_time);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It means that the table will rely on the partition scheme &#039;&#039;&#039;PSStuff&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Populate table &#039;&#039;&#039;big_one&#039;&#039;&#039; with some data, so that there are a few rows in every partition. This is not strictly necessary; we just want to see our data restored!&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 12:30&#039;, &#039;Good lunch with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 20:30&#039;, &#039;Good dinner with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200401 06:00&#039;, &#039;Epic prank to Dimitri&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200402 20:30&#039;, &#039;Dinner offered to Dimitri to apologize&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20211231 23:50&#039;, &#039;This year was boring&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20220915 16:00&#039;, &#039;Had a tea with boss&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230101 13:00&#039;, &#039;Lunch with family. Really???&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230701 00:11&#039;, &#039;Reading future&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We also create a table on the PRIMARY filegroup, just to have some data in it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE TABLE dbo.primary_table&lt;br /&gt;
	(&lt;br /&gt;
	id int PRIMARY KEY IDENTITY (1, 1),&lt;br /&gt;
	some_text nvarchar(50) NULL&lt;br /&gt;
	)&lt;br /&gt;
ON [PRIMARY];&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And we populate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Some random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Another random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Klaatu barada nikto&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now suppose we are in the year 2023, which is true at the moment I write this page, and we know that data before 2023 will never be modified.&lt;br /&gt;
&amp;lt;br&amp;gt;We can freeze that data by setting the relevant filegroups to READ_ONLY:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FGVeryOldStuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2020_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2021_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2022_Stuff READ_ONLY;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If an application tries to write to a frozen filegroup, like this query:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180201 12:30&#039;, &#039;Lunch with coworker&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It gets an error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Msg 652, Level 16, State 1, Line 2&lt;br /&gt;
The index &amp;quot;PK_big_one&amp;quot; for table &amp;quot;dbo.big_one&amp;quot; (RowsetId 72057594041597952) resides on a read-only filegroup (&amp;quot;FGVeryOldStuff&amp;quot;), which cannot be modified.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And this is good, because we want to keep our data consistent.&lt;br /&gt;
&amp;lt;br&amp;gt;But anyone can still write to fresh data. Like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230201 12:30&#039;, &#039;Lunch at &amp;quot;La quercia&amp;quot; restaurant&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And there is no problem: (1 row affected).&lt;br /&gt;
&lt;br /&gt;
Now the backup.&lt;br /&gt;
First we take a backup of the read-only filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FGVeryOldStuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2020_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2021_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2022_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
OK, now a full backup.&lt;br /&gt;
&amp;lt;br&amp;gt;But does it really need to be full? No, because we already have backups of the frozen filegroups!&lt;br /&gt;
&amp;lt;br&amp;gt;The key in the following script is &#039;&#039;&#039;READ_WRITE_FILEGROUPS&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] READ_WRITE_FILEGROUPS&lt;br /&gt;
TO DISK = &#039;B:\BACKUP\HistoryDemo_partial.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look at the output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Processed 432 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo&#039; on file 1.&lt;br /&gt;
Processed 24 pages for database &#039;HistoryDemo&#039;, file &#039;2023_Stuff&#039; on file 1.&lt;br /&gt;
Processed 2 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo_log&#039; on file 1.&lt;br /&gt;
BACKUP DATABASE...FILE=&amp;lt;name&amp;gt; successfully processed 458 pages in 0.031 seconds (115.218 MB/sec).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It skipped all the read-only filegroups!&lt;br /&gt;
&lt;br /&gt;
Now the restore. For this example we restore to a different server (which is how it was tested), to simulate the loss of our source server. So the paths are different.&lt;br /&gt;
&amp;lt;br&amp;gt;First we restore the backup of the writable filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo]&lt;br /&gt;
FROM DISK = N&#039;B:\BACKUP\HistoryDemo_partial.bck&#039; WITH  PARTIAL, RECOVERY,&lt;br /&gt;
MOVE N&#039;HistoryDemo&#039; TO N&#039;K:\Data\HistoryDemo.mdf&#039;,&lt;br /&gt;
MOVE N&#039;HistoryDemo_log&#039; TO N&#039;L:\Log\HistoryDemo.ldf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then we restore the frozen filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;OldStuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;OldStuff&#039; TO N&#039;K:\Data\OldStuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2020_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2020_Stuff&#039; TO N&#039;K:\Data\2020_Stuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2021_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2021_Stuff&#039; TO N&#039;K:\Data\2021_Stuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2022_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2022_Stuff&#039; TO N&#039;K:\Data\2022_Stuff.ndf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And then we restore the 2023 stuff:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo]&lt;br /&gt;
FILEGROUP = &#039;FG_2023_Stuff&#039;&lt;br /&gt;
FROM DISK = N&#039;B:\BACKUP\HistoryDemo_partial.bck&#039;&lt;br /&gt;
WITH RECOVERY,&lt;br /&gt;
MOVE N&#039;2023_Stuff&#039; TO N&#039;K:\Data\2023_Stuff.ndf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Done! It works!&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Main_Page&amp;diff=17</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Main_Page&amp;diff=17"/>
		<updated>2023-02-27T16:16:04Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Added a couple of articles:&lt;br /&gt;
* [[Control-M job stuck]]&lt;br /&gt;
* [[Backup and restore huge partitioned database - SQL Server]]&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=16</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=16"/>
		<updated>2023-02-27T16:13:43Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups with old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. You are wasting money.&lt;br /&gt;
&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one for each filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We need to put data in these files. So we need to prepare a partition function. See [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql?view=sql-server-ver16 Official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION FUNCTION PFStuff (datetime)&lt;br /&gt;
AS RANGE RIGHT FOR VALUES (&#039;20200101&#039;, &#039;20210101&#039;, &#039;20220101&#039;, &#039;20230101&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to map this function to a partition scheme. Also see [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-scheme-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION SCHEME PSStuff  &lt;br /&gt;
AS PARTITION PFStuff TO (FGVeryOldStuff, FG_2020_Stuff, FG_2021_Stuff, FG_2022_Stuff, FG_2023_Stuff);  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see scheme and function in SSMS:&lt;br /&gt;
[[image:Partition_scheme_and_function_sql_server_ssms.png|none]]&lt;br /&gt;
&lt;br /&gt;
Now we create a simple table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE dbo.big_one&lt;br /&gt;
(&lt;br /&gt;
	id int NOT NULL IDENTITY (1, 1),&lt;br /&gt;
	event_time datetime NOT NULL,&lt;br /&gt;
	some_text nchar(2000) NULL&lt;br /&gt;
)&lt;br /&gt;
ON PSStuff(event_time);&lt;br /&gt;
ALTER TABLE dbo.big_one&lt;br /&gt;
   ADD CONSTRAINT PK_big_one PRIMARY KEY CLUSTERED (id, event_time);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is not nice; look at the primary key. Not elegant, but not a problem for a demo. It is important to notice this line:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;ON PSStuff(event_time);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It means that the table will rely on the partition scheme &#039;&#039;&#039;PSStuff&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Populate table &#039;&#039;&#039;big_one&#039;&#039;&#039; with some data, so that there are a few rows in every partition. This is not strictly necessary; we just want to see our data restored!&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 12:30&#039;, &#039;Good lunch with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 20:30&#039;, &#039;Good dinner with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200401 06:00&#039;, &#039;Epic prank to Dimitri&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200402 20:30&#039;, &#039;Dinner offered to Dimitri to apologize&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20211231 23:50&#039;, &#039;This year was boring&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20220915 16:00&#039;, &#039;Had a tea with boss&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230101 13:00&#039;, &#039;Lunch with family. Really???&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230701 00:11&#039;, &#039;Reading future&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We also create a table on the PRIMARY filegroup, just to have some data in it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE TABLE dbo.primary_table&lt;br /&gt;
	(&lt;br /&gt;
	id int PRIMARY KEY IDENTITY (1, 1),&lt;br /&gt;
	some_text nvarchar(50) NULL&lt;br /&gt;
	)&lt;br /&gt;
ON [PRIMARY];&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And we populate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Some random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Another random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Klaatu barada nikto&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now suppose we are in the year 2023, which is true at the moment I write this page, and we know that data before 2023 will never be modified.&lt;br /&gt;
&amp;lt;br&amp;gt;We can freeze that data by setting the relevant filegroups to READ_ONLY:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FGVeryOldStuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2020_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2021_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2022_Stuff READ_ONLY;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If an application tries to write to a frozen filegroup, like this query:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180201 12:30&#039;, &#039;Lunch with coworker&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It gets an error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Msg 652, Level 16, State 1, Line 2&lt;br /&gt;
The index &amp;quot;PK_big_one&amp;quot; for table &amp;quot;dbo.big_one&amp;quot; (RowsetId 72057594041597952) resides on a read-only filegroup (&amp;quot;FGVeryOldStuff&amp;quot;), which cannot be modified.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And this is good, because we want to keep our data consistent.&lt;br /&gt;
&amp;lt;br&amp;gt;But anyone can still write to fresh data. Like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230201 12:30&#039;, &#039;Lunch at &amp;quot;La quercia&amp;quot; restaurant&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And there is no problem: (1 row affected).&lt;br /&gt;
&lt;br /&gt;
Now the backup.&lt;br /&gt;
First we take a backup of the read-only filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FGVeryOldStuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2020_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2021_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2022_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
OK, now a full backup.&lt;br /&gt;
&amp;lt;br&amp;gt;But does it really need to be full? No, because we already have backups of the frozen filegroups!&lt;br /&gt;
&amp;lt;br&amp;gt;The key in the following script is &#039;&#039;&#039;READ_WRITE_FILEGROUPS&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] READ_WRITE_FILEGROUPS&lt;br /&gt;
TO DISK = &#039;B:\BACKUP\HistoryDemo_partial.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look at the output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Processed 432 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo&#039; on file 1.&lt;br /&gt;
Processed 24 pages for database &#039;HistoryDemo&#039;, file &#039;2023_Stuff&#039; on file 1.&lt;br /&gt;
Processed 2 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo_log&#039; on file 1.&lt;br /&gt;
BACKUP DATABASE...FILE=&amp;lt;name&amp;gt; successfully processed 458 pages in 0.031 seconds (115.218 MB/sec).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It skipped all the read-only filegroups!&lt;br /&gt;
&lt;br /&gt;
Now the restore. For this example we restore to a different server (which is how it was tested), to simulate the loss of our source server. So the paths are different.&lt;br /&gt;
&amp;lt;br&amp;gt;First we restore the backup of the writable filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo]&lt;br /&gt;
FROM DISK = N&#039;B:\BACKUP\HistoryDemo_partial.bck&#039; WITH  PARTIAL, RECOVERY,&lt;br /&gt;
MOVE N&#039;HistoryDemo&#039; TO N&#039;K:\Data\HistoryDemo.mdf&#039;,&lt;br /&gt;
MOVE N&#039;HistoryDemo_log&#039; TO N&#039;L:\Log\HistoryDemo.ldf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then we restore the frozen filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;OldStuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;OldStuff&#039; TO N&#039;K:\Data\OldStuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2020_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2020_Stuff&#039; TO N&#039;K:\Data\2020_Stuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2021_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2021_Stuff&#039; TO N&#039;K:\Data\2021_Stuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2022_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2022_Stuff&#039; TO N&#039;K:\Data\2022_Stuff.ndf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And then we restore the 2023 stuff:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo]&lt;br /&gt;
FILEGROUP = &#039;FG_2023_Stuff&#039;&lt;br /&gt;
FROM DISK = N&#039;B:\BACKUP\HistoryDemo_partial.bck&#039;&lt;br /&gt;
WITH RECOVERY,&lt;br /&gt;
MOVE N&#039;2023_Stuff&#039; TO N&#039;K:\Data\2023_Stuff.ndf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=15</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=15"/>
		<updated>2023-02-27T16:07:39Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups with old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. You are wasting money.&lt;br /&gt;
&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one per filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To put data in these files, we first need to prepare a partition function. See the [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION FUNCTION PFStuff (datetime)&lt;br /&gt;
AS RANGE RIGHT FOR VALUES (&#039;20200101&#039;, &#039;20210101&#039;, &#039;20220101&#039;, &#039;20230101&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to map this function to a partition scheme. Also see [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-scheme-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION SCHEME PSStuff  &lt;br /&gt;
AS PARTITION PFStuff TO (FGVeryOldStuff, FG_2020_Stuff, FG_2021_Stuff, FG_2022_Stuff, FG_2023_Stuff);  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see scheme and function in SSMS:&lt;br /&gt;
[[image:Partition_scheme_and_function_sql_server_ssms.png|none]]&lt;br /&gt;
&lt;br /&gt;
Now we create a simple table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE dbo.big_one&lt;br /&gt;
(&lt;br /&gt;
	id int NOT NULL IDENTITY (1, 1),&lt;br /&gt;
	event_time datetime NOT NULL,&lt;br /&gt;
	some_text nchar(2000) NULL&lt;br /&gt;
)&lt;br /&gt;
ON PSStuff(event_time);&lt;br /&gt;
ALTER TABLE dbo.big_one&lt;br /&gt;
   ADD CONSTRAINT PK_big_one PRIMARY KEY CLUSTERED (id, event_time);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is not nice: look at the primary key. It is not elegant, but that is not a problem for a demo. The important line to notice is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;ON PSStuff(event_time);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It means that the table will reside on partition scheme &#039;&#039;&#039;PSStuff&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Populate table &#039;&#039;&#039;big_one&#039;&#039;&#039; with some data, so that every partition holds a few rows. This is not strictly necessary; we just want to see our data restored!&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 12:30&#039;, &#039;Good lunch with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 20:30&#039;, &#039;Good dinner with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200401 06:00&#039;, &#039;Epic prank to Dimitri&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200402 20:30&#039;, &#039;Dinner offered to Dimitri to apologize&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20211231 23:50&#039;, &#039;This year was boring&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20220915 16:00&#039;, &#039;Had a tea with boss&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230101 13:00&#039;, &#039;Lunch with family. Really???&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230701 00:11&#039;, &#039;Reading future&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
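&lt;br /&gt;
As a quick sanity check, we can ask SQL Server in which partition each row landed, using the built-in &#039;&#039;&#039;$PARTITION&#039;&#039;&#039; function together with our partition function &#039;&#039;&#039;PFStuff&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
-- Map every row of big_one to its partition number&lt;br /&gt;
SELECT $PARTITION.PFStuff(event_time) AS partition_number, event_time, some_text&lt;br /&gt;
FROM dbo.big_one&lt;br /&gt;
ORDER BY partition_number;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The 2018 rows should fall in partition 1 (FGVeryOldStuff), the 2020 rows in partition 2, and so on.&lt;br /&gt;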
&lt;br /&gt;
We also create a table on the PRIMARY filegroup, just to have some data in it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE TABLE dbo.primary_table&lt;br /&gt;
	(&lt;br /&gt;
	id int PRIMARY KEY IDENTITY (1, 1),&lt;br /&gt;
	some_text nvarchar(50) NULL&lt;br /&gt;
	)&lt;br /&gt;
ON [PRIMARY];&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And we populate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Some random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Another random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Klaatu barada nikto&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now suppose it is the year 2023, which is true at the moment I am writing this page, and we know that data from before 2023 will never be modified.&lt;br /&gt;
&amp;lt;br&amp;gt;We can freeze that data by setting the relevant filegroups to READ_ONLY:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FGVeryOldStuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2020_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2021_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2022_Stuff READ_ONLY;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
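&lt;br /&gt;
To double-check which filegroups are now frozen, we can query the &#039;&#039;&#039;sys.filegroups&#039;&#039;&#039; catalog view from inside the database:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
-- is_read_only = 1 for the frozen filegroups&lt;br /&gt;
SELECT name, is_read_only FROM sys.filegroups;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;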
&lt;br /&gt;
If an application tries to write to a frozen filegroup, with a query like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180201 12:30&#039;, &#039;Lunch with coworker&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It gets an error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Msg 652, Level 16, State 1, Line 2&lt;br /&gt;
The index &amp;quot;PK_big_one&amp;quot; for table &amp;quot;dbo.big_one&amp;quot; (RowsetId 72057594041597952) resides on a read-only filegroup (&amp;quot;FGVeryOldStuff&amp;quot;), which cannot be modified.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And this is good, because we want to keep our data consistent.&lt;br /&gt;
&amp;lt;br&amp;gt;But anyone can still write to fresh data, like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230201 12:30&#039;, &#039;Lunch at &amp;quot;La quercia&amp;quot; restaurant&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And it works with no problem: (1 row affected).&lt;br /&gt;
&lt;br /&gt;
Now for the backup.&lt;br /&gt;
First we back up the read-only filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FGVeryOldStuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2020_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2021_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2022_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
OK, now a full backup.&lt;br /&gt;
&amp;lt;br&amp;gt;But does it really need to be full? No, because we already have backups of the frozen filegroups!&lt;br /&gt;
&amp;lt;br&amp;gt;The key in the following script is &#039;&#039;&#039;READ_WRITE_FILEGROUPS&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] READ_WRITE_FILEGROUPS&lt;br /&gt;
TO DISK = &#039;B:\BACKUP\HistoryDemo_partial.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look at the output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Processed 432 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo&#039; on file 1.&lt;br /&gt;
Processed 24 pages for database &#039;HistoryDemo&#039;, file &#039;2023_Stuff&#039; on file 1.&lt;br /&gt;
Processed 2 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo_log&#039; on file 1.&lt;br /&gt;
BACKUP DATABASE...FILE=&amp;lt;name&amp;gt; successfully processed 458 pages in 0.031 seconds (115.218 MB/sec).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It skipped all the read-only filegroups!&lt;br /&gt;
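&lt;br /&gt;
Before running the restore, it can be useful to list the logical file names contained in a backup set; these are the names used by the MOVE clauses. &#039;&#039;&#039;RESTORE FILELISTONLY&#039;&#039;&#039; does exactly that:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
-- Lists the logical and physical file names stored in the backup&lt;br /&gt;
RESTORE FILELISTONLY FROM DISK = N&#039;B:\BACKUP\HistoryDemo_partial.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;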
&lt;br /&gt;
Now the restore. For this example we assume (and we actually tested it) that we move to a different server, to simulate the loss of our source server. That is why the paths are different.&lt;br /&gt;
&amp;lt;br&amp;gt;First we restore the backup of the writable filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo]&lt;br /&gt;
FROM DISK = N&#039;B:\BACKUP\HistoryDemo_partial.bck&#039; WITH  PARTIAL, RECOVERY,&lt;br /&gt;
MOVE N&#039;HistoryDemo&#039; TO N&#039;K:\Data\HistoryDemo.mdf&#039;,&lt;br /&gt;
MOVE N&#039;2023_Stuff&#039; TO N&#039;K:\Data\2023_Stuff.ndf&#039;,&lt;br /&gt;
MOVE N&#039;HistoryDemo_log&#039; TO N&#039;L:\Log\HistoryDemo.ldf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
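&lt;br /&gt;
After this partial restore the database is already online, but the frozen files are not there yet. We can check the state of every file; the ones still waiting for their filegroup backups should show up as RECOVERY_PENDING:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
SELECT name, state_desc&lt;br /&gt;
FROM sys.master_files&lt;br /&gt;
WHERE database_id = DB_ID(N&#039;HistoryDemo&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;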
&lt;br /&gt;
Then we restore the frozen filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;OldStuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;OldStuff&#039; TO N&#039;K:\Data\OldStuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2020_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2020_Stuff&#039; TO N&#039;K:\Data\2020_Stuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2021_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2021_Stuff&#039; TO N&#039;K:\Data\2021_Stuff.ndf&#039;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo] FILE = N&#039;2022_Stuff&#039; FROM DISK = N&#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;&lt;br /&gt;
WITH FILE  = 1, MOVE N&#039;2022_Stuff&#039; TO N&#039;K:\Data\2022_Stuff.ndf&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And then we restore the 2023 stuff:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
RESTORE DATABASE [HistoryDemo]&lt;br /&gt;
FILEGROUP = &#039;FG_2023_Stuff&#039;&lt;br /&gt;
FROM DISK = N&#039;B:\BACKUP\HistoryDemo_partial.bck&#039;&lt;br /&gt;
WITH RECOVERY&lt;br /&gt;
GO&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=14</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=14"/>
		<updated>2023-02-27T15:13:25Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups holding old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. In other words, you are wasting money.&lt;br /&gt;
&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one per filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To put data in these files, we first need to prepare a partition function. See the [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION FUNCTION PFStuff (datetime)&lt;br /&gt;
AS RANGE RIGHT FOR VALUES (&#039;20200101&#039;, &#039;20210101&#039;, &#039;20220101&#039;, &#039;20230101&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to map this function to a partition scheme. Also see [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-scheme-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION SCHEME PSStuff  &lt;br /&gt;
AS PARTITION PFStuff TO (FGVeryOldStuff, FG_2020_Stuff, FG_2021_Stuff, FG_2022_Stuff, FG_2023_Stuff);  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see scheme and function in SSMS:&lt;br /&gt;
[[image:Partition_scheme_and_function_sql_server_ssms.png|none]]&lt;br /&gt;
&lt;br /&gt;
Now we create a simple table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE dbo.big_one&lt;br /&gt;
(&lt;br /&gt;
	id int NOT NULL IDENTITY (1, 1),&lt;br /&gt;
	event_time datetime NOT NULL,&lt;br /&gt;
	some_text nchar(2000) NULL&lt;br /&gt;
)&lt;br /&gt;
ON PSStuff(event_time);&lt;br /&gt;
ALTER TABLE dbo.big_one&lt;br /&gt;
   ADD CONSTRAINT PK_big_one PRIMARY KEY CLUSTERED (id, event_time);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is not nice: look at the primary key. It is not elegant, but that is not a problem for a demo. The important line to notice is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;ON PSStuff(event_time);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It means that the table will reside on partition scheme &#039;&#039;&#039;PSStuff&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Populate table &#039;&#039;&#039;big_one&#039;&#039;&#039; with some data, so that every partition holds a few rows. This is not strictly necessary; we just want to see our data restored!&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 12:30&#039;, &#039;Good lunch with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 20:30&#039;, &#039;Good dinner with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200401 06:00&#039;, &#039;Epic prank to Dimitri&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200402 20:30&#039;, &#039;Dinner offered to Dimitri to apologize&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20211231 23:50&#039;, &#039;This year was boring&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20220915 16:00&#039;, &#039;Had a tea with boss&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230101 13:00&#039;, &#039;Lunch with family. Really???&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230701 00:11&#039;, &#039;Reading future&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
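&lt;br /&gt;
As a quick sanity check, we can ask SQL Server in which partition each row landed, using the built-in &#039;&#039;&#039;$PARTITION&#039;&#039;&#039; function together with our partition function &#039;&#039;&#039;PFStuff&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
-- Map every row of big_one to its partition number&lt;br /&gt;
SELECT $PARTITION.PFStuff(event_time) AS partition_number, event_time, some_text&lt;br /&gt;
FROM dbo.big_one&lt;br /&gt;
ORDER BY partition_number;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The 2018 rows should fall in partition 1 (FGVeryOldStuff), the 2020 rows in partition 2, and so on.&lt;br /&gt;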
&lt;br /&gt;
We also create a table on the PRIMARY filegroup, just to have some data in it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE TABLE dbo.primary_table&lt;br /&gt;
	(&lt;br /&gt;
	id int PRIMARY KEY IDENTITY (1, 1),&lt;br /&gt;
	some_text nvarchar(50) NULL&lt;br /&gt;
	)&lt;br /&gt;
ON [PRIMARY];&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And we populate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Some random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Another random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Klaatu barada nikto&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now suppose it is the year 2023, which is true at the moment I am writing this page, and we know that data from before 2023 will never be modified.&lt;br /&gt;
&amp;lt;br&amp;gt;We can freeze that data by setting the relevant filegroups to READ_ONLY:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FGVeryOldStuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2020_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2021_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2022_Stuff READ_ONLY;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
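&lt;br /&gt;
To double-check which filegroups are now frozen, we can query the &#039;&#039;&#039;sys.filegroups&#039;&#039;&#039; catalog view from inside the database:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
-- is_read_only = 1 for the frozen filegroups&lt;br /&gt;
SELECT name, is_read_only FROM sys.filegroups;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;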
&lt;br /&gt;
If an application tries to write to a frozen filegroup, with a query like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180201 12:30&#039;, &#039;Lunch with coworker&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It gets an error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Msg 652, Level 16, State 1, Line 2&lt;br /&gt;
The index &amp;quot;PK_big_one&amp;quot; for table &amp;quot;dbo.big_one&amp;quot; (RowsetId 72057594041597952) resides on a read-only filegroup (&amp;quot;FGVeryOldStuff&amp;quot;), which cannot be modified.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And this is good, because we want to keep our data consistent.&lt;br /&gt;
&amp;lt;br&amp;gt;But anyone can still write to fresh data, like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230201 12:30&#039;, &#039;Lunch at &amp;quot;La quercia&amp;quot; restaurant&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And it works with no problem: (1 row affected).&lt;br /&gt;
&lt;br /&gt;
Now for the backup.&lt;br /&gt;
First we back up the read-only filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FGVeryOldStuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2020_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2021_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2022_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
OK, now a full backup.&lt;br /&gt;
&amp;lt;br&amp;gt;But does it really need to be full? No, because we already have backups of the frozen filegroups!&lt;br /&gt;
&amp;lt;br&amp;gt;The key in the following script is &#039;&#039;&#039;READ_WRITE_FILEGROUPS&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] READ_WRITE_FILEGROUPS&lt;br /&gt;
TO DISK = &#039;B:\BACKUP\HistoryDemo_partial.bck&#039;;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Look at the output:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Processed 432 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo&#039; on file 1.&lt;br /&gt;
Processed 24 pages for database &#039;HistoryDemo&#039;, file &#039;2023_Stuff&#039; on file 1.&lt;br /&gt;
Processed 2 pages for database &#039;HistoryDemo&#039;, file &#039;HistoryDemo_log&#039; on file 1.&lt;br /&gt;
BACKUP DATABASE...FILE=&amp;lt;name&amp;gt; successfully processed 458 pages in 0.031 seconds (115.218 MB/sec).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It skipped all the read-only filegroups!&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=13</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=13"/>
		<updated>2023-02-27T15:09:28Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups holding old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. In other words, you are wasting money.&lt;br /&gt;
&lt;br /&gt;
== Demo database creation ==&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one per filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To put data in these files, we first need to prepare a partition function. See the [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION FUNCTION PFStuff (datetime)&lt;br /&gt;
AS RANGE RIGHT FOR VALUES (&#039;20200101&#039;, &#039;20210101&#039;, &#039;20220101&#039;, &#039;20230101&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to map this function to a partition scheme. Also see [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-scheme-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION SCHEME PSStuff  &lt;br /&gt;
AS PARTITION PFStuff TO (FGVeryOldStuff, FG_2020_Stuff, FG_2021_Stuff, FG_2022_Stuff, FG_2023_Stuff);  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see scheme and function in SSMS:&lt;br /&gt;
[[image:Partition_scheme_and_function_sql_server_ssms.png|none]]&lt;br /&gt;
&lt;br /&gt;
Now we create a simple table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE dbo.big_one&lt;br /&gt;
(&lt;br /&gt;
	id int NOT NULL IDENTITY (1, 1),&lt;br /&gt;
	event_time datetime NOT NULL,&lt;br /&gt;
	some_text nchar(2000) NULL&lt;br /&gt;
)&lt;br /&gt;
ON PSStuff(event_time);&lt;br /&gt;
ALTER TABLE dbo.big_one&lt;br /&gt;
   ADD CONSTRAINT PK_big_one PRIMARY KEY CLUSTERED (id, event_time);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is not nice: look at the primary key. It is not elegant, but that is not a problem for a demo. The important line to notice is:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;ON PSStuff(event_time);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It means that the table will reside on partition scheme &#039;&#039;&#039;PSStuff&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Populate table &#039;&#039;&#039;big_one&#039;&#039;&#039; with some data, so that every partition holds a few rows. This is not strictly necessary; we just want to see our data restored!&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 12:30&#039;, &#039;Good lunch with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 20:30&#039;, &#039;Good dinner with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200401 06:00&#039;, &#039;Epic prank to Dimitri&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200402 20:30&#039;, &#039;Dinner offered to Dimitri to apologize&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20211231 23:50&#039;, &#039;This year was boring&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20220915 16:00&#039;, &#039;Had a tea with boss&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230101 13:00&#039;, &#039;Lunch with family. Really???&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230701 00:11&#039;, &#039;Reading future&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
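&lt;br /&gt;
As a quick sanity check, we can ask SQL Server in which partition each row landed, using the built-in &#039;&#039;&#039;$PARTITION&#039;&#039;&#039; function together with our partition function &#039;&#039;&#039;PFStuff&#039;&#039;&#039;:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
-- Map every row of big_one to its partition number&lt;br /&gt;
SELECT $PARTITION.PFStuff(event_time) AS partition_number, event_time, some_text&lt;br /&gt;
FROM dbo.big_one&lt;br /&gt;
ORDER BY partition_number;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The 2018 rows should fall in partition 1 (FGVeryOldStuff), the 2020 rows in partition 2, and so on.&lt;br /&gt;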
&lt;br /&gt;
We also create a table on the PRIMARY filegroup, just to have some data in it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE TABLE dbo.primary_table&lt;br /&gt;
	(&lt;br /&gt;
	id int PRIMARY KEY IDENTITY (1, 1),&lt;br /&gt;
	some_text nvarchar(50) NULL&lt;br /&gt;
	)&lt;br /&gt;
ON [PRIMARY];&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And we populate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Some random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Another random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Klaatu barada nikto&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now suppose we are in the year 2023, which was true at the moment I wrote this page, and we know that data from before 2023 will never be modified.&lt;br /&gt;
&amp;lt;br&amp;gt;We can freeze that data by setting the correct filegroups to READ_ONLY:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE master;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FGVeryOldStuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2020_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2021_Stuff READ_ONLY;&lt;br /&gt;
ALTER DATABASE HistoryDemo MODIFY FILEGROUP FG_2022_Stuff READ_ONLY;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
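&lt;br /&gt;
We can verify the state of the filegroups with the &#039;&#039;&#039;sys.filegroups&#039;&#039;&#039; catalog view (a quick sanity check):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
SELECT name, is_read_only FROM sys.filegroups;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;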
&lt;br /&gt;
If an application tries to write to a frozen filegroup, for example with this query:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180201 12:30&#039;, &#039;Lunch with coworker&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It gets an error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Msg 652, Level 16, State 1, Line 2&lt;br /&gt;
The index &amp;quot;PK_big_one&amp;quot; for table &amp;quot;dbo.big_one&amp;quot; (RowsetId 72057594041597952) resides on a read-only filegroup (&amp;quot;FGVeryOldStuff&amp;quot;), which cannot be modified.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And this is good, because we want to keep our data consistent.&lt;br /&gt;
&amp;lt;br&amp;gt;But anyone can still write to fresh data, like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE HistoryDemo;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230201 12:30&#039;, &#039;Lunch at &amp;quot;La quercia&amp;quot; restaurant&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
And there is no problem: (1 row affected).&lt;br /&gt;
&lt;br /&gt;
Now, the backup.&lt;br /&gt;
First we back up the read-only filegroups:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FGVeryOldStuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FGVeryOldStuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2020_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2020_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2021_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2021_Stuff.bck&#039;;&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] FILEGROUP = &#039;FG_2022_Stuff&#039; TO DISK = &#039;B:\BACKUP\HistoryDemo_FG_2022_Stuff.bck&#039;;&lt;br /&gt;
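-- From here on, routine backups only need the read/write filegroups.&lt;br /&gt;
-- A partial backup does exactly that (the target path below is just an example):&lt;br /&gt;
BACKUP DATABASE [HistoryDemo] READ_WRITE_FILEGROUPS TO DISK = &#039;B:\BACKUP\HistoryDemo_RW.bck&#039;;&lt;br /&gt;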
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=12</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=12"/>
		<updated>2023-02-27T14:58:49Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups with old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. You are wasting money.&lt;br /&gt;
&lt;br /&gt;
== Demo database creation ==&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one for each filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We need to put data in these files. So we need to prepare a partition function. See [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql?view=sql-server-ver16 Official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION FUNCTION PFStuff (datetime)&lt;br /&gt;
AS RANGE RIGHT FOR VALUES (&#039;20200101&#039;, &#039;20210101&#039;, &#039;20220101&#039;, &#039;20230101&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to map this function to a partition scheme. Also see [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-scheme-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION SCHEME PSStuff  &lt;br /&gt;
AS PARTITION PFStuff TO (FGVeryOldStuff, FG_2020_Stuff, FG_2021_Stuff, FG_2022_Stuff, FG_2023_Stuff);  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see the scheme and the function in SSMS:&lt;br /&gt;
[[image:Partition_scheme_and_function_sql_server_ssms.png|none]]&lt;br /&gt;
&lt;br /&gt;
Now we create a simple table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE dbo.big_one&lt;br /&gt;
(&lt;br /&gt;
	id int NOT NULL IDENTITY (1, 1),&lt;br /&gt;
	event_time datetime NOT NULL,&lt;br /&gt;
	some_text nchar(2000) NULL&lt;br /&gt;
)&lt;br /&gt;
ON PSStuff(event_time);&lt;br /&gt;
ALTER TABLE dbo.big_one&lt;br /&gt;
   ADD CONSTRAINT PK_big_one PRIMARY KEY CLUSTERED (id, event_time);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is not nice; look at the primary key. It is not elegant, but that is not a problem for a demo. It is important to notice this line:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;ON PSStuff(event_time);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It means that the table will rely on the partition scheme &#039;&#039;&#039;PSStuff&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Populate table &#039;&#039;&#039;big_one&#039;&#039;&#039; with some data, so that every partition contains a few rows. This is not strictly necessary; we just want to see our data restored!&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 12:30&#039;, &#039;Good lunch with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 20:30&#039;, &#039;Good dinner with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200401 06:00&#039;, &#039;Epic prank to Dimitri&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200402 20:30&#039;, &#039;Dinner offered to Dimitri to apologize&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20211231 23:50&#039;, &#039;This year was boring&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20220915 16:00&#039;, &#039;Had a tea with boss&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230101 13:00&#039;, &#039;Lunch with family. Really???&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230701 00:11&#039;, &#039;Reading future&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We also create a table on the PRIMARY filegroup, just to have some data in it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE TABLE dbo.primary_table&lt;br /&gt;
	(&lt;br /&gt;
	id int PRIMARY KEY IDENTITY (1, 1),&lt;br /&gt;
	some_text nvarchar(50) NULL&lt;br /&gt;
	)&lt;br /&gt;
ON [PRIMARY];&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And we populate it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Some random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Another random text&#039;);&lt;br /&gt;
INSERT INTO primary_table (some_text) VALUES (&#039;Klaatu barada nikto&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=11</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=11"/>
		<updated>2023-02-27T14:57:25Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups with old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. You are wasting money.&lt;br /&gt;
&lt;br /&gt;
== Demo database creation ==&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one for each filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We need to put data in these files. So we need to prepare a partition function. See [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-function-transact-sql?view=sql-server-ver16 Official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION FUNCTION PFStuff (datetime)&lt;br /&gt;
AS RANGE RIGHT FOR VALUES (&#039;20200101&#039;, &#039;20210101&#039;, &#039;20220101&#039;, &#039;20230101&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we need to map this function to a partition scheme. Also see [https://learn.microsoft.com/en-us/sql/t-sql/statements/create-partition-scheme-transact-sql?view=sql-server-ver16 official documentation].&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
CREATE PARTITION SCHEME PSStuff  &lt;br /&gt;
AS PARTITION PFStuff TO (FGVeryOldStuff, FG_2020_Stuff, FG_2021_Stuff, FG_2022_Stuff, FG_2023_Stuff);  &lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can see the scheme and the function in SSMS:&lt;br /&gt;
[[image:Partition_scheme_and_function_sql_server_ssms.png|none]]&lt;br /&gt;
&lt;br /&gt;
Now we create a simple table:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
USE [HistoryDemo];&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE dbo.big_one&lt;br /&gt;
(&lt;br /&gt;
	id int NOT NULL IDENTITY (1, 1),&lt;br /&gt;
	event_time datetime NOT NULL,&lt;br /&gt;
	some_text nchar(2000) NULL&lt;br /&gt;
)&lt;br /&gt;
ON PSStuff(event_time);&lt;br /&gt;
ALTER TABLE dbo.big_one&lt;br /&gt;
   ADD CONSTRAINT PK_big_one PRIMARY KEY CLUSTERED (id, event_time);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is not nice; look at the primary key. It is not elegant, but that is not a problem for a demo. It is important to notice this line:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;ON PSStuff(event_time);&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
It means that the table will rely on the partition scheme &#039;&#039;&#039;PSStuff&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Populate table &#039;&#039;&#039;big_one&#039;&#039;&#039; with some data, so that every partition contains a few rows. This is not strictly necessary; we just want to see our data restored!&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 12:30&#039;, &#039;Good lunch with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20180501 20:30&#039;, &#039;Good dinner with Emma&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200401 06:00&#039;, &#039;Epic prank to Dimitri&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20200402 20:30&#039;, &#039;Dinner offered to Dimitri to apologize&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20211231 23:50&#039;, &#039;This year was boring&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20220915 16:00&#039;, &#039;Had a tea with boss&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230101 13:00&#039;, &#039;Lunch with family. Really???&#039;);&lt;br /&gt;
INSERT INTO big_one (event_time, some_text) VALUES (&#039;20230701 00:11&#039;, &#039;Reading future&#039;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=File:Partition_scheme_and_function_sql_server_ssms.png&amp;diff=10</id>
		<title>File:Partition scheme and function sql server ssms.png</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=File:Partition_scheme_and_function_sql_server_ssms.png&amp;diff=10"/>
		<updated>2023-02-27T14:52:22Z</updated>

		<summary type="html">&lt;p&gt;Luca: Category:SQL Server&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
[[Category:SQL Server]]&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Category:SQL_Server&amp;diff=9</id>
		<title>Category:SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Category:SQL_Server&amp;diff=9"/>
		<updated>2023-02-27T14:22:14Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;Articles about Microsoft SQL Server.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Articles about Microsoft SQL Server.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=8</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=8"/>
		<updated>2023-02-27T14:21:58Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups with old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. You are wasting money.&lt;br /&gt;
&lt;br /&gt;
== Demo database creation ==&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These filegroups are still empty! We need to add data files, at least one for each filegroup. Let&#039;s do it:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;OldStuff&#039;, FILENAME = N&#039;S:\Data\OldStuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2020_Stuff&#039;, FILENAME = N&#039;S:\Data\2020_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2021_Stuff&#039;, FILENAME = N&#039;S:\Data\2021_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2022_Stuff&#039;, FILENAME = N&#039;S:\Data\2022_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo]&lt;br /&gt;
ADD FILE (NAME = N&#039;2023_Stuff&#039;, FILENAME = N&#039;S:\Data\2023_Stuff.ndf&#039;, SIZE = 512KB) TO FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=7</id>
		<title>Backup and restore huge partitioned database - SQL Server</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Backup_and_restore_huge_partitioned_database_-_SQL_Server&amp;diff=7"/>
		<updated>2023-02-27T14:20:08Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;Category:SQL Server This example fits the case when you have a big database in SQL Server where: * one ore more table are partitioned * one or more filegroup are in readonly, for example filegroups with old data  Suggestion: when you have a big database with historical data, it is a good idea to design it similar to this example.  In this case you can backup and restore filegroups separately. If you are doing full backups of this kind of database you are wasting time...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:SQL Server]]&lt;br /&gt;
This example fits the case when you have a big database in SQL Server where:&lt;br /&gt;
* one or more tables are partitioned&lt;br /&gt;
* one or more filegroups are read-only, for example filegroups with old data&lt;br /&gt;
&lt;br /&gt;
Suggestion: when you have a big database with historical data, it is a good idea to design it along the lines of this example.&lt;br /&gt;
&lt;br /&gt;
In this case you can back up and restore filegroups separately. If you are doing full backups of this kind of database, you are wasting time, space and power. You are wasting money.&lt;br /&gt;
&lt;br /&gt;
== Demo database creation ==&lt;br /&gt;
We are going to create a simple database:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
CREATE DATABASE [HistoryDemo]&lt;br /&gt;
ON  PRIMARY &lt;br /&gt;
(NAME = N&#039;HistoryDemo&#039;, FILENAME = N&#039;S:\Data\HistoryDemo.mdf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
LOG ON &lt;br /&gt;
(NAME = N&#039;HistoryDemo_log&#039;, FILENAME = N&#039;L:\Log\HistoryDemo_log.ldf&#039; , SIZE = 8192KB , FILEGROWTH = 65536KB )&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we add the filegroups where the partitioned data will reside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;tsql&amp;quot;&amp;gt;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FGVeryOldStuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2020_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2021_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2022_Stuff;&lt;br /&gt;
ALTER DATABASE [HistoryDemo] ADD FILEGROUP FG_2023_Stuff;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Main_Page&amp;diff=6</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Main_Page&amp;diff=6"/>
		<updated>2023-02-01T16:47:12Z</updated>

		<summary type="html">&lt;p&gt;Luca: Replaced content with &amp;quot;Added first article about Control-M: * Control-M job stuck&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Added first article about Control-M:&lt;br /&gt;
* [[Control-M job stuck]]&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Category:Control-M&amp;diff=5</id>
		<title>Category:Control-M</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Category:Control-M&amp;diff=5"/>
		<updated>2023-02-01T16:46:40Z</updated>

		<summary type="html">&lt;p&gt;Luca: Created page with &amp;quot;Pages about Control-M.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Pages about Control-M.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Control-M_job_stuck&amp;diff=4</id>
		<title>Control-M job stuck</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Control-M_job_stuck&amp;diff=4"/>
		<updated>2023-02-01T16:46:24Z</updated>

		<summary type="html">&lt;p&gt;Luca: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Control-M]]&lt;br /&gt;
I wrote this article following my experience with Control-M. I&#039;m not really an expert, and this article refers to Control-M version 9.0.19.&lt;br /&gt;
== What do I mean by a stuck job? ==&lt;br /&gt;
Stuck jobs are Control-M jobs that remain in the schedule for days without an underlying process. There are several cases; they will be documented one by one as they arise.&lt;br /&gt;
== Job in hold in Wait Condition ==&lt;br /&gt;
This is the seemingly simplest case. A job on hold can be deleted, and the problem is solved.&lt;br /&gt;
== Job in running ==&lt;br /&gt;
If the job has remained in running state, a &#039;&#039;&#039;kill&#039;&#039;&#039; will probably not work. There is therefore a server-side script to end a job of this type.&lt;br /&gt;
&amp;lt;br&amp;gt;To do this, you need to log in to the active Control-M server and run the appropriate script.&lt;br /&gt;
&amp;lt;br&amp;gt;Select the job in question and identify its Order ID, the unique identifier used to act on the job from monitoring. This information can be found in the &#039;Run Information&#039; section. For example, in the following case:&lt;br /&gt;
&amp;lt;br&amp;gt;[[Image:Control-m-orderid.png|none]]&lt;br /&gt;
The Order ID is &#039;&#039;&#039;14nfw&#039;&#039;&#039;, and this notation works well for our purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;To run the script, open a command prompt on the server as &#039;&#039;&#039;administrator&#039;&#039;&#039;. In this specific case, you need to adjust the following command to your path:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;winbatch&amp;quot;&amp;gt;&amp;quot;C:\Program Files\BMC Software\Control-M Server\bmcperl\perl&amp;quot; &amp;quot;C:\Program Files\BMC Software\Control-M Server\ctm_server\scripts\force_end_job.pl&amp;quot; 14nfw&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will be asked interactively for confirmation, or an error will be returned.&lt;br /&gt;
&amp;lt;br&amp;gt;Attention: ending the job in this way is equivalent to a kill, so a Control-M alert will be generated and will need to be handled.&lt;br /&gt;
&lt;br /&gt;
== Job in running in hold ==&lt;br /&gt;
End the job as in the previous case. Additionally, the job must be freed.&lt;br /&gt;
&lt;br /&gt;
== Cyclic job running in hold ==&lt;br /&gt;
This is the most annoying case, because in many situations a simple free would let the next execution start. Here, before proceeding with the steps from the previous case, the job schedule must be modified by limiting the execution interval to a time range that does not include the period during which we are operating.&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=File:Control-m-orderid.png&amp;diff=3</id>
		<title>File:Control-m-orderid.png</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=File:Control-m-orderid.png&amp;diff=3"/>
		<updated>2023-02-01T16:32:11Z</updated>

		<summary type="html">&lt;p&gt;Luca: Category:Control-M&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
[[Category:Control-M]]&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
	<entry>
		<id>http://tech.muninn.land/index.php?title=Control-M_job_stuck&amp;diff=2</id>
		<title>Control-M job stuck</title>
		<link rel="alternate" type="text/html" href="http://tech.muninn.land/index.php?title=Control-M_job_stuck&amp;diff=2"/>
		<updated>2023-02-01T16:22:24Z</updated>

		<summary type="html">&lt;p&gt;Luca: gestiscoIdati8&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Control-M]]&lt;br /&gt;
I wrote this article following my experience with Control-M. I&#039;m not really an expert, and this article is relative to Control-M; version lucazeo&lt;/div&gt;</summary>
		<author><name>Luca</name></author>
	</entry>
</feed>