Linux

Tuning the JVM – G1GC Garbage Collector Flags for Minecraft

After many weeks of studying the JVM, its flags, and testing various combinations, I have come up with this set of flags as the most ideal combination to use, backed with SCIENCE.

I strongly suggest these flags to start your server. (Which BTW: you really should be using Paper instead of Spigot: https://paper.emc.gs – Paper is a drop-in replacement for Spigot and all plugins should still work. But with Paper, please do not ask for support in the #spigot IRC channel – ask in #paper.)

Use these flags exactly (only modify the -Xmx and -Xms) for a max memory of 10GB and LOWER. These flags work and scale accordingly to any amount of memory, even 500MB.
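Putting it all together, the startup command looks something like this – a sketch assembled from the flags explained below (the TargetSurvivorRatio=90 and G1MixedGCLiveThresholdPercent=35 values are illustrative assumptions, and paperclip.jar is just an example jar name):

Code:
java -Xms10G -Xmx10G -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=50 -XX:G1MaxNewSizePercent=80 -XX:TargetSurvivorRatio=90 -XX:InitiatingHeapOccupancyPercent=10 -XX:G1MixedGCLiveThresholdPercent=35 -XX:+AlwaysPreTouch -XX:+ParallelRefProcEnabled -jar paperclip.jar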

These flags help keep your server running CONSISTENTLY, without any large spikes. CPU usage may be slightly higher, but your server will be more reliable overall, with more stable TPS.

If you are running with 10GB or less memory for MC, you should not adjust these parameters. (I use 10GB myself)

If you for sure need more than 10GB (hopefully you are running a 150+ player server), use these changes:

    • -XX:G1MaxNewSizePercent=60
    • -XX:G1NewSizePercent=35
    • -XX:InitiatingHeapOccupancyPercent=15

Explanation of flags:

  1. -Xms matching -Xmx – Why: You should never run your server in a state where -Xmx can run the system completely out of memory. Your server should always be expected to use the entire -Xmx! You should then ensure the OS has extra memory on top of that -Xmx for non-MC/OS-level things. Therefore, you should never run MC with -Xmx settings you can’t support if Java uses it all. Now, that means if -Xms is lower than -Xmx – YOU HAVE UNUSED MEMORY! Unused memory is wasted memory. G1 (and probably even CMS up to a certain threshold, but I’m only stating what I’m sure about) operates better the more memory it’s given. G1 adaptively chooses how much memory to give to each region to optimize pause time. If you have more memory than it needs to reach an optimal pause time, G1 will simply push that extra into the old generation, and it will not hurt you (this may not be the case for CMS, but it is the case for G1).

    The fundamental idea of improving GC behavior is to ensure short-lived objects die young and never get promoted. The more memory G1 has, the better assurance you have that objects are not getting prematurely promoted to the old generation.

    G1 operates differently than previous collectors and is able to handle larger heaps more efficiently. If it does not need the memory given to it, it will not use it. The entire engine operates differently and does not suffer from heaps that are too large.

  2. UnlockExperimentalVMOptions – needed for some of the other flags specified below
  3. TargetSurvivorRatio: I’m sure you’re all used to seeing this one suggested. Good news! It’s actually a good flag to use :D This setting controls how much of the Survivor space is ABLE to be used before promotion. If Survivor gets too full, stuff starts promoting to Old Gen. The reasoning behind the low default is to be able to handle memory allocation spikes. However, MC’s allocation rate is for the most part pretty steady (steadily high…), and when it’s steady, it’s safe to raise this value to avoid premature promotions.
  4. G1NewSizePercent: These are the important ones. In CMS and other collectors, tweaking the New Generation results in a FIXED-SIZE New Gen, usually done through an explicit size setting with -Xmn. With G1, things are better! You can now specify percentages for an overall desired range for the new generation.

    With these settings, we tell G1 to not use its default 5% for new gen, and instead give it at least 50%!

    Minecraft has an extremely high memory allocation rate, ranging to at least 800 Megabytes per second on a 30-player server! And this is mostly short-lived objects (BlockPosition).

    Now, this means MC REALLY needs more focus on the New Generation to even support this allocation rate. If your new gen is too small, you will be running new gen collections 1-2+ times per second!!!

    This is bad! With that many pauses, TPS is at risk of suffering, and Spigot might be unable to keep TPS caught up given the cost of the GCs.

    Then combine that with the fact that objects will now promote faster, resulting in your Old Gen growing faster… This is bad and needs to be avoided.

    Given more New Gen, we are able to slow down the intervals of Young Gen collections, giving more time for short-lived objects to die young and resulting in overall more efficient GC behavior.

    If you run with larger heaps (15GB+), you may want to lower the minimum to, say, 30% – but don’t go lower than that. This will let G1 have more power in its own assumptions.

  5. InitiatingHeapOccupancyPercent/G1MixedGCLiveThresholdPercent: These control when to include Mixed GCs in the Young GC collection, keeping Old Gen tidy without doing a normal Old Gen GC collection. On larger heaps (10GB+), you can raise InitiatingHeapOccupancyPercent to around 20 to reduce CPU usage, but I wouldn’t go higher than that – and you also need to REDUCE the maximum New percentage to around 60. If you use an 80% New Gen max, you must keep this at 10. It doesn’t hurt to leave it at 10, but “efficiency wise” you can improve it to 20 if you reduce your new gen size – and if you start seeing Old Gen GCs, lower it back (see the GC logging snippet after this list).
  6. AlwaysPreTouch: AlwaysPreTouch gets the memory set up and reserved at process start, ensuring it is contiguous, which improves its efficiency further.
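To tell whether you are actually seeing Old Gen collections before adjusting the above, turn on GC logging. A minimal example using standard Java 8 HotSpot flags (gc.log is an example path):

Code:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log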

Also, for Large Pages – it’s even more important to use -Xms = -Xmx! Large Pages needs to have all of the memory specified for it, or you could end up without the gains. That memory CANNOT be used by the OS anyway, so let something use it!
Additionally, use these flags (Metaspace is Java 8 only – don’t use this one on Java 7):

Code:
 -XX:+UseLargePagesInMetaspace
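That goes alongside the base large pages flag (note that large pages also require hugepages to be configured at the OS level, which is beyond the scope of this post):

Code:
-XX:+UseLargePages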

Thanks to https://product.hubspot.com/blog/g1gc-fundamentals-lessons-from-taming-garbage-collection for helping reinforce my understanding of the flags and introduce improvements!

————-
Update 5/24/2018: Added -XX:+ParallelRefProcEnabled


Apache Macros – Simplify your config

Using Apache Macros

Many people host small-time hobby websites, or websites for family members, friends, and clients, on a single server. This leads to quite a lot of repetition of the same Apache site definitions over and over again. Thankfully, the Apache mod_macro module solves many of these issues. It lets you create config templates that can then be re-used across multiple sections of config, allowing you to pass in variables to fill in on use.

Let’s get it installed!

sudo apt-get install libapache2-mod-macro
sudo a2enmod macro

Now, you can start using Macros in your site definitions, to replace common configurations.

» Official Documentation for Apache Macros

Examples of Apache Macros
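Here is roughly what such a set of macros can look like – an illustrative sketch, where the exact bodies of Log, GrantAccess, and ForceDomain are just one reasonable way to implement the behavior described below:

<Macro Domain $domain>
    ServerName $domain
    ServerAlias www.$domain
</Macro>

<Macro Log $domain>
    CustomLog /var/log/apache2/sites/$domain_access.log combined
    ErrorLog /var/log/apache2/sites/$domain_error.log
</Macro>

<Macro GrantAccess $dir>
    <Directory $dir>
        Require all granted
    </Directory>
</Macro>

<Macro ForceDomain $domain>
    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule ^/?(.*)$ http://www.$domain/$1 [R=301,L]
</Macro>

<Macro Site $domain>
    Use Domain $domain
    Use Log $domain
    Use GrantAccess /var/www/$domain
    Use ForceDomain $domain
</Macro>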

In the sketch above you can see a pretty simple Domain macro. It sets the ServerName, and sets a www. alias. Now look at the Site macro: in this example you will see it calling the other macros – Log, GrantAccess, and ForceDomain.

To use this, one could simply add this line inside the <VirtualHost>:

Use Site mysite.com

And then accessing mysite.com would redirect to www.mysite.com, and log to /var/log/apache2/sites/mysite.com_access.log. Needless to say, that likely cuts out 99% of the configuration you’re doing for a simple WordPress site you host for a relative.

And since Apache Macros are parsed at config load, there’s no impact on your server’s performance for using them!

For enterprise-grade setups, you’re likely already using Puppet to get the same benefits and only running 1 product per server anyway – but for those of us kicking it at hobby level, Apache macros help quite a bit! Enjoy 🙂


Filtering Spam before Forwarding Email with Postfix/SpamAssassin

One feature many cPanel/shared webhosts have is an option to forward your email to a different address. Very useful if you want to have multiple email addresses but check them all in one place (Gmail), like I do. But if you’re like me, you’ve likely migrated onto your own dedicated server that you manage yourself – and it’s likely you’re making mistakes with email forwarding and filtering spam!

The problem is that when you receive spam, you are also forwarding that spam to your email provider, which makes them upset with you and tarnishes your server’s IP reputation. I did this for years! I always thought that Gmail would be smart enough to see the path in the headers and realize it was forwarded – but then, thinking about it – why would Gmail trust that those servers actually sent the email, and that I didn’t just spoof those Received: lines to blame someone else?

When I recently migrated my host, I put a lot more effort into filtering the spam before it even hits Gmail, and learned quite a few things.

Filtering Spam with Postfix

First off: initial connection client checks. These stop a majority of the spammers, and it’s so simple!
Add this line to your /etc/postfix/main.cf:

smtpd_client_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org,
    reject_rbl_client bl.spamcop.net,
    reject_rbl_client cbl.abuseat.org,
    reject_unknown_client,
    permit

This enforces a number of restrictions on the connecting client, most notably the Spamhaus Zen check, which knocks out a huge share of spammer connections!
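After editing main.cf, reload Postfix and double-check that the setting took:

sudo postfix reload
postconf smtpd_client_restrictions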

Filtering Spam with SpamAssassin

If you haven’t already installed SpamAssassin, do so now. There is more to that than I want to put into this post, so follow this site’s guide: http://plecko.com.hr/?p=389

His instructions look spot on to me. One key thing I did not do on my setup, and just realized I needed to: enable CRON=1! I had been running with stale SA rules… but his guide covers it.

Next up is this page: http://wiki.apache.org/spamassassin/ImproveAccuracy

One thing it mentions is optional Perl modules that SpamAssassin can take advantage of. For me, I had to run these commands to get them all installed:

sudo apt-get install libgeoip-dev
sudo cpan Geo::IP Mail::DKIM Encode::Detect DBI IO::Socket::IP Digest::SHA1 Net::Patricia

I don’t know what some of them are for, but SpamAssassin is obviously trying to use them, so give them to it!

Passing SPF Checks

Then there is SRS rewriting. One problem with forwarding email is that it makes every one of your forwarded emails fail SPF checks, because it looks like your server is sending mail for InsertBigNameDomain.com, which does not authorize you to send mail on their behalf.

SPF is considered a “broken” design when forwarding is involved, and it is preferred that sysadmins use DKIM instead as the way to verify the authenticity of an email – so ideally you need to rewrite the envelope sender (return path) to be your own server’s domain instead.

I used this guide: https://www.mind-it.info/forward-postfix-spf-srs/
Which summarizes down to:

sudo apt-get install cmake sysv-rc-conf
cd /usr/local/src/
wget https://github.com/roehling/postsrsd/archive/master.zip
unzip master.zip
cd postsrsd-master/
make
sudo make install
sudo postconf -e "sender_canonical_maps = tcp:127.0.0.1:10001"
sudo postconf -e "sender_canonical_classes = envelope_sender"
sudo postconf -e "recipient_canonical_maps = tcp:127.0.0.1:10002"
sudo postconf -e "recipient_canonical_classes = envelope_recipient"
sudo sysv-rc-conf postsrsd on
sudo service postsrsd restart
sudo service postfix reload

Now when you inspect a received email’s headers, you will see that the Return-Path is now something like <SRS0+9CLa=52=paypal.com=service@starlis.com>
And your SPF will now pass. (You do have SPF records set up for your domain, right?)
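If not, an SPF record is just a TXT record on your domain. An illustrative example (adjust the mechanisms for whatever actually sends your mail):

example.com.  IN TXT  "v=spf1 mx a -all"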

Dropping the Spam

Now the final part… getting rid of that spam before it goes to Gmail!

In /etc/postfix/header_checks (you will likely need to create this file), add this simple line:

/^X-Spam-Level: \*{5,}.*/ DISCARD spam

then in /etc/postfix/main.cf:

header_checks = regexp:/etc/postfix/header_checks

This will drop the spam, but you may want to only drop higher-scoring spam. In that case, change the 5 to a 7, and then add this to your /etc/spamassassin/local.cf (it might already be there, commented out):

rewrite_header Subject *****SPAM*****

This makes it so that any spam that doesn’t get dropped has *****SPAM***** prepended to the subject, which Gmail suggests you do if you do end up forwarding spam to them.

With this approach, low-score (5-6) spam will still be forwarded, but Gmail is happy that you told them it’s spam ahead of time, and 7+ spam won’t even be forwarded.
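For reference, SpamAssassin adds one * to the X-Spam-Level header per point of score, which is what the header_checks regex matches on. A message scoring 6.2 would carry headers along these lines (illustrative values):

X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.2 required=5.0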

Taking these steps will help you maintain a good mail-sending reputation (hopefully I don’t have to repair mine too much…). Good luck 🙂

Final note for Gmail users

One final step if you are using Gmail: ensure EVERY email address that you forward to Gmail is added as a “Send mail as” account. Gmail uses this list to know an address is forwarded, and will be more lenient in its spam rules. I don’t know if other ESPs do this, but Gmail has requested you do this if you forward mail to them.


Ubuntu Live Streaming to Twitch.tv!

Good news for Linux users: the popular application for live streaming on Windows, “Open Broadcaster Software” (commonly known as OBS), has been rewritten and now supports Linux. Ubuntu live streaming is now a thing with OBS.

First off, you will need a more up-to-date ffmpeg, found in the very common ppa:jon-severinsson/ffmpeg PPA.

sudo apt-add-repository ppa:jon-severinsson/ffmpeg
sudo apt-get update
sudo apt-get install ffmpeg

Then you will need the PPA provided by the OBS developer, which gets almost daily updates:

sudo apt-add-repository ppa:btbn/obs-studio
sudo apt-get update
sudo apt-get install obs-studio

You will now have OBS installed. The newest builds should have an application icon added for you, so look for it under Audio/Video.

If you’re new to using this app, here’s a quick rundown of the terminology:

  • Scene: A configuration of multiple video/image sources to be output. You can have multiple scenes, such as one for the left monitor, one for the right monitor, one for ONLY a game, one for only a webcam, etc. You can switch between these while streaming to change what you are broadcasting.
  • Sources: Actual video and image sources. You add sources to a Scene, such as your entire desktop, a single app, your webcam, or a static image.

Play around with sources – each one should be obvious as to what it does – and build yourself a setup. When you add a source, you can resize it and move it around the screen.

One issue I am having is that it does not work with my webcam. The webcam works fine in other apps, so this has to be an issue with OBS, and another user reports the same problem.

To put my webcam into my stream, I opened up the Cheese application, then added a new source that targets only that window and crops off the window chrome and other non-camera-feed parts. I did have to invert Red/Blue.

Since it’s targeting a window and not the full desktop, you can safely minimize it and it works fine.

Now, to stream to Twitch, simply go to Settings, open the Streaming section, put in your stream key, and select whichever server is closest to you.

Oh, and one final detail (hopefully it hasn’t gotten you yet): the app likes to crash a lot when changing settings, so be sure to close the app after making a few changes so it saves them in case it crashes. I haven’t had any mid-stream crash issues, though.

Good luck!


NVIDIA SLI + Triple Display on Ubuntu 14.04!

For many months I’ve had a 3rd monitor on my desk but could not use it, as I could not get it to work. Any time I enabled the monitor using Xinerama, the desktop would freeze on login.

I’ve now learned how the whole Xorg and NVIDIA settings system works.

The trick is that many of the display settings for the NVIDIA driver are no longer kept in xorg.conf – they now live in a file in your home directory called .nvidia-settings-rc.

If you are having problems, try wiping this file out, and also wipe out your /etc/X11/xorg.conf.

Then, if you have SLI cards, issue sudo nvidia-xconfig --sli=on

If you have a single card but MultiGPU, issue sudo nvidia-xconfig --multigpu=on

If you have an SLI MultiGPU card (4+ GPUs), then you only need SLI, as xorg.conf told me that MultiGPU was not necessary at that point.

I’m using the latest Ubuntu 14.04 nvidia-331-updates-uvm driver, which is running more stably than the nvidia-343-uvm from the xorg-edgers PPA, so I do not recommend updating to 343.

Once you reboot, run nvidia-settings and ensure Base Mosaic is enabled, enable all of your monitors in the order you want, and click the Save to X Configuration File button.

But here is one detail I did not know, and which caused me so many issues in the past: all of the OTHER NVIDIA settings have to be saved separately, to that .nvidia-settings-rc file.

This file should be saved automatically when the settings app closes, but to be sure, go to the nvidia-settings Configuration panel, hit the Save button, and simply select your home folder that it opens up to.

Now, one important note: when you hit the Save to X button, it’s going to wipe out your SLI/MultiGPU option! So you need to go back and re-run sudo nvidia-xconfig --sli=on or --multigpu=on to reset that setting.

Now you should be good to restart and have a working setup! I was able to get over 400 FPS in Minecraft (which, given the simplicity of its graphics, isn’t saying much – it is a Java game and not the best for performance).

I now have 2 more monitors on the way for Wednesday, so I can be close to that “geek dream” of a 6+ monitor setup (I’ll be at 5 for now) – here’s hoping I don’t have any issues with them.

Good luck 🙂


Apache 2.4, PHP 5.5 with php-fpm and mod_rewrite

This guide was updated on April 28, 2016 with some missed details!
– Added timeout and flush params to the external server command, and added the missing -socket
– Added the missing Apache modules actions and alias

So recently I’ve had trouble with the host I had been using for years (bad support, broken billing, DDoS attacks on their other customers constantly affecting me), so I decided to move my web infrastructure to the same datacenter I run all of our game servers out of: HiVelocity. I decided to fully build this server out fresh instead of trying to clone the old one, and do things better this time around.

First, I had recently done research into the performance impact of using Apache MPM-Prefork with mod_php: every Apache process has PHP loaded, so even static requests carry the weight of PHP – eating lots of resources!

I had heard about FastCGI, as I had used it on shared hosting back in the day with suEXEC, but now I found something better: PHP-FPM – a FastCGI-based process manager designed for PHP itself.

Win! So I set up Ubuntu 14.04.1 LTS and Apache 2.4, and went with the newer MPM Event module, which appears to do even better with Keep-Alive requests.

So let’s get Apache 2.4 with MPM-Event, PHP5-FPM, and some PHP5 modules going:

sudo apt-get install apache2-mpm-event libapache2-mod-fastcgi php5-fpm php5-cli php5-apcu php5-sqlite php5-gd php5-json php5-curl php5-mcrypt php5-mysqlnd php5-redis

By default it should be configured to use sockets, but if not, check /etc/php5/fpm/pool.d/www.conf for:

listen = /var/run/php5-fpm.sock

And change it if it’s using a TCP port instead. Unix sockets are faster, as they avoid TCP protocol overhead.

Next up, create /etc/apache2/conf-available/php5-fpm.conf and paste this in:

<IfModule mod_fastcgi.c>
    AddHandler php5-fcgi .php
    Action php5-fcgi /php5-fcgi
    Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
    FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization -idle-timeout 900 -flush
    <Directory /usr/lib/cgi-bin>
        Require all granted
        Require env REDIRECT_STATUS
    </Directory>
</IfModule>

Now to enable these things! You need the Actions, Alias, FastCGI, and Rewrite modules for Apache:

sudo a2enconf php5-fpm
sudo a2enmod actions alias fastcgi rewrite

Now, here is the part that caused me so much trouble for an entire week! If you want mod_rewrite to work, you need to edit /etc/apache2/apache2.conf and find the <Directory /var/www/> stanza.

By default this has AllowOverride None, and you need to change that to AllowOverride FileInfo

Without this, rewrite rules will not work.
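To test that rewrites are honored, the stock WordPress .htaccess rules make a good check – drop them in a site’s document root:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress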

Following this, you should pretty much be set up with working PHP5-FPM with mod_rewrite on Apache MPM Event, and Apache using a lot fewer resources in general.
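To confirm PHP is actually being served through FPM rather than mod_php, drop in a phpinfo file and look for “FPM/FastCGI” as the Server API (the path here is just an example – use any web-accessible directory, and remove the file afterwards):

echo "<?php phpinfo();" | sudo tee /var/www/html/info.php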

I’m sorry if anything in this is off – I went through so many things trying to get everything working, but this is, to the best of my knowledge, what the final result was.

Please submit any corrections!


Ubuntu System Freeze on X58 Motherboard – Solved!

I wrote in another article about how I was having system instability issues, where the CPU would stall and everything stopped – no SSH, no TTY, no REISUB – dead!

BIOS updates did not help, changing hardware settings did not help, and I was about ready to sell this PC…

But I found the problem finally!

On this X58 motherboard, at least this Classified 3, Intel Turbo is on and the CxE function is off by default.

Turning Turbo off and setting CxE to C6 has solved my issue. It appears Turbo was trying to overclock the CPU, and the resulting voltage shortage was freezing things up. Why a default setting can result in such a level of instability is beyond me… but these 2 settings have 100% been the solution to my issue. No more hard shutdowns!

I hope this helps someone else!


The quest for triple head on Ubuntu with SLI GPU

I recently purchased a system from my friend to upgrade my old system, as I really wanted 3 monitors…

So, I might have bought a “Gibson” (no, not the guitar – if you’re on my blog, you should get the reference!), but sadly I had tons of trouble getting the 3rd monitor to work under Ubuntu 13.10!

Enabling Xinerama in older NVIDIA drivers caused the system to hard freeze immediately on login.
Installing nvidia-331 from a third-party PPA gave an option for “Base Mosaic”, but had the same issue…

On top of that, I had been having extremely annoying problems with the system freezing every so often, forcing a hard restart… I ruled out a hardware issue (it works fine in Windows), but over 2 different 13.10 installs (one a continuous upgrade from 10.04, the other fresh to resolve many other issues I had), the problem was very consistent.

So, it was obviously an Ubuntu-specific problem. Well, one idea was to try installing 12.04, so I did that last night. I went to install the NVIDIA driver (as I couldn’t even properly boot into the system with these SLI 590 GPUs) and noticed a new driver on the list… nvidia-331-uvm.

Apparently this is some newer tech from NVIDIA for improving performance, but whether it was UVM or 12.04, Base Mosaic now works.

So, if you are having problems with multi-GPU (I have 4 GPUs with these SLI cards), try 12.04 (or 14.04 when it is out) with nvidia-331-uvm or higher!

Now… here’s hoping 12.04 also fixes my lockup issue!


Ubuntu – Could not calculate upgrade 13.10

Just wanted to share some information I found. Many may face the daunting error “Could not calculate upgrade”, and will find posts telling them to type:

grep Broken /var/log/dist-upgrade/apt.log

Well, I had a ton of broken packages, but I noticed all of them mentioned ~ricotz0:

Broken brasero:amd64 Depends on libgtk-3-0 [ amd64 ] < 3.8.1+git20130422.0ce7854a-0ubuntu1~12.10~ricotz0 -> 3.8.6-0ubuntu2 > ( libs ) (>= 3.0.0)
Broken brasero:amd64 Depends on libnautilus-extension1a [ amd64 ] < 1:3.6.3-0ubuntu16 -> 1:3.8.2-0ubuntu2 > ( libs ) (>= 1:2.91)
Broken brasero:amd64 Depends on gnome-icon-theme [ amd64 ] < 3.7.3+git20121224.2af6b37d-0ubuntu1~12.10~ricotz0 -> 3.8.3-0ubuntu3 > ( gnome )
Broken libgtk-3-0:amd64 Depends on libgtk-3-common [ amd64 ] < 3.8.1+git20130422.0ce7854a-0ubuntu1~12.10~ricotz0 -> 3.8.6-0ubuntu2 > ( misc ) (= 3.8.1+git20130422.0ce7854a-0ubuntu1~12.10~ricotz0)
Broken libgtk-3-0:amd64 Depends on libwayland0 [ amd64 ] < 1.0.5-0ubuntu1 > ( libs ) (>= 1.0.2)

I recognized that as a PPA I once had – the GNOME testing one… but I don’t have it anymore! So I had no PPA to purge.

However, simply adding it back and then ppa-purging it removed the PPA and downgraded all of the packages.
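In my case that meant something like the following (the PPA name here is a guess based on the ~ricotz0 suffix – substitute whichever PPA your broken packages point at):

sudo add-apt-repository ppa:ricotz/testing
sudo apt-get update
sudo apt-get install ppa-purge
sudo ppa-purge ppa:ricotz/testing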

This should resolve the issue for most people experiencing this problem. (I happened to still have problems, but eventually something got it to work.)

Hope this helps.


2 Way local folder synchronization with lsyncd

So I have a use case at work where 2-way folder synchronization is useful to me. I tried using rsync, but it’s not designed for 2-way sync.
Then some lovely people in #rsync on Freenode told me of some other tools, namely unison and lsyncd.

Unison does a good job of syncing between 2 dirs, and would solve the problem just fine, but it requires manually running the tool to do the sync process.

Ideally, I wanted something that would sync immediately on file change, which is exactly what lsyncd does.

However, it is designed for 1-way sync – but due to how it performs the sync, it doesn’t have the deletion problems my first attempts with rsync did.

So to get 2-way sync with lsyncd, you simply run the tool twice, once for each direction.

I’ve written a quick bash script that auto-starts the sync processes for you, designed to be run as a cronjob to ensure the daemons are running.

Create the file

~/bin/syncdirs

and paste the following in it

#!/usr/bin/env bash
#########################
## syncdirs
##
## syncs 2 directories with 2-directional sync using lsyncd
## apt-get install lsyncd
##
## written by aikar@aikar.co
## http://aikar.co/2011/03/07/2-local-folder-synchronization-lsyncd
##
#########################

sync="lsyncd --delay 0"
if [ $# -eq 2 ]; then
    if [ -d "$1" ] && [ -d "$2" ]; then
        d1="$(readlink -f "$1")"
        d2="$(readlink -f "$2")"

        sync1="$sync $d1 $d2"
        sync2="$sync $d2 $d1"

        found=0
        if ! ps ax | grep -v grep | grep -q "$sync1"; then
            $sync1
            found=1
        fi
        if ! ps ax | grep -v grep | grep -q "$sync2"; then
            $sync2
            found=1
        fi
        if [ $found -eq 1 ]; then
            echo "syncing $d1 and $d2"
        else
            echo "was already syncing"
        fi
    else
        echo "syncdirs: ERROR!"
        if [ ! -d "$1" ]; then
            echo "$1 is not a directory"
        fi
        if [ ! -d "$2" ]; then
            echo "$2 is not a directory"
        fi
    fi
else
    echo "syncdirs usage:"
    echo "syncdirs directory1 directory2"
fi

and then

chmod +x ~/bin/syncdirs

Then, if you have ~/bin in your $PATH, type:

syncdirs /path/to/sync /path/to/sync/with

and you should be told the dirs are now syncing.

To add ~/bin to your PATH if it’s not already there, edit ~/.bashrc and add:

export PATH="$HOME/bin:$PATH"

Now, to ensure they are always syncing, type:

crontab -e

and add to the bottom:

* * * * * ~/bin/syncdirs /path/to/sync /path/to/sync/with >/dev/null

Now cron will check every minute, auto-starting the sync daemons on system start and relaunching them if they die for any reason.
Disclaimer: I take no responsibility for what lsyncd or my starter script does to your files. (I didn’t even make lsyncd!)
Make sure you have backups of your files before running these tools on them!!!


