[{"categories":["Software"],"collections":null,"content":"Nextcloud is a powerful self-hosted cloud platform that includes built-in support for contacts and calendars via the CardDAV and CalDAV protocols. If you\u0026rsquo;re looking to keep your contact groups synced across Android and iOS devices using Nextcloud, this guide will show you how to do it with the help of DAVx⁵—a popular open-source CardDAV/CalDAV sync app for Android. ","date":"30-04-2025","objectID":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/:0:0","tags":["nextcloud","android","ios"],"title":"Sync Nextcloud Contact Groups with Android and iOS","uri":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/#"},{"categories":["Software"],"collections":null,"content":"Why Sync Contact Groups? Syncing contact groups ensures better organization and smoother communication across devices. If you maintain separate personal, business, or family groups in Nextcloud, syncing them helps keep things consistent across your mobile devices. ","date":"30-04-2025","objectID":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/:1:0","tags":["nextcloud","android","ios"],"title":"Sync Nextcloud Contact Groups with Android and iOS","uri":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/#why-sync-contact-groups"},{"categories":["Software"],"collections":null,"content":"Prerequisites A Nextcloud account with the Contacts app enabled. The DAVx⁵ app installed on your Android device. An iOS device with native CardDAV support. Internet access on your devices. 
","date":"30-04-2025","objectID":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/:2:0","tags":["nextcloud","android","ios"],"title":"Sync Nextcloud Contact Groups with Android and iOS","uri":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/#prerequisites"},{"categories":["Software"],"collections":null,"content":"Syncing Contact Groups to Android Using DAVx⁵ Install DAVx⁵ Download DAVx⁵ from Google Play or F-Droid. Open DAVx⁵ and Set Up Account Tap the + icon to add an account. Choose Login with URL and user name. Enter the base URL of your Nextcloud instance (e.g., https://cloud.example.com/remote.php/dav). Enter your Nextcloud username and app password. Tap Login. Grant Required Permissions Allow DAVx⁵ access to your contacts and calendars when prompted. Ensure Correct Contact Group Method Before syncing, go to DAVx⁵ → Settings → Contact group method. Set this to \u0026ldquo;Groups are separate vCards\u0026rdquo;. ✅ This setting is necessary to correctly sync groups created on iOS (which uses separate vCards for group membership). ⚠️ Caveat: When using this method, contact groups will not be visible in the Nextcloud web interface, but they will appear correctly on both iOS and Android. Select Address Books to Sync DAVx⁵ will list your available address books from Nextcloud. Enable the ones you want to sync. Verify Groups in Android Contacts App Open your contacts app. You should now see your contact groups (like Family, Friends, etc.). Changes to group membership will sync across devices through Nextcloud. 
","date":"30-04-2025","objectID":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/:3:0","tags":["nextcloud","android","ios"],"title":"Sync Nextcloud Contact Groups with Android and iOS","uri":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/#syncing-contact-groups-to-android-using-davx"},{"categories":["Software"],"collections":null,"content":"Syncing Contact Groups to iOS iOS includes native support for CardDAV, so you don’t need a separate app. Open Settings → Contacts → Accounts → Add Account Select \u0026ldquo;Other\u0026rdquo; → Add CardDAV Account Enter Your Nextcloud Credentials Server: https://cloud.example.com/remote.php/dav Username: Your Nextcloud username Password: Your app password Description: Anything you like (e.g., “Nextcloud Contacts”) Save the Account Enable Contacts Ensure the toggle for “Contacts” is enabled. View Contact Groups Open the iOS Contacts app. Tap “Groups” (top-left corner) to view your synced groups. You can choose which ones to display. ","date":"30-04-2025","objectID":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/:4:0","tags":["nextcloud","android","ios"],"title":"Sync Nextcloud Contact Groups with Android and iOS","uri":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/#syncing-contact-groups-to-ios"},{"categories":["Software"],"collections":null,"content":"Troubleshooting Tips Ensure your contacts are properly categorized in Nextcloud using the Contacts app. On Android, make sure DAVx⁵ sync is enabled in the system settings (Accounts → DAVx⁵ → Sync now). If changes don’t show up immediately, force a manual sync. Use app passwords in Nextcloud for added security and easier management. 
","date":"30-04-2025","objectID":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/:5:0","tags":["nextcloud","android","ios"],"title":"Sync Nextcloud Contact Groups with Android and iOS","uri":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/#troubleshooting-tips"},{"categories":["Software"],"collections":null,"content":"Final Thoughts With DAVx⁵ and built-in CardDAV support on iOS, syncing contact groups from Nextcloud is both secure and convenient. Whether you’re managing contacts for personal use or coordinating a team, having access to the same structured contact data across devices saves time and avoids confusion. ","date":"30-04-2025","objectID":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/:6:0","tags":["nextcloud","android","ios"],"title":"Sync Nextcloud Contact Groups with Android and iOS","uri":"/posts/software/sync-nextcloud-contact-groups-with-android-and-ios/#final-thoughts"},{"categories":null,"collections":null,"content":"Transferring files between iOS and Android devices has historically been complicated due to ecosystem limitations. LocalSend offers a straightforward, local network-based solution for seamless file sharing across platforms without relying on cloud services or internet connectivity. ","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:0:0","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#"},{"categories":null,"collections":null,"content":"What is LocalSend? LocalSend is an open-source, cross-platform application that enables direct file transfers between devices on the same local network (Wi-Fi). It supports iOS, Android, Windows, macOS, and Linux. All transfers are encrypted and occur without sending data to external servers, preserving privacy and speed. 
","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:1:0","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#what-is-localsend"},{"categories":null,"collections":null,"content":"How to Use LocalSend for iOS and Android ","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:2:0","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#how-to-use-localsend-for-ios-and-android"},{"categories":null,"collections":null,"content":"Step 1: Install LocalSend Download LocalSend from the App Store (iOS) and Google Play Store (Android). Install and grant necessary permissions, such as access to files and the local network. ","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:2:1","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#step-1-install-localsend"},{"categories":null,"collections":null,"content":"Step 2: Connect to the Same Wi-Fi Network Ensure both devices (iOS and Android) are connected to the same Wi-Fi network. LocalSend relies on local IP addresses, so internet access is not mandatory — just network connectivity. ","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:2:2","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#step-2-connect-to-the-same-wi-fi-network"},{"categories":null,"collections":null,"content":"Step 3: Open LocalSend on Both Devices Launch LocalSend on both devices. The app will automatically detect nearby devices running LocalSend. 
","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:2:3","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#step-3-open-localsend-on-both-devices"},{"categories":null,"collections":null,"content":"Step 4: Send Files On the sending device: Tap Send. Choose the files you wish to transfer. Select the target device from the detected list. On the receiving device: A prompt will appear asking to accept the incoming file. Accept to complete the transfer. ","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:2:4","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#step-4-send-files"},{"categories":null,"collections":null,"content":"Key Features Cross-Platform: Works across iOS, Android, and desktop systems. Local Transfer: No external internet connection or server involved. Encryption: End-to-end encryption ensures file security. No Account Needed: No sign-up or login is required. ","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:3:0","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#key-features"},{"categories":null,"collections":null,"content":"Conclusion LocalSend simplifies cross-platform file sharing, offering an efficient and secure method for transferring files between iOS and Android devices over a local network. It is a practical solution for users seeking independence from proprietary cloud services. 
","date":"29-04-2025","objectID":"/posts/software/send-files-between-ios-and-android-with-localsend/:4:0","tags":["android","ios"],"title":"Send Files Between iOS and Android With LocalSend","uri":"/posts/software/send-files-between-ios-and-android-with-localsend/#conclusion"},{"categories":null,"collections":null,"content":"Scrcpy is a free and open-source tool that allows you to mirror your Android device’s screen to your computer. It provides high-performance, low-latency screen mirroring and works over USB and Wi-Fi connections. This guide explains how to set up and use Scrcpy efficiently. ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:0:0","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#"},{"categories":null,"collections":null,"content":"Prerequisites Android device with USB debugging enabled Computer (Windows, macOS, or Linux) Scrcpy installed on the computer ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:1:0","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#prerequisites"},{"categories":null,"collections":null,"content":"Step-by-Step Guide ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:2:0","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#step-by-step-guide"},{"categories":null,"collections":null,"content":"1. Install Scrcpy on Your Computer Windows: Download and run scrcpy.exe from the GitHub release page. 
macOS: Install via Homebrew: brew install scrcpy Linux: Install using your package manager, e.g., sudo apt install scrcpy ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:2:1","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#1-install-scrcpy-on-your-computer"},{"categories":null,"collections":null,"content":"2. Enable USB Debugging on Your Android Device Go to Settings \u0026gt; About phone and tap Build number 7 times to activate developer mode. In Settings \u0026gt; Developer options, enable USB debugging. ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:2:2","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#2-enable-usb-debugging-on-your-android-device"},{"categories":null,"collections":null,"content":"3. Connect Your Device via USB Connect the Android device to your computer using a USB cable. Accept the USB debugging prompt on the Android device. ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:2:3","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#3-connect-your-device-via-usb"},{"categories":null,"collections":null,"content":"4. Start Scrcpy Open a command-line interface. Run: scrcpy Scrcpy will automatically detect the connected Android device. ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:2:4","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#4-start-scrcpy"},{"categories":null,"collections":null,"content":"5. Mirroring Over Wi-Fi Connect your Android device via USB initially. Run: adb tcpip 5555 Find your device\u0026rsquo;s IP address. Disconnect the USB cable. 
Connect over Wi-Fi: adb connect \u0026lt;device_ip_address\u0026gt; Start Scrcpy again by running scrcpy. ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:2:5","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#5-mirroring-over-wi-fi"},{"categories":null,"collections":null,"content":"6. Control the Device Use your computer’s mouse and keyboard to interact with the Android device directly via Scrcpy. ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:2:6","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#6-control-the-device"},{"categories":null,"collections":null,"content":"Conclusion Scrcpy provides a fast, reliable, and simple method for mirroring and controlling Android devices on a computer. It is suitable for developers, presenters, and users who need remote access without additional heavy software. ","date":"28-04-2025","objectID":"/posts/software/android-mirroring-with-scrcpy/:3:0","tags":["android","mac","windows","linux"],"title":"Android Mirroring with Scrcpy","uri":"/posts/software/android-mirroring-with-scrcpy/#conclusion"},{"categories":["Software"],"collections":null,"content":"While Time Machine is traditionally used with an Apple Time Capsule or another Mac, it\u0026rsquo;s possible to set it up with a Windows machine as the server. This guide outlines the steps involved. ","date":"05-12-2024","objectID":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/:0:0","tags":["mac"],"title":"Backing Up Your Mac to a Windows Time Machine Server","uri":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/#"},{"categories":["Software"],"collections":null,"content":"Prerequisites A Windows machine with sufficient storage space. A Mac running macOS Monterey or later. 
Basic familiarity with file sharing and network settings on both platforms. ","date":"05-12-2024","objectID":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/:1:0","tags":["mac"],"title":"Backing Up Your Mac to a Windows Time Machine Server","uri":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/#prerequisites"},{"categories":["Software"],"collections":null,"content":"Step-by-Step Guide 1. Setting Up the Windows Server: Create a Dedicated Folder: On your Windows machine, create a folder named \u0026ldquo;TM\u0026rdquo; and ensure it has sufficient space for your backups. Share the Folder: Configure the \u0026ldquo;TM\u0026rdquo; folder to be shared over your network. This will allow your Mac to access it. 2. Preparing the Mac: Create a Sparsebundle Image: Open Disk Utility on your Mac. Click \u0026ldquo;File\u0026rdquo; \u0026gt; \u0026ldquo;New Image\u0026rdquo; \u0026gt; \u0026ldquo;Sparse Bundle\u0026rdquo;. Name the image \u0026ldquo;tm.dmg.sparsebundle\u0026rdquo;. Select \u0026ldquo;HFS+\u0026rdquo; as the partition type. Important: Enable 256-bit encryption for added security. Transfer the Image: Connect to your Windows machine\u0026rsquo;s shared \u0026ldquo;TM\u0026rdquo; folder. Move the \u0026ldquo;tm.dmg.sparsebundle\u0026rdquo; file into the \u0026ldquo;TM\u0026rdquo; folder on the Windows machine. 3. Configuring Time Machine: Mount the Image: Connect to your Windows machine via network and mount the \u0026ldquo;tm.dmg.sparsebundle\u0026rdquo; image. Set Destination: Open Terminal on your Mac and run the command tmutil setdestination /Volumes/TM (replace \u0026ldquo;TM\u0026rdquo; with the actual volume name of your mounted sparsebundle image). Configure Time Machine Settings: Go to System Settings \u0026gt; General \u0026gt; Time Machine. Select your new Time Machine destination (the Windows server). 4. Initiate Backup: Right-click on the Time Machine icon and select \u0026ldquo;Backup Now\u0026rdquo;. 
Time Machine will now begin backing up your Mac to the Windows server. 5. Restoring from Backup: Mount the \u0026ldquo;tm.dmg.sparsebundle\u0026rdquo; image on your Mac. Launch Migration Assistant. Select \u0026ldquo;Time Machine Restoration\u0026rdquo; and choose your backup from the available options. Follow the on-screen instructions to restore your data. ","date":"05-12-2024","objectID":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/:2:0","tags":["mac"],"title":"Backing Up Your Mac to a Windows Time Machine Server","uri":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/#step-by-step-guide"},{"categories":["Software"],"collections":null,"content":"Important Notes Your Mac will need to be connected to the network and have access to the shared folder on the Windows machine for backups to occur. Remember to regularly check the status of your Time Machine backups. ","date":"05-12-2024","objectID":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/:3:0","tags":["mac"],"title":"Backing Up Your Mac to a Windows Time Machine Server","uri":"/posts/software/backing-up-your-mac-to-a-windows-time-machine-server/#important-notes"},{"categories":["Software"],"collections":null,"content":"Many Mac users experience slowdowns on their hard drives (HDDs) when they are partitioned with APFS and reach full capacity. This is because APFS, while efficient, doesn\u0026rsquo;t offer defragmentation capabilities like HFS+. ","date":"04-12-2024","objectID":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/:0:0","tags":["mac"],"title":"HDD Getting Slow With APFS Partition on Mac","uri":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/#"},{"categories":["Software"],"collections":null,"content":"The Problem with APFS APFS (Apple File System) is a modern file system designed for speed and efficiency on solid-state drives (SSDs). 
However, it doesn\u0026rsquo;t have built-in defragmentation, which can lead to performance issues when the drive becomes full. ","date":"04-12-2024","objectID":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/:1:0","tags":["mac"],"title":"HDD Getting Slow With APFS Partition on Mac","uri":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/#the-problem-with-apfs"},{"categories":["Software"],"collections":null,"content":"The Solution: Switching to HFS+ A common solution is to reformat the HDD with HFS+ (Mac OS Extended). This allows for defragmentation, potentially improving performance. ","date":"04-12-2024","objectID":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/:2:0","tags":["mac"],"title":"HDD Getting Slow With APFS Partition on Mac","uri":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/#the-solution--switching-to-hfs"},{"categories":["Software"],"collections":null,"content":"Encryption Challenges with HFS+ However, there\u0026rsquo;s a catch. Disk Utility no longer supports encrypting HFS+ partitions directly. ","date":"04-12-2024","objectID":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/:3:0","tags":["mac"],"title":"HDD Getting Slow With APFS Partition on Mac","uri":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/#encryption-challenges-with-hfs"},{"categories":["Software"],"collections":null,"content":"The Workaround: Sparsebundle Images The workaround is to create a Sparsebundle Image with the following features: Format: HFS+ Encryption: Enabled (using a strong password) This encrypted Sparsebundle Image can then be placed within the HDD. 
","date":"04-12-2024","objectID":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/:4:0","tags":["mac"],"title":"HDD Getting Slow With APFS Partition on Mac","uri":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/#the-workaround-sparsebundle-images"},{"categories":["Software"],"collections":null,"content":"Benefits and Drawbacks Benefits: Defragmentation: HFS+ allows for defragmentation, potentially improving HDD performance. Encryption: Your data remains secure with robust encryption. Drawbacks: Dual Mounting: You\u0026rsquo;ll need to mount both the HDD and the Sparsebundle Image separately to access your files. ","date":"04-12-2024","objectID":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/:5:0","tags":["mac"],"title":"HDD Getting Slow With APFS Partition on Mac","uri":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/#benefits-and-drawbacks"},{"categories":["Software"],"collections":null,"content":"Conclusion While switching to HFS+ with encryption requires a bit more effort, it offers a viable solution for improving the performance of your HDD while keeping your data secure. Remember to choose a strong password for your Sparsebundle Image and store it securely. ","date":"04-12-2024","objectID":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/:6:0","tags":["mac"],"title":"HDD Getting Slow With APFS Partition on Mac","uri":"/posts/software/hdd-getting-slow-with-apfs-partition-on-mac/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"Sometimes, when working with VirtualBox on Windows, you might encounter an issue where your virtual machine list doesn\u0026rsquo;t update correctly, even after making changes like unlocking an encrypted drive. This can be frustrating when you\u0026rsquo;re trying to manage your VMs. This article will guide you through a solution to refresh your VirtualBox VM list via the command line interface (CLI) accessed via SSH. 
","date":"14-11-2024","objectID":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/:0:0","tags":["windows","ssh","virtualbox"],"title":"Refreshing Your Virtual Machine List in VirtualBox on Windows via SSH CLI","uri":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/#"},{"categories":["DevOps"],"collections":null,"content":"The Problem Let\u0026rsquo;s say you have a virtual machine stored on an encrypted drive. When Windows starts, the drive is locked. You can unlock the drive via SSH, but when you try to list your virtual machines using vboxmanage within your SSH session, you might still get an \u0026ldquo;access denied\u0026rdquo; error. ","date":"14-11-2024","objectID":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/:0:1","tags":["windows","ssh","virtualbox"],"title":"Refreshing Your Virtual Machine List in VirtualBox on Windows via SSH CLI","uri":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/#the-problem"},{"categories":["DevOps"],"collections":null,"content":"The Solution: Restarting the VirtualBox Service The solution to this issue is surprisingly simple: restart the VirtualBox service. SSH into your Windows Machine: Establish an SSH connection to your Windows machine. Stop the VirtualBox Service: Once connected, use the following command to stop the VirtualBox service: net stop \u0026#34;VirtualBox system Service\u0026#34; Start the VirtualBox Service: After the service has stopped, use the following command to start the VirtualBox service again: net start \u0026#34;VirtualBox system Service\u0026#34; List Your VMs: Now, try listing your virtual machines again using the following command: vboxmanage list vms Your virtual machine list should now be updated and display your unlocked encrypted drive VM correctly. 
","date":"14-11-2024","objectID":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/:0:2","tags":["windows","ssh","virtualbox"],"title":"Refreshing Your Virtual Machine List in VirtualBox on Windows via SSH CLI","uri":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/#the-solution-restarting-the-virtualbox-service"},{"categories":["DevOps"],"collections":null,"content":"Conclusion By restarting the VirtualBox service, you ensure that VirtualBox is aware of any changes to your drive, including unlocking encrypted drives. This simple step can save you a lot of frustration when managing your virtual machines. ","date":"14-11-2024","objectID":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/:0:3","tags":["windows","ssh","virtualbox"],"title":"Refreshing Your Virtual Machine List in VirtualBox on Windows via SSH CLI","uri":"/posts/devops/refreshing-your-virtual-machine-list-in-virtualbox-on-windows-via-ssh-cli/#conclusion"},{"categories":["Software"],"collections":null,"content":"This guide explains how to delete referenced photos from Mac Photos and safely remove referenced photos using a dedicated application. ","date":"08-11-2024","objectID":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/:0:0","tags":["mac","photos"],"title":"Delete Referenced Photos Origin Files From Mac Photos","uri":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/#"},{"categories":["Software"],"collections":null,"content":"Export Referenced Photos as Original Photos Open the Photos app on your Mac. Select the photos you want to delete. In the File menu, choose Export Original. Select a designated folder to save the exported photos. 
","date":"08-11-2024","objectID":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/:1:0","tags":["mac","photos"],"title":"Delete Referenced Photos Origin Files From Mac Photos","uri":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/#export-referenced-photos-as-original-photos"},{"categories":["Software"],"collections":null,"content":"Find Duplicate Files Download and install a duplicate finder application like DupeGuru. Launch DupeGuru and select the folder where you saved the exported photos and referenced photos folder. Run a scan to identify duplicate files. ","date":"08-11-2024","objectID":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/:2:0","tags":["mac","photos"],"title":"Delete Referenced Photos Origin Files From Mac Photos","uri":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/#find-duplicate-files"},{"categories":["Software"],"collections":null,"content":"Delete Duplicate Files Review the list of duplicate files generated by DupeGuru. Delete all identified duplicate files. Note: This process assumes all file names are unique. If duplicates have similar names, you might need to manually review and select the correct files to delete. ","date":"08-11-2024","objectID":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/:3:0","tags":["mac","photos"],"title":"Delete Referenced Photos Origin Files From Mac Photos","uri":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/#delete-duplicate-files"},{"categories":["Software"],"collections":null,"content":"Delete Photos from Mac Photos Return to the Photos app. The marked photos that were exported will now display a question mark indicating that the file is missing. Select the photos and choose Delete from the Edit menu. By following these steps, you can effectively delete photos from Mac Photos and remove any duplicate files, freeing up valuable storage space. 
","date":"08-11-2024","objectID":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/:4:0","tags":["mac","photos"],"title":"Delete Referenced Photos Origin Files From Mac Photos","uri":"/posts/software/delete-referenced-photos-origin-files-from-mac-photos/#delete-photos-from-mac-photos"},{"categories":["DevOps"],"collections":null,"content":"This article provides a step-by-step procedure for unlocking and accessing a Luks encrypted disk using SSH on an Ubuntu server. ","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:0:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#"},{"categories":["DevOps"],"collections":null,"content":"Install Dropbear To enable SSH access on your Ubuntu server, you\u0026rsquo;ll need to install Dropbear. Run the following commands: sudo apt update sudo apt upgrade sudo apt install dropbear-initramfs ","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:1:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#install-dropbear"},{"categories":["DevOps"],"collections":null,"content":"Configure Dropbear Edit the /etc/dropbear/initramfs/dropbear.conf file and add the following configuration: Config DROPBEAR_OPTIONS: DROPBEAR_OPTIONS=\u0026#34;-s -j -k -p 2222 -I 60\u0026#34; This configuration enables secure (private key-based) SSH connections, sets the port to 2222, and sets the idle timeout to 60 seconds. 
","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:2:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#configure-dropbear"},{"categories":["DevOps"],"collections":null,"content":"Set Static IP Address Edit the /etc/initramfs-tools/initramfs.conf file and add the following configuration: IP=192.168.1.10::192.168.1.1:255.255.255.0:hostname This sets a static IP address for your server. ","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:3:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#set-static-ip-address"},{"categories":["DevOps"],"collections":null,"content":"Generate Initramfs Image Update the initramfs image to include the new configuration: sudo update-initramfs -u ","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:4:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#generate-initramfs-image"},{"categories":["DevOps"],"collections":null,"content":"Add Public Keys to Authorized Keys File Copy the public key from your local machine to the /etc/dropbear/initramfs/authorized_keys file. You can do this by running cat /path/to/public/key \u0026gt; /etc/dropbear/initramfs/authorized_keys. 
","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:5:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#add-public-keys-to-authorized-keys-file"},{"categories":["DevOps"],"collections":null,"content":"Unlock Luks Encrypted Disk via SSH During boot, while the system waits for the LUKS passphrase, connect to the server over SSH on port 2222 (for example, ssh -p 2222 root@192.168.1.10) and run the cryptroot-unlock command. Enter the unlock passphrase when prompted; the boot process then continues, and once the system is up you can access the disk\u0026rsquo;s contents using standard Linux commands. ","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:6:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#unlock-luks-encrypted-disk-via-ssh"},{"categories":["DevOps"],"collections":null,"content":"Update Initramfs and Restart OS Update the initramfs image again: sudo update-initramfs -u Restart your Ubuntu server to apply the new configuration. When you restart, you should be able to access your Luks encrypted disk and unlock it via SSH using the cryptroot-unlock command. 
","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:7:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#update-initramfs-and-restart-os"},{"categories":["DevOps"],"collections":null,"content":"Verify SSH Connection and Unlock Luks Encrypted Disk When connecting, you can verify that the connection is working by adding the -v flag to ssh for verbose output and confirming that key-based authentication succeeds. If you want to unlock your LUKS-encrypted disk during this SSH session, you can run: cryptroot-unlock Enter the unlock passphrase when prompted. You\u0026rsquo;ll then be able to access the contents of your LUKS-encrypted disk using standard Linux commands. That\u0026rsquo;s it! You should now be able to access your Ubuntu server via SSH and unlock the LUKS-encrypted disk. ","date":"08-11-2024","objectID":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/:8:0","tags":["linux","ubuntu","ssh"],"title":"Unlocking Luks Encrypted Disk via SSH on Ubuntu Server","uri":"/posts/devops/unlocking-luks-encrypted-disk-via-ssh-on-ubuntu-server/#verify-ssh-connection-and-unlock-luks-encrypted-disk"},{"categories":["devops"],"collections":null,"content":"This article explains how to address a potential issue with OpenSSH configuration after uninstalling and reinstalling the service. ","date":"08-11-2024","objectID":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/:0:0","tags":["windows","ssh"],"title":"Resolving Missing OpenSSH Service Configuration After Uninstallation","uri":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/#"},{"categories":["devops"],"collections":null,"content":"The Problem When OpenSSH is uninstalled via the Installer, the configuration for the sshd service might be removed. 
Even if you reinstall OpenSSH through the Feature option, the configuration may still be missing. This can prevent SSH from functioning properly. ","date":"08-11-2024","objectID":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/:1:0","tags":["windows","ssh"],"title":"Resolving Missing OpenSSH Service Configuration After Uninstallation","uri":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/#the-problem"},{"categories":["devops"],"collections":null,"content":"The Missing Registry Key Specifically, the following registry key is often missing after uninstallation: HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\sshd This key is responsible for defining the \u0026ldquo;OpenSSH SSH Server\u0026rdquo; service. ","date":"08-11-2024","objectID":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/:2:0","tags":["windows","ssh"],"title":"Resolving Missing OpenSSH Service Configuration After Uninstallation","uri":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/#the-missing-registry-key"},{"categories":["devops"],"collections":null,"content":"The Solution To resolve this issue, you need to manually recreate the missing registry key and its corresponding values. 
Here is the default sshd registry file: openssh-ssh-server-registry.reg ","date":"08-11-2024","objectID":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/:3:0","tags":["windows","ssh"],"title":"Resolving Missing OpenSSH Service Configuration After Uninstallation","uri":"/posts/devops/resolving-missing-openssh-service-configuration-after-uninstallation/#the-solution"},{"categories":["devops"],"collections":null,"content":"While SSH is commonly associated with Linux and Unix systems, it can also be configured on Windows. This guide focuses on resolving SSH key loading issues specifically for Windows environments. ","date":"07-11-2024","objectID":"/posts/devops/resolving-ssh-key-loading-issues-on-windows/:0:0","tags":["windows","ssh"],"title":"Resolving SSH Key Loading Issues on Windows","uri":"/posts/devops/resolving-ssh-key-loading-issues-on-windows/#"},{"categories":["devops"],"collections":null,"content":"Understanding the Challenge Unlike on Linux/Unix, the default SSH configuration on Windows loads keys for members of the Administrators group from a shared file, typically %ProgramData%\\ssh\\administrators_authorized_keys, rather than from each user\u0026rsquo;s own profile. This can prevent key-based logins from working unless keys are explicitly added to that administrator-authorized file. ","date":"07-11-2024","objectID":"/posts/devops/resolving-ssh-key-loading-issues-on-windows/:1:0","tags":["windows","ssh"],"title":"Resolving SSH Key Loading Issues on Windows","uri":"/posts/devops/resolving-ssh-key-loading-issues-on-windows/#understanding-the-challenge"},{"categories":["devops"],"collections":null,"content":"Enabling User-Authorized Keys on Windows Locate the sshd_config File: On Windows, the sshd_config file is usually found in C:\\ProgramData\\ssh. 
Add User-Authorized Keys Configuration: Append the following line to the sshd_config file, ensuring it\u0026rsquo;s placed below any existing AuthorizedKeysFile directives: AuthorizedKeysFile ~/.ssh/authorized_keys This instructs OpenSSH for Windows to load keys from the .ssh/authorized_keys file located in each user\u0026rsquo;s home directory. Disable Administrator-Only Keys: Locate and comment out the following line in the sshd_config file: AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys Adding a # at the beginning of the line will effectively disable it. Restart the SSH Server: After making changes to the sshd_config file, restart the SSH server for the modifications to take effect. Command Prompt: net stop sshd \u0026amp;\u0026amp; net start sshd Create .ssh Directory: Ensure each user\u0026rsquo;s home directory contains a .ssh subdirectory. You can create it manually if it doesn\u0026rsquo;t exist. Generate Public-Private Key Pairs: Users should generate their own public-private key pairs using the ssh-keygen command (which is included with OpenSSH for Windows). Add Public Key to authorized_keys: Users should copy their public key (the content of the id_rsa.pub file) and add it to their ~/.ssh/authorized_keys file. By following these steps, you can configure OpenSSH for Windows to allow users to connect securely with their own authorized keys. ","date":"07-11-2024","objectID":"/posts/devops/resolving-ssh-key-loading-issues-on-windows/:2:0","tags":["windows","ssh"],"title":"Resolving SSH Key Loading Issues on Windows","uri":"/posts/devops/resolving-ssh-key-loading-issues-on-windows/#enabling-user-authorized-keys-on-windows"},{"categories":["DevOps"],"collections":null,"content":"Attempting to perform an rsync operation from a Linux or macOS system to a Windows machine directly using the rsync command will fail. This is because Windows does not have a built-in rsync utility. 
A typical command like rsync /source/ target-hostname:/mnt/e/target/ will result in an error. ","date":"06-11-2024","objectID":"/posts/devops/rsync-to-windows-open-ssh/:0:0","tags":["linux","windows","ssh","bash"],"title":"Rsync to Windows Open SSH","uri":"/posts/devops/rsync-to-windows-open-ssh/#"},{"categories":["DevOps"],"collections":null,"content":"Solution Leverage the power of the Windows Subsystem for Linux (WSL) to enable rsync functionality on your Windows system. ","date":"06-11-2024","objectID":"/posts/devops/rsync-to-windows-open-ssh/:1:0","tags":["linux","windows","ssh","bash"],"title":"Rsync to Windows Open SSH","uri":"/posts/devops/rsync-to-windows-open-ssh/#solution"},{"categories":["DevOps"],"collections":null,"content":"Prerequisites Enable OpenSSH Feature on Windows: Open Windows Features (search for it in the Start menu). Ensure the OpenSSH Server option is checked. Restart your computer for the changes to take effect. ","date":"06-11-2024","objectID":"/posts/devops/rsync-to-windows-open-ssh/:2:0","tags":["linux","windows","ssh","bash"],"title":"Rsync to Windows Open SSH","uri":"/posts/devops/rsync-to-windows-open-ssh/#prerequisites"},{"categories":["DevOps"],"collections":null,"content":"Steps Install WSL: Open the Microsoft Store and search for \u0026ldquo;Windows Subsystem for Linux\u0026rdquo;. Choose a distribution (e.g., Ubuntu) and install it. Install a Linux Distribution within WSL: After installation, open a WSL terminal (e.g., Ubuntu). Update the package list: sudo apt update Install the rsync package: sudo apt install rsync Execute Rsync with WSL: Modify your rsync command to include the wsl prefix for the rsync path: rsync --rsync-path=\u0026#39;wsl rsync\u0026#39; /source/ target-hostname:/mnt/e/target/ Replace /source/ with your local source directory. Replace target-hostname with the hostname of your Windows machine. Replace /mnt/e/target/ with the desired target directory on your Windows machine. 
The path should follow the Linux path format instead of the Windows format (E:\\target). Now, rsync should successfully transfer files between your Linux/macOS system and your Windows machine through WSL. ","date":"06-11-2024","objectID":"/posts/devops/rsync-to-windows-open-ssh/:3:0","tags":["linux","windows","ssh","bash"],"title":"Rsync to Windows Open SSH","uri":"/posts/devops/rsync-to-windows-open-ssh/#steps"},{"categories":["Development"],"collections":null,"content":"GitHub mobile app does not allow creating project items, forcing users to access the web version via Safari browser. However, when using the board view on an iPhone running Safari, the \u0026ldquo;Add Item\u0026rdquo; input is not visible when scrolling down. ","date":"20-08-2024","objectID":"/posts/development/unable-to-create-github-project-items-on-iphone-safari-browser/:0:0","tags":null,"title":"Unable to Create GitHub Project Items on iPhone Safari Browser","uri":"/posts/development/unable-to-create-github-project-items-on-iphone-safari-browser/#"},{"categories":["Development"],"collections":null,"content":"Solution 1: Add to Home Screen Open github.com from your iPhone\u0026rsquo;s Safari browser. Click the \u0026ldquo;Share\u0026rdquo; button on the bottom of the navigation bar. Select \u0026ldquo;Add to Home Screen.\u0026rdquo; On the \u0026ldquo;Add to Home Screen\u0026rdquo; page, click the \u0026ldquo;Add\u0026rdquo; button. Access GitHub from your home screen. You will now be using the PWA version without a navigation bar. 
","date":"20-08-2024","objectID":"/posts/development/unable-to-create-github-project-items-on-iphone-safari-browser/:1:0","tags":null,"title":"Unable to Create GitHub Project Items on iPhone Safari Browser","uri":"/posts/development/unable-to-create-github-project-items-on-iphone-safari-browser/#solution-1-add-to-home-screen"},{"categories":["Development"],"collections":null,"content":"Solution 2: Use Chrome App Alternatively, you can also use the Google Chrome app on your iPhone, which displays the \u0026ldquo;Add Item\u0026rdquo; input correctly when accessing GitHub project boards. By following these solutions, you should be able to create GitHub project items and access the board view without any issues on your iPhone. ","date":"20-08-2024","objectID":"/posts/development/unable-to-create-github-project-items-on-iphone-safari-browser/:2:0","tags":null,"title":"Unable to Create GitHub Project Items on iPhone Safari Browser","uri":"/posts/development/unable-to-create-github-project-items-on-iphone-safari-browser/#solution-2-use-chrome-app"},{"categories":["Software"],"collections":null,"content":"In this tutorial, we\u0026rsquo;ll guide you through the process of installing and setting up the ollama local AI chat service in VSCode on your Mac. We\u0026rsquo;ll also explore how to integrate it with the Continue extension for seamless AI-powered coding experiences. ","date":"21-06-2024","objectID":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/:0:0","tags":["mac","ai","vscode"],"title":"Setting up Local AI Chat on VSCode on Mac","uri":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/#"},{"categories":["Software"],"collections":null,"content":"Step 1: Install ollama To get started, open your terminal and run the following command: brew install ollama This will install ollama on your system. 
","date":"21-06-2024","objectID":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/:1:0","tags":["mac","ai","vscode"],"title":"Setting up Local AI Chat on VSCode on Mac","uri":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/#step-1-install-ollama"},{"categories":["Software"],"collections":null,"content":"Step 2: Pull Model (e.g., llama3) Once installed, you\u0026rsquo;ll need to pull a model for use with ollama. In this example, we\u0026rsquo;ll pull the llama3 model: ollama pull llama3 This will download and set up the selected model. ","date":"21-06-2024","objectID":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/:2:0","tags":["mac","ai","vscode"],"title":"Setting up Local AI Chat on VSCode on Mac","uri":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/#step-2-pull-model-eg-llama3"},{"categories":["Software"],"collections":null,"content":"Step 3: Run ollama Service Next, start the ollama service: ollama serve This will launch the ollama server on your local machine. ","date":"21-06-2024","objectID":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/:3:0","tags":["mac","ai","vscode"],"title":"Setting up Local AI Chat on VSCode on Mac","uri":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/#step-3-run-ollama-service"},{"categories":["Software"],"collections":null,"content":"Step 4: Install Continue Extension in VSCode Now, open VSCode and install the Continue extension from the Marketplace by searching for \u0026ldquo;Continue\u0026rdquo; or clicking this link: https://marketplace.visualstudio.com/items?itemName=Continue.continue Once installed, you can activate the extension by clicking on the Continue icon in the top-right corner of your VSCode window. 
","date":"21-06-2024","objectID":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/:4:0","tags":["mac","ai","vscode"],"title":"Setting up Local AI Chat on VSCode on Mac","uri":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/#step-4-install-continue-extension-in-vscode"},{"categories":["Software"],"collections":null,"content":"Step 5: Configure Continue Extension To integrate ollama with the Continue extension: Open the Continue tab. Click \u0026ldquo;Add New Model\u0026rdquo;. Select \u0026ldquo;Ollama for Local AI\u0026rdquo; as the model type. Choose \u0026ldquo;Autodetect\u0026rdquo; to add all installed ollama models. ","date":"21-06-2024","objectID":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/:5:0","tags":["mac","ai","vscode"],"title":"Setting up Local AI Chat on VSCode on Mac","uri":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/#step-5-configure-continue-extension"},{"categories":["Software"],"collections":null,"content":"Step 6: Engage with Your AI With ollama and Continue set up, you can now ask questions or seek answers directly from the Continue tab in VSCode. You can also use keyboard shortcuts visible when selecting code to engage with your AI model. By following these steps, you\u0026rsquo;ll be able to leverage local AI chat capabilities within VSCode on your Mac. ","date":"21-06-2024","objectID":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/:6:0","tags":["mac","ai","vscode"],"title":"Setting up Local AI Chat on VSCode on Mac","uri":"/posts/software/setting-up-local-ai-chat-on-vscode-on-mac/#step-6-engage-with-your-ai"},{"categories":["Software"],"collections":null,"content":"In this tutorial, we will set up a local AI chat client using ollama and Open WebUI. This will allow us to interact with our AI model locally without relying on any cloud services. 
","date":"20-06-2024","objectID":"/posts/software/local-ai-chat-on-web-based-client-on-mac/:0:0","tags":["mac","ai"],"title":"Local AI Chat on Web-Based Client on Mac","uri":"/posts/software/local-ai-chat-on-web-based-client-on-mac/#"},{"categories":["Software"],"collections":null,"content":"Install ollama First, install ollama using Homebrew: brew install ollama ","date":"20-06-2024","objectID":"/posts/software/local-ai-chat-on-web-based-client-on-mac/:1:0","tags":["mac","ai"],"title":"Local AI Chat on Web-Based Client on Mac","uri":"/posts/software/local-ai-chat-on-web-based-client-on-mac/#install-ollama"},{"categories":["Software"],"collections":null,"content":"Install model for ollama Next, we need to download the model we want to use. For this example, we\u0026rsquo;ll use the \u0026ldquo;llama3\u0026rdquo; model: ollama pull llama3 ","date":"20-06-2024","objectID":"/posts/software/local-ai-chat-on-web-based-client-on-mac/:2:0","tags":["mac","ai"],"title":"Local AI Chat on Web-Based Client on Mac","uri":"/posts/software/local-ai-chat-on-web-based-client-on-mac/#install-model-for-ollama"},{"categories":["Software"],"collections":null,"content":"Start ollama service Start the ollama service: ollama serve This will start the ollama server, which we can then connect to using Open WebUI. 
","date":"20-06-2024","objectID":"/posts/software/local-ai-chat-on-web-based-client-on-mac/:3:0","tags":["mac","ai"],"title":"Local AI Chat on Web-Based Client on Mac","uri":"/posts/software/local-ai-chat-on-web-based-client-on-mac/#start-ollama-service"},{"categories":["Software"],"collections":null,"content":"Install Open WebUI To install Open WebUI, create a docker-compose.yml file with the following content: version: \u0026#34;3.8\u0026#34; services: app: image: ghcr.io/open-webui/open-webui:main restart: always ports: - \u0026#34;$PORT:8080\u0026#34; volumes: - data:/app/backend/data networks: - default environment: HOST_GATEWAY: host-gateway networks: default: volumes: data: Then, create a .env file with the following content: PORT=4000 ","date":"20-06-2024","objectID":"/posts/software/local-ai-chat-on-web-based-client-on-mac/:4:0","tags":["mac","ai"],"title":"Local AI Chat on Web-Based Client on Mac","uri":"/posts/software/local-ai-chat-on-web-based-client-on-mac/#install-open-webui"},{"categories":["Software"],"collections":null,"content":"Run docker compose Finally, run docker-compose up -d to start the Open WebUI service in detached mode. docker-compose up -d This will start the Open WebUI server and make it available at http://localhost:4000. ","date":"20-06-2024","objectID":"/posts/software/local-ai-chat-on-web-based-client-on-mac/:5:0","tags":["mac","ai"],"title":"Local AI Chat on Web-Based Client on Mac","uri":"/posts/software/local-ai-chat-on-web-based-client-on-mac/#run-docker-compose"},{"categories":["Software"],"collections":null,"content":"Open web browser and interact with AI Open a web browser and navigate to http://localhost:4000. You should see the Open WebUI interface. Select the \u0026ldquo;llama3\u0026rdquo; model, and you\u0026rsquo;re ready to start interacting with your AI locally. You can now create articles, have conversations, and more using your local AI chat client. 
","date":"20-06-2024","objectID":"/posts/software/local-ai-chat-on-web-based-client-on-mac/:6:0","tags":["mac","ai"],"title":"Local AI Chat on Web-Based Client on Mac","uri":"/posts/software/local-ai-chat-on-web-based-client-on-mac/#open-web-browser-and-interact-with-ai"},{"categories":["Software"],"collections":null,"content":"In this post, we\u0026rsquo;ll be diving into the installation and usage of ollama, a local chat AI that runs on your Mac. ","date":"19-06-2024","objectID":"/posts/software/local-chat-ai-on-mac/:0:0","tags":["mac","ai"],"title":"Local Chat AI on Mac","uri":"/posts/software/local-chat-ai-on-mac/#"},{"categories":["Software"],"collections":null,"content":"Installing Ollama brew install ollama Once installed, you can pull down a pre-trained model (in this case, we\u0026rsquo;ll be using the \u0026ldquo;llama3\u0026rdquo; model): ollama pull llama3 ","date":"19-06-2024","objectID":"/posts/software/local-chat-ai-on-mac/:1:0","tags":["mac","ai"],"title":"Local Chat AI on Mac","uri":"/posts/software/local-chat-ai-on-mac/#installing-ollama"},{"categories":["Software"],"collections":null,"content":"Serving Ollama ollama serve This will start the ollama server and make it available for you to interact with. 
","date":"19-06-2024","objectID":"/posts/software/local-chat-ai-on-mac/:2:0","tags":["mac","ai"],"title":"Local Chat AI on Mac","uri":"/posts/software/local-chat-ai-on-mac/#serving-ollama"},{"categories":["Software"],"collections":null,"content":"Creating a New Terminal Session and Running Llama Open up another terminal session (or a new tab) and run ollama with the pulled model: ollama run llama3 ","date":"19-06-2024","objectID":"/posts/software/local-chat-ai-on-mac/:3:0","tags":["mac","ai"],"title":"Local Chat AI on Mac","uri":"/posts/software/local-chat-ai-on-mac/#creating-a-new-terminal-session-and-running-llama"},{"categories":["Software"],"collections":null,"content":"Chatting with Llama AI Now, you can start chatting with your very own local chat AI! Simply type away in this terminal session, and ollama will respond to your questions and engage in conversation. ","date":"19-06-2024","objectID":"/posts/software/local-chat-ai-on-mac/:4:0","tags":["mac","ai"],"title":"Local Chat AI on Mac","uri":"/posts/software/local-chat-ai-on-mac/#chatting-with-llama-ai"},{"categories":["DevOps","Software"],"collections":null,"content":"We will demonstrate a basic network address translation (NAT) setup, allowing multiple devices to access a server through various routers and wireless networks. This setup enables multiple devices from different locations to connect to a single server, making it a crucial component of many modern networks. 
","date":"14-06-2024","objectID":"/posts/devops/nat-setup-using-cisco-packet-tracer/:0:0","tags":["packet-tracer"],"title":"NAT Setup Using Cisco Packet Tracer","uri":"/posts/devops/nat-setup-using-cisco-packet-tracer/#"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 1: Adding Devices Open Cisco Packet Tracer and create the following devices: Laptop0 Smartphone0 PC0 Wireless Router0 Wireless Router1 Wireless Router2 Server0 ","date":"14-06-2024","objectID":"/posts/devops/nat-setup-using-cisco-packet-tracer/:1:0","tags":["packet-tracer"],"title":"NAT Setup Using Cisco Packet Tracer","uri":"/posts/devops/nat-setup-using-cisco-packet-tracer/#step-1-adding-devices"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 2: Configuring IP Addresses Configure the IP addresses as follows: Laptop0: DHCP (obtain an IP address from the router) Smartphone0: DHCP (obtain an IP address from the router) PC0: DHCP (obtain an IP address from the router) Wireless Router0: Internet IP - DHCP, Router IP - 192.168.1.1/24 Wireless Router1: Internet IP - DHCP, Router IP - 192.168.2.1/24 Wireless Router2: Internet IP - 10.0.0.100/8, Router IP - 192.168.0.1/24 Server0: IP - 10.0.0.101/8 ","date":"14-06-2024","objectID":"/posts/devops/nat-setup-using-cisco-packet-tracer/:2:0","tags":["packet-tracer"],"title":"NAT Setup Using Cisco Packet Tracer","uri":"/posts/devops/nat-setup-using-cisco-packet-tracer/#step-2-configuring-ip-addresses"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 3: Creating Connections Create the following connections: Cable from PC0 (LAN port) to Wireless Router1 (LAN port) Cable from Wireless Router0 (Internet port) to Wireless Router2 (LAN port) Cable from Wireless Router1 (Internet port) to Wireless Router2 (LAN port) Cable from Wireless Router2 (Internet port) to Server0 (LAN port) 
","date":"14-06-2024","objectID":"/posts/devops/nat-setup-using-cisco-packet-tracer/:3:0","tags":["packet-tracer"],"title":"NAT Setup Using Cisco Packet Tracer","uri":"/posts/devops/nat-setup-using-cisco-packet-tracer/#step-3-creating-connections"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 4: Testing Connectivity Test the connectivity by: Pinging from each device to Server0: Laptop0 Smartphone0 PC0 All pings should be successful, indicating that NAT is working as expected. ","date":"14-06-2024","objectID":"/posts/devops/nat-setup-using-cisco-packet-tracer/:4:0","tags":["packet-tracer"],"title":"NAT Setup Using Cisco Packet Tracer","uri":"/posts/devops/nat-setup-using-cisco-packet-tracer/#step-4-testing-connectivity"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 5: Testing Web Browsing Test web browsing from each device to Server0: Open a web browser on Laptop0 and access the IP address of Server0 (e.g., http://10.0.0.101) The web page should load successfully. Repeat the same test with Smartphone0 and PC0. ","date":"14-06-2024","objectID":"/posts/devops/nat-setup-using-cisco-packet-tracer/:5:0","tags":["packet-tracer"],"title":"NAT Setup Using Cisco Packet Tracer","uri":"/posts/devops/nat-setup-using-cisco-packet-tracer/#step-5-testing-web-browsing"},{"categories":["DevOps","Software"],"collections":null,"content":"Expected Results All devices should be able to access the server through different routers and wireless networks, indicating that NAT is working as expected. This article demonstrates a basic NAT setup using Cisco Packet Tracer, where devices can access a server through different routers and wireless networks. The NAT allows devices on different subnets to communicate with each other. 
","date":"14-06-2024","objectID":"/posts/devops/nat-setup-using-cisco-packet-tracer/:6:0","tags":["packet-tracer"],"title":"NAT Setup Using Cisco Packet Tracer","uri":"/posts/devops/nat-setup-using-cisco-packet-tracer/#expected-results"},{"categories":["DevOps","Software"],"collections":null,"content":"This guide uses Cisco Packet Tracer software to demonstrate switching and routing concepts, allowing users to explore and understand network architectures in a virtual laboratory environment. With this resource, users can design, build, test, and troubleshoot networks, experimenting with different scenarios and configurations in a risk-free setting. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:0:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#"},{"categories":["DevOps","Software"],"collections":null,"content":"Cross Over Cable We will set up a simple network with two devices (PC1 and PC2) connected via a cross-over cable using the Cisco Packet Tracer. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:1:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#cross-over-cable"},{"categories":["DevOps","Software"],"collections":null,"content":"Network Address: 192.168.1.0/24 The IP address range for our network is 192.168.1.0/24, which means all devices on this network will have an IP address in the range of 192.168.1.0 to 192.168.1.255. 
","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:1:1","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#network-address-1921681024"},{"categories":["DevOps","Software"],"collections":null,"content":"Set PC1 Name: PC1 IP: 192.168.1.2 Subnet: 255.255.255.0 PC1 is our first device, which will be set up with an IP address of 192.168.1.2 and a subnet mask of 255.255.255.0. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:1:2","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#set-pc1"},{"categories":["DevOps","Software"],"collections":null,"content":"Set PC2 IP: 192.168.1.3 Subnet: 255.255.255.0 PC2 is our second device, which will be set up with an IP address of 192.168.1.3 and a subnet mask of 255.255.255.0. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:1:3","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#set-pc2"},{"categories":["DevOps","Software"],"collections":null,"content":"Connect PC1 to PC2 Copper Cross-Over Now that we have both devices set up, let\u0026rsquo;s connect them using a cross-over cable. Connect the copper cable from PC1\u0026rsquo;s Ethernet port to PC2\u0026rsquo;s Ethernet port. The cross-over cable is used to connect two devices of the same type (in this case, both are PCs) without the need for a switch or router. 
","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:1:4","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#connect-pc1-to-pc2-copper-cross-over"},{"categories":["DevOps","Software"],"collections":null,"content":"Verifying Connectivity Once the devices are connected, let\u0026rsquo;s verify that they can communicate with each other. Open up a command prompt on PC1 and ping PC2\u0026rsquo;s IP address: ping 192.168.1.3 You should see a response from PC2 indicating that it is reachable. Similarly, open up a command prompt on PC2 and ping PC1\u0026rsquo;s IP address: ping 192.168.1.2 You should see a response from PC1 indicating that it is reachable. This demonstrates that the devices are able to communicate with each other using the cross-over cable. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:1:5","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#verifying-connectivity"},{"categories":["DevOps","Software"],"collections":null,"content":"Straight-Through Cable (No Connection) ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:2:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#straight-through-cable-no-connection"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 1: Configure PC3 Open Cisco Packet Tracer and create a new workspace. Drag one PC (PC3) from the \u0026ldquo;Devices\u0026rdquo; tab to the workspace. Right-click on PC3 and select \u0026ldquo;Edit Settings\u0026rdquo;. 
In the Edit Settings window, set the following: IP Address: 192.168.1.2 Subnet Mask: 255.255.255.0 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:2:1","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-1-configure-pc3"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 2: Configure PC4 Drag another PC (PC4) from the \u0026ldquo;Devices\u0026rdquo; tab to the workspace. Right-click on PC4 and select \u0026ldquo;Edit Settings\u0026rdquo;. In the Edit Settings window, set the following: IP Address: 192.168.1.3 Subnet Mask: 255.255.255.0 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:2:2","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-2-configure-pc4"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 3: Represent the straight-through cable Draw a straight line between PC3 and PC4, but do not connect them with an actual cable. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:2:3","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-3-represent-the-straight-through-cable"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 4: Verify that the PCs are not connected Right-click on each PC and select \u0026ldquo;Ping\u0026rdquo; to verify that they cannot communicate with each other. You should see a failure message indicating that the ping request timed out. 
This demonstrates that even though we have two PCs in close proximity, if there is no actual physical connection between them (i.e., no cable), they will not be able to communicate with each other. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:2:4","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-4-verify-that-the-pcs-are-not-connected"},{"categories":["DevOps","Software"],"collections":null,"content":"Hub We will learn how to set up a simple network using the Cisco Packet Tracer tool. We will create a hub and connect three PCs (PC5, PC6, and PC7) to it using copper straight-through cables. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:3:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#hub"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 1: Set up the Network Address Open the Cisco Packet Tracer tool. Create a new network by clicking on \u0026ldquo;File\u0026rdquo; \u0026gt; \u0026ldquo;New Network\u0026rdquo;. Set the Network Address to 192.168.1.0/24, which corresponds to a subnet mask of 255.255.255.0. 
Assign an IP address to each PC: PC5: 192.168.1.2 with subnet mask 255.255.255.0 PC6: 192.168.1.3 with subnet mask 255.255.255.0 PC7: 192.168.1.4 with subnet mask 255.255.255.0 Connect each PC to the hub using a copper straight-through cable. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:3:2","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-2-create-pcs-and-connect-them-to-the-hub"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 3: Verify the Network Click on \u0026ldquo;Hub\u0026rdquo; in the toolbar and select \u0026ldquo;Properties\u0026rdquo;. Verify that all three PCs are connected to the hub and have been assigned an IP address. Use the \u0026ldquo;Network Explorer\u0026rdquo; tool to visualize the network topology. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:3:3","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-3-verify-the-network"},{"categories":["DevOps","Software"],"collections":null,"content":"Switch We will learn how to create a basic network with a switch and connect multiple devices (PCs) to it. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:4:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#switch"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 1: Create the Devices Open Cisco Packet Tracer and click on New to start a new simulation. In the Device List window, right-click and select Create Switch once. Name this device as Switch0. 
Right-click again and select Create PC three times. Name these devices as PC8, PC9, and PC10. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:4:1","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-1-create-the-devices"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 2: Configure the Switch Right-click on Switch0 and select Edit. This will open the switch\u0026rsquo;s configuration page. Under the General tab, set the following information: Device Type: Switch Name: Switch0 (or any name you prefer) Click OK to save your changes. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:4:2","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-2-configure-the-switch"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 3: Configure the PCs Right-click on each PC and select Edit. This will open the PC\u0026rsquo;s configuration page. Under the IP tab, set the following information for each PC: PC8: IP Address = 192.168.1.2, Subnet Mask = 255.255.255.0 PC9: IP Address = 192.168.1.3, Subnet Mask = 255.255.255.0 PC10: IP Address = 192.168.1.4, Subnet Mask = 255.255.255.0 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:4:3","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-3-configure-the-pcs"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 4: Connect the PCs to the Switch Use the Drag-and-Drop feature to connect each PC to Switch0. 
Drag the PC8 icon and drop it onto a port on Switch0 (e.g., Port 1). Repeat this process for PC9 and PC10, connecting them to separate ports on Switch0. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:4:4","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-4-connect-the-pcs-to-the-switch"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 5: Verify Connectivity Use the Ping feature to verify connectivity between the PCs. Select one PC and right-click on it. Select Ping and enter the IP address of another PC (e.g., PC9\u0026rsquo;s 192.168.1.3). If everything is configured correctly, you should see a successful ping response. That\u0026rsquo;s it! You have now created a basic network with a switch and multiple PCs connected to it. This is a great starting point for exploring more advanced networking concepts in Cisco Packet Tracer. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:4:5","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-5-verify-connectivity"},{"categories":["DevOps","Software"],"collections":null,"content":"Routing We will demonstrate how to configure a simple network using Cisco Packet Tracer. We will create a network with two PCs and one router, joined by two switches. We will then connect these devices and verify the routing configuration. 
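Once the devices are in place, the router addressing can also be applied from the CLI tab of Router0. This is a minimal sketch; the interface names GigabitEthernet0/0 and GigabitEthernet0/1 are assumptions and may differ depending on the router model you use:

```
Router0> enable
Router0# configure terminal
Router0(config)# interface GigabitEthernet0/0
Router0(config-if)# ip address 192.168.1.1 255.255.255.0
Router0(config-if)# no shutdown
Router0(config-if)# interface GigabitEthernet0/1
Router0(config-if)# ip address 192.168.2.1 255.255.255.0
Router0(config-if)# no shutdown
Router0(config-if)# end
```

On the PCs, set the matching default gateway (192.168.1.1 for PC0 and 192.168.2.1 for PC11) so that traffic can cross between the two subnets. 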
","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:5:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#routing"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 1: Create the Network Devices PC0 (IP address: 192.168.1.10, subnet mask: 255.255.255.0) PC11 (IP address: 192.168.2.10, subnet mask: 255.255.255.0) Router0 (IP addresses: 192.168.1.1 and 192.168.2.1, subnet masks: 255.255.255.0) ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:5:1","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-1-create-the-network-devices"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 2: Add Switches Switch1 Switch2 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:5:2","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-2-add-switches"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 3: Connect Devices Together PC0 will be connected to Switch1 Switch1 will be connected to Router0 Router0 will be connected to Switch2 Switch2 will be connected to PC11 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:5:3","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-3-connect-devices-together"},{"categories":["DevOps","Software"],"collections":null,"content":"Step 4: Verify Routing Configuration We can verify that the routing 
configuration is correct. We can see that: PC0 has a default route pointing to Router0 (192.168.1.1) PC11 has a default route pointing to Router0 (192.168.2.1) Router0 has routes for both subnets (192.168.1.0/24 and 192.168.2.0/24) ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:5:4","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-4-verify-routing-configuration"},{"categories":["DevOps","Software"],"collections":null,"content":"Static Routing We will be exploring the concept of static routing in Cisco Packet Tracer. We will set up a network with multiple devices and configure static routes to ensure that traffic flows correctly between them. ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:6:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#static-routing"},{"categories":["DevOps","Software"],"collections":null,"content":"Network Topology Our network consists of six devices: two PCs (PC12 and PC13), two routers (Router3 and Router4), and two switches (Switch3 and Switch4). Switch3 sits between PC12 and Router3, and Switch4 sits between Router4 and PC13. 
We will be using the following IP addresses: PC12: 192.168.1.10/24 PC13: 192.168.2.10/24 Router3: 192.168.1.1/24 and 192.168.3.1/24 Router4: 192.168.2.1/24 and 192.168.3.2/24 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:6:1","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#network-topology"},{"categories":["DevOps","Software"],"collections":null,"content":"Step-by-Step Configuration Configure PC12 Set the IP address of PC12 to 192.168.1.10 with a subnet mask of 255.255.255.0. Configure PC13 Set the IP address of PC13 to 192.168.2.10 with a subnet mask of 255.255.255.0. Configure Router3 Assign Router3 the IP addresses 192.168.1.1 and 192.168.3.1, each with a subnet mask of 255.255.255.0. Configure Router4 Assign Router4 the IP addresses 192.168.2.1 and 192.168.3.2, each with a subnet mask of 255.255.255.0. Configure Switch3 Connect PC12 to Switch3 and connect Switch3 to Router3. Configure Switch4 Connect Router4 to Switch4 and connect Switch4 to PC13. Now that we have configured the IP addresses and connected the devices, we need to configure static routes on Router3 and Router4 so that traffic flows correctly between them. We will create two static routes: Route 1 On Router3, for traffic from the 192.168.1.0/24 subnet to Router4\u0026rsquo;s 192.168.2.0/24 subnet. Set the destination network to 192.168.2.0 with mask 255.255.255.0 and the next-hop IP address to Router4\u0026rsquo;s 192.168.3.2. Route 2 On Router4, for traffic from the 192.168.2.0/24 subnet to Router3\u0026rsquo;s 192.168.1.0/24 subnet. Set the destination network to 192.168.1.0 with mask 255.255.255.0 and the next-hop IP address to Router3\u0026rsquo;s 192.168.3.1. 
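These two routes can also be entered from the CLI tab of each router. A minimal sketch, assuming you are already in global configuration mode; note that the destination of each ip route command is the remote network address and mask (not a host address), followed by the next hop:

```
Router3(config)# ip route 192.168.2.0 255.255.255.0 192.168.3.2
Router4(config)# ip route 192.168.1.0 255.255.255.0 192.168.3.1
```

With both routes in place, PC12 (192.168.1.10) should be able to ping PC13 (192.168.2.10) across the 192.168.3.0/24 link between the routers. 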
","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:6:2","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-by-step-configuration"},{"categories":["DevOps","Software"],"collections":null,"content":"RIP Dynamic Routing ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:7:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#rip-dynamic-routing"},{"categories":["DevOps","Software"],"collections":null,"content":"Network Topology Our network consists of ten devices: three PCs (PC14 to PC16), four switches (Switch5 to Switch8), and three routers (Router1, Router2, and Router5). The IP addresses and subnets for each device are as follows: PCs PC14: IP 192.168.1.2, Subnet 255.255.255.0 PC15: IP 192.168.2.2, Subnet 255.255.255.0 PC16: IP 192.168.3.2, Subnet 255.255.255.0 Routers Router1: IP 192.168.1.1, Subnet 255.255.255.0; IP 192.168.10.1, Subnet 255.255.255.0 Router2: IP 192.168.2.1, Subnet 255.255.255.0; IP 192.168.10.2, Subnet 255.255.255.0 Switches Switch5: connected to Router1 and PC14 Switch6: connected to Router2 and PC15 Switch7: connected to Router1, Router2, and Router5 Switch8: connected to Router5 and PC16 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:7:1","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#network-topology-1"},{"categories":["DevOps","Software"],"collections":null,"content":"RIP Routing We\u0026rsquo;ll configure RIP routing on each router to dynamically learn the network topology. 
Here\u0026rsquo;s a summary of the RIP routing information for each router: Router1: 192.168.1.0/24 192.168.10.0/24 Router2: 192.168.2.0/24 192.168.10.0/24 Router5: 192.168.3.0/24 192.168.10.0/24 ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:7:2","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#rip-routing"},{"categories":["DevOps","Software"],"collections":null,"content":"Step-by-Step Configuration To configure the network, follow these steps: Connect PC14 to Switch5. Connect Switch5 to Router1. Connect Router1 to Switch7. Connect PC15 to Switch6. Connect Switch6 to Router2. Connect Router2 to Switch7. Connect PC16 to Switch8. Connect Switch8 to Router5. Connect Router5 to Switch7. After configuring the network, you can observe that RIP has automatically learned routes between the devices. For example: Router1 learns about the 192.168.2.0/24 network through Router2 and the 192.168.3.0/24 network through Router5. Router2 learns about the 192.168.1.0/24 network through Router1 and the 192.168.3.0/24 network through Router5. 
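The RIP setup can be done from the CLI tab of each router by advertising its directly connected networks. A minimal sketch for Router1 (Router2 and Router5 follow the same pattern with their own networks; the version 2 line is an optional but common choice):

```
Router1> enable
Router1# configure terminal
Router1(config)# router rip
Router1(config-router)# version 2
Router1(config-router)# network 192.168.1.0
Router1(config-router)# network 192.168.10.0
Router1(config-router)# end
```

Router2 would advertise 192.168.2.0 and 192.168.10.0, and Router5 would advertise 192.168.3.0 and 192.168.10.0, matching the summary above. 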
","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:7:3","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#step-by-step-configuration-1"},{"categories":["DevOps","Software"],"collections":null,"content":"Example Switching and Routing.pkt ","date":"13-06-2024","objectID":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/:8:0","tags":["packet-tracer"],"title":"Network Switching and Routing using Cisco Packet Tracer","uri":"/posts/devops/network-switching-and-routing-using-cisco-packet-tracer/#example"},{"categories":["Productivity"],"collections":null,"content":"As software development teams strive for better estimation and planning, they often face challenges in evaluating the complexity of tasks. To address this, we can leverage the Kanban T-shirt size framework to estimate effort, complexity, uncertainty, and risk. In this article, we\u0026rsquo;ll explore the common factors that influence our work and provide a practical guide on how to apply the Kanban T-shirt size. ","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:0:0","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#"},{"categories":["Productivity"],"collections":null,"content":"Sizes To simplify the estimation process, we can use a Kanban T-shirt size framework to categorize tasks based on these four factors. The most common sizes are: XS (Extra Small): Effort is low, complexity is simple, uncertainty is minimal, and risk is negligible. S (Small): Effort is moderate, complexity is straightforward, uncertainty is limited, and risk is manageable. M (Medium): Effort is substantial, complexity is moderate, uncertainty is moderate, and risk is moderate. 
L (Large): Effort is high, complexity is significant, uncertainty is considerable, and risk is substantial. XL (Extra Large): Effort is very high, complexity is complex, uncertainty is high, and risk is high. ","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:1:0","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#sizes"},{"categories":["Productivity"],"collections":null,"content":"Factors To accurately assess the complexity of tasks, we need to consider four key factors: ","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:2:0","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#factors"},{"categories":["Productivity"],"collections":null,"content":"Effort This factor focuses on the amount of work involved in completing a task. It\u0026rsquo;s essential to estimate the time required to complete the task, taking into account any necessary skills or expertise. T-Shirt Size Complexity Level Description Example Tiny Low Complexity Requires minimal understanding of the system and can be completed with basic skills. - Restarting a service with a documented procedure (clear steps involved). Small Moderate Complexity Involves a well-defined task with a clear understanding of the requirements, but might require some logic or integration with existing functionalities. - Implementing a new button that triggers a specific action in the system, but requires understanding how the action interacts with other parts of the code (e.g., updating a database). Medium High Complexity Requires some investigation and understanding of the system\u0026rsquo;s logic. May involve new functionalities or interactions with existing functionalities. - Creating a new user interface element that interacts with different parts of the system in a new way, requiring understanding of data flow and user interactions. 
Large Very High Complexity Involves significant unknowns or requires working with new technologies or complex integrations with specific requirements. - Integrating with a third-party service with a custom API that requires in-depth understanding of the API structure and data format (unknown behavior or complex data manipulation). X-Large Extremely High Complexity Requires extensive planning, design, and potentially involves research or innovation. May have a high degree of uncertainty or risk. - Implementing a new recommendation system that personalizes content for each user. This involves complex algorithms, data analysis, and user interface design considerations (unfamiliar technology and innovative approach). ","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:2:1","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#effort"},{"categories":["Productivity"],"collections":null,"content":"Complexity This factor focuses on the level of understanding and investigation needed to complete a task. Tasks with higher complexity levels may require more research, analysis, or creative problem-solving. T-Shirt Size (Uncertainty) Description Example Tiny (Low Uncertainty) Clear understanding of the task and requirements. Minimal risk of unexpected issues. - Fixing a typo with a well-defined solution and a documented approach. Small (Moderate Uncertainty) The task is well-defined, but there might be minor unknowns about data formats or edge cases. - Implementing a new button with a clear function, but unsure of the exact wording for the button label based on user interface design preferences. A/B testing can help determine the best option with minimal impact. Medium (High Uncertainty) There are unknowns about the specific user needs or how a feature will be used, but the core functionality is clear. 
- Creating a new report based on existing data, but unsure of exactly which data points are most relevant to users. User research might be needed to refine the report format and content. Large (Very High Uncertainty) Significant unknowns about the task, potentially involving new technologies or external dependencies. High risk of encountering unforeseen challenges. - Developing a new functionality using a cutting-edge technology that the team has no prior experience with. There could be unforeseen challenges with integrating the new technology or limitations in its capabilities. X-Large (Extremely High Uncertainty) The task involves significant unknowns or requires innovation with limited understanding of the potential challenges. High degree of risk and potential for project pivots. - Implementing a completely new business model that relies on user behavior and market adoption, with limited industry examples. There\u0026rsquo;s a high chance of needing to adapt the approach based on user feedback and market response. ","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:2:2","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#complexity"},{"categories":["Productivity"],"collections":null,"content":"Uncertainty This factor focuses on the unknowns and potential surprises that might arise during a task. Uncertainty can stem from unclear requirements, changing circumstances, or unpredictable dependencies. Here\u0026rsquo;s a table focusing on uncertainty in T-Shirt sizing, with distinct examples from effort and complexity: T-Shirt Size (Uncertainty) Description Example Tiny (Low Uncertainty) Clear understanding of the task and requirements. Minimal risk of unexpected issues. - Fixing a typo with a well-defined solution and a documented approach. Small (Moderate Uncertainty) The task is well-defined, but there might be minor unknowns about data formats or edge cases. 
- Implementing a new button with a clear function, but unsure of the exact wording for the button label based on user interface design preferences. A/B testing can help determine the best option with minimal impact. Medium (High Uncertainty) There are unknowns about the specific user needs or how a feature will be used, but the core functionality is clear. - Creating a new report based on existing data, but unsure of exactly which data points are most relevant to users. User research might be needed to refine the report format and content. Large (Very High Uncertainty) Significant unknowns about the task, potentially involving new technologies or external dependencies. High risk of encountering unforeseen challenges. - Developing a new functionality using a cutting-edge technology that the team has no prior experience with. There could be unforeseen challenges with integrating the new technology or limitations in its capabilities. X-Large (Extremely High Uncertainty) The task involves significant unknowns or requires innovation with limited understanding of the potential challenges. High degree of risk and potential for project pivots. - Implementing a completely new business model that relies on user behavior and market adoption, with limited industry examples. There\u0026rsquo;s a high chance of needing to adapt the approach based on user feedback and market response. ","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:2:3","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#uncertainty"},{"categories":["Productivity"],"collections":null,"content":"Risk This factor focuses on the potential for problems and their impact on the project. High-risk tasks may have significant consequences if not completed successfully, such as project delays, budget overruns, or even cancellation. T-Shirt Size (Risk) Description Example Tiny (Low Risk) Minimal chance of encountering problems during development. 
Easy to recover from any unexpected issues. - Fixing a typo with a well-defined solution and a documented approach. Small (Moderate Risk) There\u0026rsquo;s a small possibility of minor issues arising due to data formats or edge cases. Easy to adjust and course-correct if needed. - Implementing a new button with a clear function, but unsure of the exact wording for the button label based on user interface design preferences. A/B testing can help determine the best option with minimal impact. Medium (High Risk) There\u0026rsquo;s a chance of encountering challenges due to unclear user needs or potential limitations in the chosen approach for a feature. Requires contingency plans and potential rework. - Creating a new report based on existing data, but unsure of exactly which data points are most relevant to users. User research might be needed to refine the report format and content, potentially causing delays. Large (Very High Risk) Significant potential for problems due to new technologies, external dependencies, or complex integrations with specific requirements. Requires mitigation strategies and potential for significant rework. - Developing a new functionality using a cutting-edge technology that the team has no prior experience with. Issues with integrating the new technology or limitations in its capabilities could cause significant delays and require major changes. X-Large (Extremely High Risk) High likelihood of encountering unforeseen challenges due to innovation or a completely new business model. Requires strong risk management and potential for project pivots. - Implementing a completely new business model that relies on user behavior and market adoption, with limited industry examples. The approach might not be feasible or deliver the expected value, requiring significant changes or even abandoning the project. 
","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:2:4","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#risk"},{"categories":["Productivity"],"collections":null,"content":"Differences The key differences here are: Effort focuses on the amount of work involved. Complexity focuses on the level of understanding and investigation needed. Uncertainty focuses on the unknowns and potential surprises that might arise. Risk focuses on the potential for problems and their impact on the project. ","date":"07-06-2024","objectID":"/posts/productivity/kanban-t-shirt-sizes/:3:0","tags":["kanban"],"title":"Kanban T-Shirt Sizes","uri":"/posts/productivity/kanban-t-shirt-sizes/#differences"},{"categories":["Productivity"],"collections":null,"content":"As teams adopt Agile methodologies, creating effective Kanban cards is crucial for efficient workflow management. In this article, we\u0026rsquo;ll explore the essential and optional elements of a well-crafted Kanban card description. ","date":"06-06-2024","objectID":"/posts/productivity/how-to-create-kanban-card-description/:0:0","tags":["kanban"],"title":"How to Create Kanban Card Description","uri":"/posts/productivity/how-to-create-kanban-card-description/#"},{"categories":["Productivity"],"collections":null,"content":"Essential Elements Element Description Task Summary Provide a brief overview of what needs to be done. This should give your team a clear understanding of the task\u0026rsquo;s objectives. Acceptance Criteria Specify what constitutes \u0026ldquo;done\u0026rdquo; for this task. How will you know it\u0026rsquo;s complete? Define the criteria that ensure the task meets its intended outcome. 
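To make the four factors concrete, here is a small illustrative sketch (not part of the original framework; the 1-5 rating scale and the take-the-maximum rule are assumptions) showing one way a team could turn factor ratings into a T-shirt size:

```python
# Hypothetical helper: rate each factor from 1 (lowest) to 5 (highest);
# the highest-rated factor sets the size, since a single hard factor
# tends to dominate an estimate.
SIZES = {1: 'XS', 2: 'S', 3: 'M', 4: 'L', 5: 'XL'}

def tshirt_size(effort, complexity, uncertainty, risk):
    # Map the worst (highest) factor rating to its T-shirt size.
    return SIZES[max(effort, complexity, uncertainty, risk)]

# A task with moderate effort but considerable uncertainty still sizes as L.
print(tshirt_size(2, 2, 4, 1))  # L
```

Teams can of course weight the factors differently; the point is only that the four ratings jointly determine a single size.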
","date":"06-06-2024","objectID":"/posts/productivity/how-to-create-kanban-card-description/:1:0","tags":["kanban"],"title":"How to Create Kanban Card Description","uri":"/posts/productivity/how-to-create-kanban-card-description/#essential-elements"},{"categories":["Productivity"],"collections":null,"content":"Optional Elements Element Description Notes or Ideas Record relevant thoughts, questions, or observations that might be helpful in completing the task. Key Takeaways Summarize the most important points or decisions that need to be made about the task. ","date":"06-06-2024","objectID":"/posts/productivity/how-to-create-kanban-card-description/:2:0","tags":["kanban"],"title":"How to Create Kanban Card Description","uri":"/posts/productivity/how-to-create-kanban-card-description/#optional-elements"},{"categories":["Productivity"],"collections":null,"content":"Example Element Description Task Summary Develop a new feature for our mobile app, \u0026ldquo;Friends List\u0026rdquo; Acceptance Criteria Feature must allow users to add and manage their friends\u0026rsquo; profiles, with the ability to see each other\u0026rsquo;s recent activities. Notes or Ideas Consider adding a \u0026ldquo;Request to Friend\u0026rdquo; button for users to request friendship with someone. Key Takeaways Ensure that the new feature integrates seamlessly with our existing social media features and user profiles. By incorporating these essential and optional elements into your Kanban card descriptions, you\u0026rsquo;ll create a clear, concise, and actionable guide for your team. This will help streamline workflows, reduce misunderstandings, and increase productivity. 
","date":"06-06-2024","objectID":"/posts/productivity/how-to-create-kanban-card-description/:3:0","tags":["kanban"],"title":"How to Create Kanban Card Description","uri":"/posts/productivity/how-to-create-kanban-card-description/#example"},{"categories":["Productivity"],"collections":null,"content":"Best Practices Keep it concise: Aim for 2-3 sentences per element. Be specific: Avoid vague language or open-ended questions. Use relevant information: Incorporate relevant details, such as technical requirements or dependencies. Prioritize clarity: Ensure that your description is easy to understand and free from ambiguity. By following these guidelines and incorporating the essential and optional elements into your Kanban card descriptions, you\u0026rsquo;ll be well on your way to creating a robust and effective workflow management system. ","date":"06-06-2024","objectID":"/posts/productivity/how-to-create-kanban-card-description/:4:0","tags":["kanban"],"title":"How to Create Kanban Card Description","uri":"/posts/productivity/how-to-create-kanban-card-description/#best-practices"},{"categories":["Software"],"collections":null,"content":"For those new to the world of AI-generated video, SadTalker is a popular open-source project that enables users to generate synthetic talking faces. While the official guide might not work for everyone, this detailed tutorial should help install SadTalker successfully on Mac. ","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:0:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#"},{"categories":["Software"],"collections":null,"content":"Step 1: Install Conda with Miniforge brew install miniforge This is the first hurdle to overcome. Miniforge, a minimal conda installer, will be used to create and manage conda environments. 
","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:1:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-1-install-conda-with-miniforge"},{"categories":["Software"],"collections":null,"content":"Step 2: Clone SadTalker Repository git clone https://github.com/OpenTalker/SadTalker.git cd SadTalker Clone the official SadTalker repository and navigate into it using cd. ","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:2:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-2-clone-sadtalker-repository"},{"categories":["Software"],"collections":null,"content":"Step 3: Create a Conda Environment conda create -n sadtalker python=3.8 Create a new conda environment named sadtalker with Python version 3.8. ","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:3:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-3-create-a-conda-environment"},{"categories":["Software"],"collections":null,"content":"Step 4: Activate the Conda Environment conda activate sadtalker Activate the newly created environment using conda activate. 
","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:4:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-4-activate-the-conda-environment"},{"categories":["Software"],"collections":null,"content":"Step 5: Install FFmpeg and other dependencies conda install ffmpeg ./venv/bin/python -m pip install -r requirements.txt ./venv/bin/python -m pip install dlib Install FFmpeg, a crucial library for video processing, and other dependencies required by SadTalker. ","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:5:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-5-install-ffmpeg-and-other-dependencies"},{"categories":["Software"],"collections":null,"content":"Step 6: Run the Web UI ./webui.sh Run the web-based user interface (Web UI) to start interacting with SadTalker. Error Fix 1: Metadata Generation Failed If an error occurs during the installation process, fix it by installing pypi-timemachine: pip install pypi-timemachine pypi-timemachine 2023-06-07 --port 9999 This starts a local PyPI server that serves packages as of the given date, which works around the metadata generation failure. 
","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:6:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-6-run-the-web-ui"},{"categories":["Software"],"collections":null,"content":"Step 7: Install TTS and other dependencies In another terminal window, activate the conda environment again: conda activate sadtalker ./venv/bin/python -m pip install tts==0.14.3 --index-url http://localhost:9999 --disable-pip-version-check Install the required TTS (Text-to-Speech) library and other dependencies. Error Fix 2: Attribute Error If an AttributeError occurs during the installation process, fix it by installing gradio: ./venv/bin/python -m pip install gradio==3.41.2 This resolves the AttributeError. ","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:7:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-7-install-tts-and-other-dependencies"},{"categories":["Software"],"collections":null,"content":"Step 8: Run the Web UI Again After fixing errors, run the web-based user interface again: ./webui.sh Now it should be possible to interact with SadTalker and generate videos. Error Fix 3: Failed in Loading Audio2Pose_Checkpoint If an error occurs during video generation, try running the following command to download required models: sh ./scripts/download_models.sh This resolves the failed Audio2Pose checkpoint loading. 
","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:8:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-8-run-the-web-ui-again"},{"categories":["Software"],"collections":null,"content":"Step 9: Freeze Packages and Install TTS without pypi-timemachine Server To install TTS without relying on the pypi-timemachine server, freeze packages: ./venv/bin/python -m pip freeze \u0026gt; packages.txt Then, install TTS using the frozen packages as constraints: ./venv/bin/python -m pip install tts==0.14.3 -c packages.txt This will allow installing TTS without requiring the pypi-timemachine server. ","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:9:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-9-freeze-packages-and-install-tts-without-pypi-tememachine-server"},{"categories":["Software"],"collections":null,"content":"Step 10: Uninstall pypi-timemachine Finally, uninstall the pypi-timemachine package: pip uninstall pypi-timemachine Congratulations! SadTalker has been successfully installed on Mac. ","date":"05-06-2024","objectID":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/:10:0","tags":["python","mac","bash"],"title":"A Step-by-Step Guide to Installing SadTalker on Mac","uri":"/posts/software/a-step-by-step-guide-to-installing-sadtalker-on-mac/#step-10-uninstall-pypi-tememachine"},{"categories":["Software"],"collections":null,"content":"When using VMware Fusion on a Mac, the display resolution can become distorted or incorrect after switching between screens. This is a common problem that\u0026rsquo;s easily solvable! 
","date":"03-06-2024","objectID":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/:0:0","tags":["mac","vmware fusion","windows"],"title":"Solving Resolution Issues with VMware Fusion on Mac","uri":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/#"},{"categories":["Software"],"collections":null,"content":"Issue Incorrect display settings can cause the screen to become distorted or render at the wrong resolution after switching between screens. This issue is more noticeable when switching from the host screen back to the VMware Fusion window where the game or application is running. ","date":"03-06-2024","objectID":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/:1:0","tags":["mac","vmware fusion","windows"],"title":"Solving Resolution Issues with VMware Fusion on Mac","uri":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/#issue"},{"categories":["Software"],"collections":null,"content":"Solution ","date":"03-06-2024","objectID":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/:2:0","tags":["mac","vmware fusion","windows"],"title":"Solving Resolution Issues with VMware Fusion on Mac","uri":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/#solution"},{"categories":["Software"],"collections":null,"content":"Step 1: Open VMware Fusion Settings Navigate to the VMware Fusion app on a Mac and open its settings by clicking on the \u0026ldquo;VMware Fusion\u0026rdquo; menu and selecting \u0026ldquo;Settings\u0026rdquo;. 
","date":"03-06-2024","objectID":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/:2:1","tags":["mac","vmware fusion","windows"],"title":"Solving Resolution Issues with VMware Fusion on Mac","uri":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/#step-1-open-vmware-fusion-settings"},{"categories":["Software"],"collections":null,"content":"Step 2: Display Settings In the settings window, click on the \u0026ldquo;Display\u0026rdquo; tab. ","date":"03-06-2024","objectID":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/:2:2","tags":["mac","vmware fusion","windows"],"title":"Solving Resolution Issues with VMware Fusion on Mac","uri":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/#step-2-display-settings"},{"categories":["Software"],"collections":null,"content":"Step 3: Single Window or Full Screen? Check if the display setting is set to either \u0026ldquo;Single Window\u0026rdquo; or \u0026ldquo;Full Screen\u0026rdquo;. If it\u0026rsquo;s set to Single Window, toggle the switch to \u0026ldquo;Stretch\u0026rdquo; (or vice versa). If it\u0026rsquo;s already set to Stretch, skip this step. ","date":"03-06-2024","objectID":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/:2:3","tags":["mac","vmware fusion","windows"],"title":"Solving Resolution Issues with VMware Fusion on Mac","uri":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/#step-3-single-window-or-full-screen"},{"categories":["Software"],"collections":null,"content":"Conclusion If you\u0026rsquo;re playing a game like Fallout 4 and notice that the display becomes distorted after switching between screens, following these steps will help resolve the issue. With these simple steps, it should now be possible to enjoy your favorite games and applications on VMware Fusion without resolution issues. 
","date":"03-06-2024","objectID":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/:3:0","tags":["mac","vmware fusion","windows"],"title":"Solving Resolution Issues with VMware Fusion on Mac","uri":"/posts/software/solving-resolution-issues-with-vmware-fusion-on-mac-/#conclusion"},{"categories":["Software"],"collections":null,"content":"When playing several FPS games on my Mac using VMware Fusion, I have noticed that my mouse won\u0026rsquo;t turn around correctly. ","date":"29-05-2024","objectID":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/:0:0","tags":["vmware fusion","mac","windows"],"title":"Mouse Won't Turn Around Correctly When Playing Several Games on Windows Inside VMware Fusion Virtual Machine on Mac","uri":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/#"},{"categories":["Software"],"collections":null,"content":"Software VMware Fusion (13.5.2) macOS Sonoma (14.5) Windows 11 Pro (23H2) ","date":"29-05-2024","objectID":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/:1:0","tags":["vmware fusion","mac","windows"],"title":"Mouse Won't Turn Around Correctly When Playing Several Games on Windows Inside VMware Fusion Virtual Machine on Mac","uri":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/#software"},{"categories":["Software"],"collections":null,"content":"Cause VMware detects the mouse movement inside the game and also in the host machine\u0026rsquo;s environment, which causes issues with mouse detection and handling. When I try to turn around using the mouse, it cuts off at the edge of the virtual machine and is then detected as mouse movement in the host environment. 
For example, this issue affects the Counter-Strike 2 game. ","date":"29-05-2024","objectID":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/:2:0","tags":["vmware fusion","mac","windows"],"title":"Mouse Won't Turn Around Correctly When Playing Several Games on Windows Inside VMware Fusion Virtual Machine on Mac","uri":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/#cause"},{"categories":["Software"],"collections":null,"content":"Solution Here are the steps to resolve this issue: Go to VMware Settings Menu. Click on General Tab. Click Gaming Dropdown. Select Always optimize mouse for games. ","date":"29-05-2024","objectID":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/:3:0","tags":["vmware fusion","mac","windows"],"title":"Mouse Won't Turn Around Correctly When Playing Several Games on Windows Inside VMware Fusion Virtual Machine on Mac","uri":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/#solution"},{"categories":["Software"],"collections":null,"content":"Conclusion By following these simple steps, I was able to resolve the mouse issue and enjoy a smoother gaming experience when running Counter-Strike 2 on my Mac with an Apple M-series processor using VMware Fusion. 
","date":"29-05-2024","objectID":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/:4:0","tags":["vmware fusion","mac","windows"],"title":"Mouse Won't Turn Around Correctly When Playing Several Games on Windows Inside VMware Fusion Virtual Machine on Mac","uri":"/posts/software/mouse-wont-turn-around-correctly-when-playing-several-games-on-windows-inside-vmware-fusion-virtual-machine-on-mac/#conclusion"},{"categories":["Software"],"collections":null,"content":"Are you experiencing issues with clicking menus with sub-menus when viewing your website on mobile devices? If so, this is a common problem many users face, especially those using the OceanWP theme. The solution to this issue lies in the theme\u0026rsquo;s customization settings. Specifically, you need to adjust the \u0026ldquo;Dropdown Target\u0026rdquo; setting for the mobile menu. ","date":"09-05-2024","objectID":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/:0:0","tags":["wordpress"],"title":"Solving Menu Click Issues with OceanWP Theme in WordPress","uri":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/#"},{"categories":["Software"],"collections":null,"content":"Step-by-Step Solution Log in to your WordPress dashboard and navigate to Appearance \u0026gt; Customize Click on Header \u0026gt; Mobile Menu In the Mobile Menu Settings, scroll down to the Dropdown Target section Select Icon from the dropdown options By setting the Dropdown Target to Icon, you\u0026rsquo;ll be able to click menus with sub-menus without any issues on mobile devices. 
","date":"09-05-2024","objectID":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/:1:0","tags":["wordpress"],"title":"Solving Menu Click Issues with OceanWP Theme in WordPress","uri":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/#step-by-step-solution"},{"categories":["Software"],"collections":null,"content":"Why This Solution Works The OceanWP theme uses a unique approach to rendering menus on mobile devices. By default, the theme sets the dropdown target to \u0026ldquo;Self\u0026rdquo;, which causes the issue mentioned above. By changing this setting to \u0026ldquo;Icon\u0026rdquo;, you\u0026rsquo;re allowing the menu items to behave as expected, with the sub-menu options opening when clicked. ","date":"09-05-2024","objectID":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/:2:0","tags":["wordpress"],"title":"Solving Menu Click Issues with OceanWP Theme in WordPress","uri":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/#why-this-solution-works"},{"categories":["Software"],"collections":null,"content":"Conclusion In this article, a common problem many WordPress users face when using the OceanWP theme on mobile devices is covered. By simply adjusting the Dropdown Target setting in the theme\u0026rsquo;s customization settings, you can resolve this issue and enjoy a seamless user experience across all devices. ","date":"09-05-2024","objectID":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/:3:0","tags":["wordpress"],"title":"Solving Menu Click Issues with OceanWP Theme in WordPress","uri":"/posts/software/solving-menu-click-issues-with-oceanwp-theme-in-wordpress/#conclusion"},{"categories":["Software"],"collections":null,"content":"Introduction In the realm of customization and personalization, tailoring your devices to work seamlessly according to your preferences is crucial for a productive workflow. 
For Mac users who connect external keyboards, remapping keys can significantly enhance the user experience. In this blog post, we\u0026rsquo;ll delve into the process of mapping the Mac Fn key to the Application key on an external keyboard using the powerful tool, Karabiner Elements. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:1:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#introduction"},{"categories":["Software"],"collections":null,"content":"Step 1: Install Karabiner Elements To get started, download and install Karabiner Elements from the official website (https://karabiner-elements.pqrs.org/). This tool serves as the gateway to effortless keyboard customization. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:2:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-1-install-karabiner-elements"},{"categories":["Software"],"collections":null,"content":"Step 2: Open Karabiner Elements Launch the Karabiner Elements application from your Applications folder. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:3:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-2-open-karabiner-elements"},{"categories":["Software"],"collections":null,"content":"Step 3: Navigate to Complex Modifications Click on the \u0026ldquo;Complex Modifications\u0026rdquo; tab within the Karabiner Elements window. 
","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:4:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-3-navigate-to-complex-modifications"},{"categories":["Software"],"collections":null,"content":"Step 4: Add a Rule Select the \u0026ldquo;Add Rule\u0026rdquo; button at the bottom of the window to create a new rule. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:5:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-4-add-a-rule"},{"categories":["Software"],"collections":null,"content":"Step 5: Import a Rule Access a library of community-contributed rules by clicking on \u0026ldquo;Import More Rules from the Internet.\u0026rdquo; ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:6:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-5-import-a-rule"},{"categories":["Software"],"collections":null,"content":"Step 6: Search for the Desired Rule Utilize the search bar to find a rule specifically designed to remap the Mac Fn key to the Application key on an external keyboard. 
","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:7:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-6-search-for-the-desired-rule"},{"categories":["Software"],"collections":null,"content":"Step 7: Enable the Rule Enable the desired rule by clicking the checkbox next to its name. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:8:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-7-enable-the-rule"},{"categories":["Software"],"collections":null,"content":"Step 8: Adjust Settings (if necessary) Customize additional settings or options that come with the rule based on your preferences. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:9:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-8-adjust-settings-if-necessary"},{"categories":["Software"],"collections":null,"content":"Step 9: Reload Configuration Apply the changes to your keyboard configuration by clicking the \u0026ldquo;Reload\u0026rdquo; button in the menu bar of Karabiner Elements. 
","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:10:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-9-reload-configuration"},{"categories":["Software"],"collections":null,"content":"Step 10: Test the Mapping Connect your external keyboard to your Mac and verify whether the Fn key now functions as the Application key. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:11:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#step-10-test-the-mapping"},{"categories":["Software"],"collections":null,"content":"Conclusion Karabiner Elements empowers Mac users to personalize their keyboard experience, unlocking new levels of productivity. By mapping the Fn key to the Application key on an external keyboard, you can streamline your workflow and make your devices work for you. ","date":"14-12-2023","objectID":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/:12:0","tags":["mac"],"title":"Mapping Mac Fn Key to Application Key on External Keyboards","uri":"/posts/software/mapping-mac-fn-key-to-application-key-on-external-keyboards/#conclusion"},{"categories":["Software"],"collections":null,"content":"Introduction Ever encountered the frustration of using an external keyboard with your Mac, only to realize that crucial keys like the Fn key are missing? Fear not! In this quick guide, we\u0026rsquo;ll delve into how you can harness the power of Karabiner Elements to empower your external keyboard with Fn key functionality. 
","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:1:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#introduction"},{"categories":["Software"],"collections":null,"content":"1. Understanding the Issue: Missing Fn Key on External Keyboards Many external keyboards lack the Fn key, limiting your access to specific functions and shortcuts on your Mac. However, Karabiner Elements provides a solution by allowing you to customize your keyboard layout. ","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:2:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#1-understanding-the-issue-missing-fn-key-on-external-keyboards"},{"categories":["Software"],"collections":null,"content":"2. What is Karabiner Elements? Karabiner Elements stands out as a powerful and open-source keyboard customization tool for macOS. It enables users to remap keys, change keyboard layouts, and create custom configurations, addressing the absence of certain keys on external keyboards. ","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:3:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#2-what-is-karabiner-elements"},{"categories":["Software"],"collections":null,"content":"3. Installing Karabiner Elements: A Quick Setup Guide Visit the official Karabiner Elements website and download the latest version. Follow the installation instructions to set up Karabiner Elements on your Mac. 
","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:4:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#3-installing-karabiner-elements-a-quick-setup-guide"},{"categories":["Software"],"collections":null,"content":"4. Configuring Karabiner Elements for the Fn Key Open Karabiner Elements from your Applications folder. Navigate to the \u0026ldquo;Complex Modifications\u0026rdquo; tab. Click on \u0026ldquo;Add Rule\u0026rdquo; and select the rule that enables the Fn key on external keyboards. Save your changes. ","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:5:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#4-configuring-karabiner-elements-for-the-fn-key"},{"categories":["Software"],"collections":null,"content":"5. Testing Your New Configuration Connect your external keyboard to your Mac. Verify if the Fn key is now functional. Experiment with various shortcuts and functions to ensure everything works as expected. ","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:6:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#5-testing-your-new-configuration"},{"categories":["Software"],"collections":null,"content":"6. Fine-Tuning Your Settings Karabiner Elements provides advanced customization options. Explore the app to tweak your keyboard layout further, adjusting settings according to your preferences and workflow. 
","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:7:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#6-fine-tuning-your-settings"},{"categories":["Software"],"collections":null,"content":"7. Benefits of Using Karabiner Elements Enjoy a seamless experience across different keyboards, even if they lack certain keys. Tailor your keyboard layout to match your specific needs, enhancing overall productivity. ","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:8:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#7-benefits-of-using-karabiner-elements"},{"categories":["Software"],"collections":null,"content":"8. Conclusion: Unleash the Full Potential of Your External Keyboard With Karabiner Elements, you can bridge the gap between your Mac and external keyboard seamlessly. Bid farewell to the limitations of missing keys and embrace a fully customized, efficient keyboard setup. Remember, the power of Karabiner Elements lies in its flexibility. Experiment with different configurations to find what works best for you. Happy typing! ","date":"11-12-2023","objectID":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/:9:0","tags":["mac"],"title":"Enabling the Fn Key on External Keyboards Without Fn Keys","uri":"/posts/software/enabling-the-fn-key-on-external-keyboards-without-fn-keys/#8-conclusion-unleash-the-full-potential-of-your-external-keyboard"},{"categories":["Software"],"collections":null,"content":"Introduction MacBook users often miss the smooth and intuitive trackpad gestures of macOS when running Windows on their devices. 
Fortunately, the \u0026ldquo;Mac Precision Touchpad\u0026rdquo; tool offers a solution, bringing advanced trackpad gestures to Windows-running MacBook devices. This guide will walk you through the installation process using Chocolatey and provide a reference to the official GitHub repository. ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:1:0","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#introduction"},{"categories":["Software"],"collections":null,"content":"Installation Steps ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:2:0","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#installation-steps"},{"categories":["Software"],"collections":null,"content":"1. Open PowerShell as Administrator Right-click on the Start menu and select \u0026ldquo;Windows PowerShell (Admin).\u0026rdquo; ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:2:1","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#1-open-powershell-as-administrator"},{"categories":["Software"],"collections":null,"content":"2. 
Install Chocolatey (if not already installed) Run the following command in PowerShell: Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString(\u0026#39;https://chocolatey.org/install.ps1\u0026#39;)) ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:2:2","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#2-install-chocolatey-if-not-already-installed"},{"categories":["Software"],"collections":null,"content":"3. Install Mac Precision Touchpad Run the following command in PowerShell: choco install mac-precision-touchpad ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:2:3","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#3-install-mac-precision-touchpad"},{"categories":["Software"],"collections":null,"content":"4. Restart Your System After the installation is complete, restart your MacBook to apply the changes. 
","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:2:4","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#4-restart-your-system"},{"categories":["Software"],"collections":null,"content":"Reference ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:3:0","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#reference"},{"categories":["Software"],"collections":null,"content":"GitHub Repository Visit the official GitHub repository for Mac Precision Touchpad for more information and updates. GitHub Repository ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:3:1","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#github-repository"},{"categories":["Software"],"collections":null,"content":"Configuring Gestures Explore the GitHub repository\u0026rsquo;s documentation for details on configuring and customizing trackpad gestures according to your preferences. 
","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:3:2","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#configuring-gestures"},{"categories":["Software"],"collections":null,"content":"Troubleshooting If you encounter any issues during the installation or experience unexpected behavior, refer to the GitHub repository\u0026rsquo;s issue section for solutions or seek assistance from the community. ","date":"04-12-2023","objectID":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/:3:3","tags":["windows"],"title":"Enabling MacBook Trackpad Gestures on Windows with Mac Precision Touchpad","uri":"/posts/software/enabling-macbook-trackpad-gestures-on-windows-with-mac-precision-touchpad/#troubleshooting"},{"categories":["Software"],"collections":null,"content":"Securing your Mac quickly can be achieved through Terminal commands. Please note that this method only turns off the display, and the actual lock screen with password protection is activated based on your system settings. ","date":"01-12-2023","objectID":"/posts/software/how-to-lock-your-mac-via-terminal/:0:0","tags":["mac","bash"],"title":"How to Lock Your Mac via Terminal","uri":"/posts/software/how-to-lock-your-mac-via-terminal/#"},{"categories":["Software"],"collections":null,"content":"Opening Terminal To initiate the process, open the Terminal application. You can find it by searching in Spotlight or navigating to Applications \u0026gt; Utilities \u0026gt; Terminal. 
","date":"01-12-2023","objectID":"/posts/software/how-to-lock-your-mac-via-terminal/:1:0","tags":["mac","bash"],"title":"How to Lock Your Mac via Terminal","uri":"/posts/software/how-to-lock-your-mac-via-terminal/#opening-terminal"},{"categories":["Software"],"collections":null,"content":"Executing the Command Once Terminal is open, enter the following command: pmset displaysleepnow Press Enter to execute the command. ","date":"01-12-2023","objectID":"/posts/software/how-to-lock-your-mac-via-terminal/:2:0","tags":["mac","bash"],"title":"How to Lock Your Mac via Terminal","uri":"/posts/software/how-to-lock-your-mac-via-terminal/#executing-the-command"},{"categories":["Software"],"collections":null,"content":"Display Turns Off After entering the command, your Mac\u0026rsquo;s display will turn off, putting it into a sleep state. ","date":"01-12-2023","objectID":"/posts/software/how-to-lock-your-mac-via-terminal/:3:0","tags":["mac","bash"],"title":"How to Lock Your Mac via Terminal","uri":"/posts/software/how-to-lock-your-mac-via-terminal/#display-turns-off"},{"categories":["Software"],"collections":null,"content":"The MacBook is renowned for its sleek design and impressive display, but some users may find the notch at the top of the screen distracting. Additionally, hidden icon notifications can be a common annoyance. If you wish to conceal the notch on your MacBook, achieve a 16:9 resolution, and address hidden icon notifications, you\u0026rsquo;ve come to the right place. In this article, we\u0026rsquo;ll guide you through the steps to make your MacBook\u0026rsquo;s notch disappear and ensure you never miss important notifications. 
","date":"25-09-2023","objectID":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/:0:0","tags":["mac"],"title":"How to Hide the Notch and Fix Hidden Icon Notifications on Your MacBook","uri":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/#"},{"categories":["Software"],"collections":null,"content":"Step 1: Access Display Settings Begin by clicking on the Apple logo located in the top-left corner of your screen to open the Apple menu. From the dropdown menu, select \u0026ldquo;System Preferences.\u0026rdquo; Within the System Preferences window, locate and click on \u0026ldquo;Displays.\u0026rdquo; ","date":"25-09-2023","objectID":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/:1:0","tags":["mac"],"title":"How to Hide the Notch and Fix Hidden Icon Notifications on Your MacBook","uri":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/#step-1-access-display-settings"},{"categories":["Software"],"collections":null,"content":"Step 2: Enable Show Resolutions as List In the Displays settings, click on the \u0026ldquo;Display\u0026rdquo; tab if it\u0026rsquo;s not already selected. You will find the \u0026ldquo;Advanced\u0026rdquo; button in the lower-right corner; click on it. ","date":"25-09-2023","objectID":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/:2:0","tags":["mac"],"title":"How to Hide the Notch and Fix Hidden Icon Notifications on Your MacBook","uri":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/#step-2-enable-show-resolutions-as-list"},{"categories":["Software"],"collections":null,"content":"Step 3: Enable Show All Resolutions Within the Advanced Display settings, you\u0026rsquo;ll encounter an option that reads \u0026ldquo;Show profiles for this display only.\u0026rdquo; Check this box. 
Apply your changes by clicking \u0026ldquo;OK\u0026rdquo; or \u0026ldquo;Done.\u0026rdquo; ","date":"25-09-2023","objectID":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/:3:0","tags":["mac"],"title":"How to Hide the Notch and Fix Hidden Icon Notifications on Your MacBook","uri":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/#step-3-enable-show-all-resolutions"},{"categories":["Software"],"collections":null,"content":"Step 4: Change to 16:9 Resolution You will now have access to a list of available resolutions. Scroll down until you locate the 16:9 resolution option that best suits your preference. The default resolution may be something like 1512 x 982. To conceal the notch and attain a 16:9 aspect ratio, opt for a resolution such as 1512 x 945 from the list. ","date":"25-09-2023","objectID":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/:4:0","tags":["mac"],"title":"How to Hide the Notch and Fix Hidden Icon Notifications on Your MacBook","uri":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/#step-4-change-to-169-resolution"},{"categories":["Software"],"collections":null,"content":"Step 5: Fix Hidden Icon Notifications To address hidden icon notifications, click on the Apple logo in the top-left corner of your screen. Select \u0026ldquo;System Preferences\u0026rdquo; from the dropdown menu. In the System Preferences window, click on \u0026ldquo;Notifications.\u0026rdquo; Review the list of apps and adjust their notification settings to ensure you receive the notifications you need without them going unnoticed. Conclusion: By following these straightforward steps, you can effortlessly hide the notch on your MacBook, set a 16:9 resolution, and address hidden icon notifications. 
This provides a more immersive and distraction-free viewing experience while ensuring you stay on top of important updates and messages. Keep in mind that adjusting the resolution may impact the clarity of text and images, so choose a resolution that strikes a balance between removing the notch and meeting your display preferences. ","date":"25-09-2023","objectID":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/:5:0","tags":["mac"],"title":"How to Hide the Notch and Fix Hidden Icon Notifications on Your MacBook","uri":"/posts/software/how-to-hide-the-notch-and-fix-hidden-icon-notifications-on-your-macbook/#step-5-fix-hidden-icon-notifications"},{"categories":null,"collections":null,"content":"Card descriptions play a crucial role in project management, particularly in agile methodologies and issue tracking systems like GitHub Projects. They serve as a central hub of information, guiding the team through the completion of tasks or user stories. In this article, we\u0026rsquo;ll explore best practices for writing effective card descriptions, with a focus on CRUD (Create, Read, Update, Delete) operations, and the use of acceptance criteria. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:0:0","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#"},{"categories":null,"collections":null,"content":"The Basics: Title and Description At the core of every card description lies the title and description. These elements provide a concise summary and a detailed explanation of the task or issue at hand. 
","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:1:0","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#the-basics-title-and-description"},{"categories":null,"collections":null,"content":"Example: Create a New User Title: Create User Registration Page Description: Develop a user registration page that allows new users to sign up for our service. The page should include fields for entering an email, password, and username. Upon submission, user data should be stored securely in the database. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:1:1","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-create-a-new-user"},{"categories":null,"collections":null,"content":"Example: Read User Profile Title: Display User Profile Information Description: Develop the user profile page that displays a user\u0026rsquo;s information based on their username. Users should be able to view their own profile as well as profiles of other users. Include details such as username, email, and a user avatar. 
","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:1:2","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-read-user-profile"},{"categories":null,"collections":null,"content":"Example: Update User Profile Title: Allow Users to Update Profile Information Description: Enhance the user profile page to allow users to update their information, including username, email, and avatar. Implement validation to ensure data accuracy and security. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:1:3","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-update-user-profile"},{"categories":null,"collections":null,"content":"Example: Delete User Account Title: Implement User Account Deletion Feature Description: Develop a feature that allows users to delete their accounts. When a user initiates account deletion, their data should be securely removed from the system, and they should receive a confirmation prompt. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:1:4","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-delete-user-account"},{"categories":null,"collections":null,"content":"Adding Clarity with Acceptance Criteria While the title and description provide context, clarity, and a high-level understanding of the task, they may not cover all the specifics required for successful completion. 
This is where acceptance criteria come into play. Acceptance criteria are a set of conditions or requirements that must be met for the task to be considered complete. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:2:0","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#adding-clarity-with-acceptance-criteria"},{"categories":null,"collections":null,"content":"Example: Create a New User (with Acceptance Criteria) Acceptance Criteria (if applicable): Users can fill out the registration form with a valid email, password, and username. Upon submission, user data is successfully stored in the database. Users receive a confirmation email after registration. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:2:1","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-create-a-new-user-with-acceptance-criteria"},{"categories":null,"collections":null,"content":"Example: Read User Profile (with Acceptance Criteria) Acceptance Criteria (if applicable): Users can access their own profile as well as profiles of other users. The profile page displays accurate information, including username, email, and user avatar. 
","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:2:2","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-read-user-profile-with-acceptance-criteria"},{"categories":null,"collections":null,"content":"Example: Update User Profile (with Acceptance Criteria) Acceptance Criteria (if applicable): Users can edit their profile information, including username, email, and avatar. Data validation prevents invalid or malicious inputs. Updated information is securely stored in the database. Users see their updated information on the profile page. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:2:3","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-update-user-profile-with-acceptance-criteria"},{"categories":null,"collections":null,"content":"Example: Delete User Account (with Acceptance Criteria) Acceptance Criteria (if applicable): Users can initiate account deletion through a \u0026ldquo;Delete Account\u0026rdquo; button. A confirmation dialog appears before deletion. User data is securely and permanently removed from the system upon confirmation. Users receive a confirmation message upon successful account deletion. 
","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:2:4","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#example-delete-user-account-with-acceptance-criteria"},{"categories":null,"collections":null,"content":"Flexibility: Acceptance Criteria \u0026ldquo;If Applicable\u0026rdquo; It\u0026rsquo;s important to note that the use of acceptance criteria is not always mandatory. In some cases, particularly for simple or straightforward tasks, the use of detailed acceptance criteria may be unnecessary. The decision to include acceptance criteria should be based on the complexity and potential for ambiguity in the task. By marking acceptance criteria as \u0026ldquo;if applicable,\u0026rdquo; you acknowledge the flexibility of including them based on the task\u0026rsquo;s requirements and the team\u0026rsquo;s preferences. For complex or critical tasks, however, acceptance criteria are highly advisable as they ensure a shared understanding of what constitutes a successfully completed task. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:3:0","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#flexibility-acceptance-criteria-if-applicable"},{"categories":null,"collections":null,"content":"Conclusion Effective card descriptions are a cornerstone of successful project management. They provide clarity, guidance, and a common understanding of what needs to be done. By following these best practices, you can create card descriptions that empower your team to work efficiently and deliver high-quality results. 
Remember, the level of detail in card descriptions, including the use of acceptance criteria, should be tailored to the specific needs of your project and the complexity of the tasks at hand. Striking the right balance ensures that your card descriptions are both informative and practical. ","date":"19-09-2023","objectID":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/:4:0","tags":["project management"],"title":"Best Practices for Writing Card Descriptions in Project Management","uri":"/posts/development/best-practices-for-writing-card-descriptions-in-project-management-/#conclusion"},{"categories":["devops"],"collections":null,"content":"Introduction In the world of secure data transmission, prioritizing data privacy is paramount. Two common approaches, Apache reverse proxy and port forwarding with autossh, offer different solutions for transmitting data securely. In this article, we\u0026rsquo;ll explore the benefits of port forwarding with a primary focus on data privacy. We\u0026rsquo;ll also discuss how this approach can enhance the security of your sensitive information and compare it with the potential data privacy risks associated with Apache reverse proxy. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:1:0","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#introduction"},{"categories":["devops"],"collections":null,"content":"Port Forwarding with autossh: Safeguarding Data Privacy Port forwarding with autossh is an approach that places data privacy and security at the forefront. It ensures that sensitive information remains confidential from the moment it leaves the client until it reaches the destination server. 
With strong encryption and end-to-end security, port forwarding is an ideal choice for scenarios where data privacy is non-negotiable. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:2:0","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#port-forwarding-with-autossh-safeguarding-data-privacy"},{"categories":["devops"],"collections":null,"content":"Pros of Port Forwarding with autossh for Data Privacy: Advantages Explanation Strong data privacy The proxy server cannot see the content of the data, ensuring the highest level of data privacy. End-to-end encryption Data remains encrypted from the client to the destination server, preventing data exposure. Suitable for sensitive data Ideal for scenarios where data security and privacy are paramount, making it suitable for sensitive information transmission. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:2:1","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#pros-of-port-forwarding-with"},{"categories":["devops"],"collections":null,"content":"Cons of Port Forwarding with autossh for Functionality: Disadvantages Explanation Limited functionality Port forwarding may lack some features offered by reverse proxies, such as load balancing and content caching. Sample Port Forwarding with autossh To establish an SSH tunnel for port forwarding, use the following autossh command as an example: autossh -M 0 -f -N -L 8443:localhost:443 -p SSH_PORT -l SSH_USER SSH_SERVER_IP -i SSH_PRIVATE_KEY -M 0 specifies that no monitoring should be performed. -f runs autossh in the background. 
-N instructs autossh not to execute any remote command. -L 8443:localhost:443 forwards local port 8443 to the remote server\u0026rsquo;s port 443 (adjust the ports as needed). -p SSH_PORT specifies the SSH port (typically 22). -l SSH_USER is your SSH username. SSH_SERVER_IP is the IP address or hostname of your SSH server. -i SSH_PRIVATE_KEY is the path to your SSH private key. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:2:2","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#cons-of-port-forwarding-with"},{"categories":["devops"],"collections":null,"content":"Comparing Data Privacy: Port Forwarding vs. Apache Reverse Proxy When it comes to data privacy, comparing port forwarding with autossh and Apache reverse proxy reveals critical differences. While port forwarding is designed to maximize data privacy, Apache reverse proxy presents potential data privacy risks, especially when handling sensitive information. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:3:0","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#comparing-data-privacy-port-forwarding-vs-apache-reverse-proxy"},{"categories":["devops"],"collections":null,"content":"Data Privacy with Apache Reverse Proxy (Unsecured) Apache reverse proxy, though offering advanced functionality, raises concerns about data privacy. The proxy server has visibility into unencrypted data, making it potentially vulnerable to data breaches or unauthorized access. In scenarios where data privacy is a top priority, Apache reverse proxy may introduce risks associated with data exposure. 
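The flag-by-flag breakdown above can be collected into a small dry-run wrapper so the full command is easy to review before it is executed. The server address, user name, and key path below are hypothetical placeholders, not values from the article:

```shell
#!/bin/sh
# Assemble the autossh tunnel command described above, using hypothetical
# placeholder values; print it instead of executing it (dry run).
SSH_SERVER_IP="203.0.113.10"            # hypothetical server address
SSH_PORT=22
SSH_USER="deploy"                       # hypothetical SSH user
SSH_PRIVATE_KEY="$HOME/.ssh/id_ed25519" # hypothetical key path

CMD="autossh -M 0 -f -N -L 8443:localhost:443 -p $SSH_PORT -l $SSH_USER $SSH_SERVER_IP -i $SSH_PRIVATE_KEY"
echo "$CMD"
```

Once the printed command looks right, it can be run as-is; with the tunnel up, connecting to local port 8443 reaches the remote server's port 443.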
","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:3:1","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#data-privacy-with-apache-reverse-proxy-unsecured"},{"categories":["devops"],"collections":null,"content":"Choosing Port Forwarding for Data Privacy The choice between Apache reverse proxy and port forwarding with autossh depends on your specific use case and the priority you place on data privacy. If data privacy is your top concern, and you want to ensure that the proxy server cannot see the content of the data, port forwarding with encryption is the more secure option. It provides the highest level of data privacy and confidentiality. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:4:0","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#choosing-port-forwarding-for-data-privacy"},{"categories":["devops"],"collections":null,"content":"Sample Apache Reverse Proxy Configuration For those situations where Apache reverse proxy functionality is desired but data privacy is not a primary concern, here is a sample Apache reverse proxy configuration: \u0026lt;VirtualHost *:80\u0026gt; ServerName yourdomain.com Redirect permanent / https://yourdomain.com/ \u0026lt;/VirtualHost\u0026gt; \u0026lt;VirtualHost *:443\u0026gt; ServerName yourdomain.com SSLEngine on SSLCertificateFile /path/to/your/certificate.crt SSLCertificateKeyFile /path/to/your/privatekey.key ProxyPass / https://localhost:8443/ ProxyPassReverse / https://localhost:8443/ # Additional security headers (optional) Header always set Strict-Transport-Security 
\u0026#34;max-age=31536000; includeSubDomains; preload\u0026#34; Header always set X-Content-Type-Options \u0026#34;nosniff\u0026#34; Header always set X-Frame-Options \u0026#34;SAMEORIGIN\u0026#34; Header always set X-XSS-Protection \u0026#34;1; mode=block\u0026#34; \u0026lt;/VirtualHost\u0026gt; ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:5:0","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#sample-apache-reverse-proxy-configuration"},{"categories":["devops"],"collections":null,"content":"The Data Security Risks 1. Decryption and Re-Encryption The critical point to note in this configuration is that the reverse proxy decrypts the incoming HTTPS request from the client and subsequently re-encrypts it before forwarding it to the backend server. This process creates a potential exposure point. 2. Data Exposure Within the Reverse Proxy During the brief moment when data is decrypted for processing within the reverse proxy, it becomes vulnerable. If an attacker gains unauthorized access to the reverse proxy server or if a security vulnerability exists in the proxy software, the unencrypted data could be exposed. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:5:1","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#the-data-security-risks"},{"categories":["devops"],"collections":null,"content":"Conclusion Prioritizing data privacy is crucial when it comes to secure data transmission. Port forwarding with autossh offers a robust solution for safeguarding sensitive information. While Apache reverse proxy offers advanced functionality, it comes with data visibility on the proxy server, potentially exposing data to risks. Understanding the strengths and limitations of each approach will help you make an informed decision that aligns with your specific needs for data security and privacy. Whether you\u0026rsquo;re handling sensitive financial data or protecting user information, the choice of port forwarding underscores your commitment to data privacy and security. ","date":"13-09-2023","objectID":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/:6:0","tags":["linux","apache"],"title":"Prioritizing Data Privacy for Secure Transmission with Port Forwarding","uri":"/posts/devops/prioritizing-data-privacy-for-secure-transmission-with-port-forwarding/#conclusion"},{"categories":["Development"],"collections":null,"content":"Managing files with meaningful and structured names can greatly enhance organization and accessibility. Suppose you have a collection of Markdown files containing Front Matter sections, each with a \u0026ldquo;date\u0026rdquo; field. You want to rename these files using the date from their Front Matter section to create a consistent and informative naming scheme. This article will guide you through achieving this task using a bash script on macOS. 
","date":"01-09-2023","objectID":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/:0:0","tags":["bash","mac"],"title":"Renaming Markdown Files Based on Front Matter Date Using Bash Script","uri":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Basic familiarity with the command line interface (CLI). A macOS environment with bash, grep, awk, and date utilities. ","date":"01-09-2023","objectID":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/:1:0","tags":["bash","mac"],"title":"Renaming Markdown Files Based on Front Matter Date Using Bash Script","uri":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Scenario Let\u0026rsquo;s say you have a set of Markdown files in a folder. You want to rename these files based on the date mentioned in their Front Matter sections. Additionally, you want to exclude renaming any files named \u0026ldquo;README.md\u0026rdquo; or \u0026ldquo;template.md.\u0026rdquo; ","date":"01-09-2023","objectID":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/:2:0","tags":["bash","mac"],"title":"Renaming Markdown Files Based on Front Matter Date Using Bash Script","uri":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/#scenario"},{"categories":["Development"],"collections":null,"content":"Bash Script Implementation Here\u0026rsquo;s the step-by-step implementation of the bash script to accomplish the renaming task: Create the Bash Script Create a new file named rename_files.sh using your preferred text editor. 
Copy and paste the following script into the file: #!/bin/bash # Loop through all .md files in the current folder for file in *.md; do if [[ -f \u0026#34;$file\u0026#34; \u0026amp;\u0026amp; \u0026#34;$file\u0026#34; != \u0026#34;README.md\u0026#34; \u0026amp;\u0026amp; \u0026#34;$file\u0026#34; != \u0026#34;template.md\u0026#34; ]]; then # Read the date from Front Matter date=$(grep -oE \u0026#39;(date: )([0-9]{4}-[0-9]{2}-[0-9]{2})\u0026#39; \u0026#34;$file\u0026#34; | awk \u0026#39;{print $2}\u0026#39;) # Format the date to match the desired filename format formatted_date=$(date -j -f \u0026#34;%Y-%m-%d\u0026#34; \u0026#34;$date\u0026#34; +\u0026#39;%Y-%m-%d\u0026#39;) # Create the new filename new_filename=\u0026#34;${formatted_date}-${file}\u0026#34; # Rename the file mv \u0026#34;$file\u0026#34; \u0026#34;$new_filename\u0026#34; echo \u0026#34;Renamed: $file -\u0026gt; $new_filename\u0026#34; fi done Make the Script Executable Open your terminal and navigate to the directory containing the script. Run the following command to make the script executable: chmod +x rename_files.sh Run the Script Place the script in the same directory as your Markdown files. In the terminal, execute the script using the following command: ./rename_files.sh The script will loop through all .md files in the folder, excluding README.md and template.md, and rename them based on the date mentioned in their Front Matter sections. ","date":"01-09-2023","objectID":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/:3:0","tags":["bash","mac"],"title":"Renaming Markdown Files Based on Front Matter Date Using Bash Script","uri":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/#bash-script-implementation"},{"categories":["Development"],"collections":null,"content":"Conclusion Organizing files is essential for efficient management of your projects and content. 
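The renaming workflow above can be exercised end to end in a scratch directory. This sketch uses a hypothetical post file and reuses the extracted date directly, since the article's `date -j` reformatting step is macOS-specific and is a no-op when the Front Matter date is already in `YYYY-MM-DD` form:

```shell
#!/bin/sh
# Demo of the renaming logic in a temp directory with one hypothetical post.
dir=$(mktemp -d)
cat > "$dir/my-post.md" <<'EOF'
---
title: Demo
date: 2023-09-01
---
Body text.
EOF
cd "$dir"
for file in *.md; do
  if [ -f "$file" ] && [ "$file" != "README.md" ] && [ "$file" != "template.md" ]; then
    # Extract the Front Matter date (same grep/awk pipeline as the article)
    d=$(grep -oE 'date: [0-9]{4}-[0-9]{2}-[0-9]{2}' "$file" | awk '{print $2}')
    mv "$file" "${d}-${file}"
    echo "Renamed: $file -> ${d}-${file}"
  fi
done
```

After the loop, `my-post.md` has become `2023-09-01-my-post.md`, matching the naming scheme the article targets.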
By using a bash script to rename Markdown files based on the Front Matter date, you can achieve a consistent and meaningful naming convention. This approach not only helps you stay organized but also improves the clarity and relevance of your file names. With this article\u0026rsquo;s guidance, you can easily automate the renaming process and streamline your file management tasks on macOS. ","date":"01-09-2023","objectID":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/:4:0","tags":["bash","mac"],"title":"Renaming Markdown Files Based on Front Matter Date Using Bash Script","uri":"/posts/development/renaming-markdown-files-based-on-front-matter-date-using-bash-script/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"In the dynamic world of project management, a Kanban board is your canvas for orchestrating the symphony of tasks and ideas. Whether you\u0026rsquo;re a seasoned conductor or just tuning in, understanding how to move cards from the \u0026ldquo;New\u0026rdquo; stage to the \u0026ldquo;Backlog\u0026rdquo; and eventually to the \u0026ldquo;Ready\u0026rdquo; stage is essential for a harmonious workflow. Let\u0026rsquo;s delve into these stages with practical examples to guide your journey. ","date":"14-08-2023","objectID":"/posts/productivity/navigating-kanban-board-from-idea-to-execution/:0:0","tags":["kanban"],"title":"Navigating Kanban Board From Idea to Execution","uri":"/posts/productivity/navigating-kanban-board-from-idea-to-execution/#"},{"categories":["Productivity"],"collections":null,"content":"Moving from \u0026ldquo;New\u0026rdquo; to \u0026ldquo;Backlog\u0026rdquo; Picture this: you\u0026rsquo;re struck by a lightning bolt of inspiration. It\u0026rsquo;s like discovering a new flavor for your culinary masterpiece. You\u0026rsquo;ve got the concept, the vision, the heart of it all. 
Here\u0026rsquo;s where your \u0026ldquo;New\u0026rdquo; stage card shines: Card Title: Explore Social Media Integration Description: Venture into social media integration possibilities for enhancing user engagement. This card isn\u0026rsquo;t weighed down by intricate details yet. It\u0026rsquo;s the raw essence, the spark. It aligns with your project\u0026rsquo;s direction, and it has the potential to add value. It\u0026rsquo;s your concept on a napkin, ready to be developed into a full-fledged recipe. ","date":"14-08-2023","objectID":"/posts/productivity/navigating-kanban-board-from-idea-to-execution/:1:0","tags":["kanban"],"title":"Navigating Kanban Board From Idea to Execution","uri":"/posts/productivity/navigating-kanban-board-from-idea-to-execution/#moving-from-new-to-backlog"},{"categories":["Productivity"],"collections":null,"content":"Moving from \u0026ldquo;Backlog\u0026rdquo; to \u0026ldquo;Ready\u0026rdquo; As time goes by, your culinary creation becomes more than just an idea. It\u0026rsquo;s time to take that \u0026ldquo;Backlog\u0026rdquo; stage card and sprinkle it with the magic that turns dreams into reality: Card Title: Implement Social Media Sharing Description: Craft a seamless social media sharing experience to empower users to broadcast their achievements. The card now boasts a checklist of actionable steps, transforming your concept into a roadmap: Research available social media APIs Design intuitive UI/UX for sharing buttons Integrate the chosen social media API Develop robust backend logic for sharing content Conduct extensive testing across diverse devices and platforms Document the integration process for future reference With labels, assignees, and milestones, your card has evolved into a detailed recipe. Assignees @developer1 and @designer1 are poised to carry out the culinary masterpiece, while the milestone \u0026ldquo;Version 2.0\u0026rdquo; defines its place in the grand feast of updates. 
Remember, your \u0026ldquo;New\u0026rdquo; stage captures the spark of inspiration, and the \u0026ldquo;Backlog\u0026rdquo; stage infuses it with detailed ingredients. This process is akin to crafting a dish – from a fleeting idea to a sumptuous creation, every step counts. ","date":"14-08-2023","objectID":"/posts/productivity/navigating-kanban-board-from-idea-to-execution/:2:0","tags":["kanban"],"title":"Navigating Kanban Board From Idea to Execution","uri":"/posts/productivity/navigating-kanban-board-from-idea-to-execution/#moving-from-backlog-to-ready"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re experiencing issues with iCloud Drive on your Mac, specifically where the syncing indicator remains stuck on the iCloud sync icon in Finder, you can try the following steps to resolve the problem: ","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:0:0","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Steps to Resolve the Issue ","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:1:0","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#steps-to-resolve-the-issue"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 1: Disconnect from Wi-Fi Turn off your Mac\u0026rsquo;s Wi-Fi connection to make it offline. 
You can do this by clicking on the Wi-Fi icon in the menu bar and selecting \u0026ldquo;Turn Wi-Fi Off.\u0026rdquo; ","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:1:1","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#step-1-disconnect-from-wi-fi"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 2: Access Activity Monitor Open the Activity Monitor on your Mac. You can find it in the Utilities folder within the Applications folder, or you can use Spotlight to search for \u0026ldquo;Activity Monitor.\u0026rdquo; ","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:1:2","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#step-2-access-activity-monitor"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 3: Find and Terminate the \u0026ldquo;bird\u0026rdquo; Process In the Activity Monitor, locate the \u0026ldquo;bird\u0026rdquo; process. This process is responsible for iCloud syncing. Select the \u0026ldquo;bird\u0026rdquo; process and click on the \u0026ldquo;X\u0026rdquo; button in the toolbar, or right-click on the process and choose \u0026ldquo;Quit\u0026rdquo; to terminate it. 
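For those who prefer the terminal, Steps 2–3 can be approximated with `pgrep` and `killall`. This is a sketch assuming a macOS system where the `bird` daemon is running; launchd respawns it automatically after it is killed, which is what Step 4 relies on:

```shell
#!/bin/sh
# Terminal sketch of Steps 2-4: terminate the iCloud sync daemon if present.
restart_bird() {
  if pgrep -x bird >/dev/null 2>&1; then
    killall bird && echo "bird terminated; launchd will respawn it"
  else
    echo "bird is not running"
  fi
}
restart_bird
```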
","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:1:3","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#step-3-find-and-terminate-the-bird-process"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 4: Allow System to Recreate the \u0026ldquo;bird\u0026rdquo; Process After terminating the \u0026ldquo;bird\u0026rdquo; process, wait for a few moments. The system will automatically recreate the process. ","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:1:4","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#step-4-allow-system-to-recreate-the-bird-process"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 5: Reconnect to Wi-Fi Turn your Wi-Fi back on by clicking on the Wi-Fi icon in the menu bar and selecting \u0026ldquo;Turn Wi-Fi On.\u0026rdquo; ","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:1:5","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#step-5-reconnect-to-wi-fi"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 6: Verify iCloud Sync Check the iCloud sync icon in Finder. It should now sync properly without getting stuck. By following these steps, you can attempt to resolve the issue of iCloud Drive being stuck on uploading and ensure smooth syncing of your files using iCloud on your Mac. 
","date":"30-06-2023","objectID":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/:1:6","tags":["mac"],"title":"Troubleshooting iCloud Drive Stuck on Uploading Issue on Mac","uri":"/posts/software/troubleshooting-icloud-drive-stuck-on-uploading-issue-on-mac/#step-6-verify-icloud-sync"},{"categories":["Software"],"collections":null,"content":"If you\u0026rsquo;re using Ubuntu on a MacBook and prefer a text-based user interface, you may want to control the state of the monitor and keyboard lids. We will provide commands to switch the monitor and keyboard lids on and off, allowing you to customize your MacBook experience. ","date":"22-06-2023","objectID":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/:0:0","tags":["mac","linux"],"title":"Switching Monitor and Keyboard Lids on MacBook with Ubuntu Text-Based UI","uri":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/#"},{"categories":["Software"],"collections":null,"content":"Switching Monitor Lid To turn the monitor lid off and disable the backlight, use the following command: bash -c \u0026#34;echo 0 \u0026gt; /sys/class/backlight/intel_backlight/brightness;\u0026#34; To turn the monitor lid back on and enable the backlight, execute this command: bash -c \u0026#34;echo 1808 \u0026gt; /sys/class/backlight/intel_backlight/brightness;\u0026#34; ","date":"22-06-2023","objectID":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/:1:0","tags":["mac","linux"],"title":"Switching Monitor and Keyboard Lids on MacBook with Ubuntu Text-Based UI","uri":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/#switching-monitor-lid"},{"categories":["Software"],"collections":null,"content":"Switching Keyboard Lid To turn off the keyboard lid backlight, enter the following command: bash -c \u0026#34;echo 0 \u0026gt; 
/sys/class/leds/smc::kbd_backlight/brightness;\u0026#34; To turn on the keyboard lid backlight, use this command: bash -c \u0026#34;echo 120 \u0026gt; /sys/class/leds/smc::kbd_backlight/brightness;\u0026#34; ","date":"22-06-2023","objectID":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/:2:0","tags":["mac","linux"],"title":"Switching Monitor and Keyboard Lids on MacBook with Ubuntu Text-Based UI","uri":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/#switching-keyboard-lid"},{"categories":["Software"],"collections":null,"content":"Creating Aliases for Convenience If you find yourself frequently using these commands, you can create aliases to simplify the process. Add the following lines to your shell\u0026rsquo;s configuration file (e.g., ~/.bashrc): alias keyboard-off=\u0026#39;bash -c \u0026#34;echo 0 \u0026gt; /sys/class/leds/smc::kbd_backlight/brightness;\u0026#34;\u0026#39; alias keyboard-on=\u0026#39;bash -c \u0026#34;echo 120 \u0026gt; /sys/class/leds/smc::kbd_backlight/brightness;\u0026#34;\u0026#39; alias monitor-off=\u0026#39;bash -c \u0026#34;echo 0 \u0026gt; /sys/class/backlight/intel_backlight/brightness;\u0026#34;\u0026#39; alias monitor-on=\u0026#39;bash -c \u0026#34;echo 1808 \u0026gt; /sys/class/backlight/intel_backlight/brightness;\u0026#34;\u0026#39; ","date":"22-06-2023","objectID":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/:3:0","tags":["mac","linux"],"title":"Switching Monitor and Keyboard Lids on MacBook with Ubuntu Text-Based UI","uri":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/#creating-aliases-for-convenience"},{"categories":["Software"],"collections":null,"content":"Conclusion With the provided commands and aliases, you can conveniently control the state of the monitor and keyboard lids on your MacBook running Ubuntu with a text-based user interface. 
Whether you prefer working in a dark environment or need the keyboard backlight for enhanced visibility, these commands will enable you to customize your setup to suit your preferences. Remember to execute the commands or aliases based on your desired lid state, and feel free to adjust the values to achieve your desired brightness levels. Enjoy the flexibility and customization options that Ubuntu provides on your MacBook! ","date":"22-06-2023","objectID":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/:4:0","tags":["mac","linux"],"title":"Switching Monitor and Keyboard Lids on MacBook with Ubuntu Text-Based UI","uri":"/posts/software/switching-monitor-and-keyboard-lids-on-macbook-with-ubuntu-text-based-ui/#conclusion"},{"categories":["Development"],"collections":null,"content":"In the world of software development, Git has become the go-to version control system for managing projects effectively. One aspect that greatly contributes to a streamlined development process is adopting a consistent and meaningful branch naming convention. In this article, we will delve into the best practices for Git branch naming, empowering teams to organize their work and collaborate efficiently. Branch Type Branch Name Examples develop develop main main feature feature/user-authentication feature/shopping-cart feature/payment-integration bugfix bugfix/fix-login-issue bugfix/1234 bugfix/calculate-total-error release release/1.0.0 release/alpha release/beta hotfix hotfix/critical-bug hotfix/4321 hotfix/typo-correction ","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:0:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#"},{"categories":["Development"],"collections":null,"content":"1. Develop a develop Branch To foster ongoing development work, establish a dedicated branch named develop. 
This branch serves as the main staging area for integrating and testing new features before they are ready for production. By using this common convention, developers can easily identify where ongoing development work takes place. ","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:1:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#1-develop-a-develop-branch"},{"categories":["Development"],"collections":null,"content":"2. Mainstream the main Branch In recent years, the industry has shifted towards using main as the default branch name, replacing the traditional master. The main branch represents the stable and production-ready version of the codebase. It acts as the go-to branch for the latest release or the most recent stable version. ","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:2:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#2-mainstream-the-main-branch"},{"categories":["Development"],"collections":null,"content":"3. Feature Branches: Descriptive and Prefixed with feature/ When working on specific features or enhancements, it is beneficial to create feature branches. These branches isolate the development work for each feature and enable seamless collaboration. It is advisable to prefix feature branches with feature/ to enhance readability and organization. For example, feature/user-authentication and feature/shopping-cart are descriptive and easily convey the purpose of the branch. 
","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:3:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#3-feature-branches-descriptive-and-prefixed-with-feature"},{"categories":["Development"],"collections":null,"content":"4. Bugfix Branches: Clear Identification with bugfix/ To address specific bugs or issues in the codebase, bugfix branches come in handy. It is recommended to prefix these branches with bugfix/ followed by a brief description or the associated issue number. This naming convention, such as bugfix/fix-login-issue or bugfix/1234, makes it easier to identify and resolve specific problems. ","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:4:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#4-bugfix-branches-clear-identification-with-bugfix"},{"categories":["Development"],"collections":null,"content":"5. Release Branches: Preparing for Production with release/ When preparing for a new release or a production deployment, release branches play a crucial role. These branches allow teams to stabilize the codebase, perform necessary fixes, and finalize the release. To denote release branches, the convention is to prefix them with release/ followed by the version number or a relevant identifier. For example, release/1.0.0 and release/beta clearly indicate the purpose of the branch. ","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:5:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#5-release-branches-preparing-for-production-with-release"},{"categories":["Development"],"collections":null,"content":"6. 
Hotfix Branches: Immediate Fixes with hotfix/ In urgent situations where critical issues or bugs require immediate attention in the production environment, hotfix branches are created. These branches are based on the main branch and are prefixed with hotfix/. The use of descriptive names, such as hotfix/critical-bug or hotfix/4321, ensures clear identification and prompt resolution of critical issues. ","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:6:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#6-hotfix-branches-immediate-fixes-with-hotfix"},{"categories":["Development"],"collections":null,"content":"Conclusion A well-defined Git branch naming convention is an indispensable aspect of efficient collaboration within development teams. By adopting the best practices outlined above, teams can maintain a clear structure, facilitate seamless integration, and streamline the development process. Consistency in branch naming conventions enables easy identification, effective communication, and enhanced organization. Embrace these practices, customize them to your team\u0026rsquo;s specific needs, and witness the benefits of a well-organized and collaborative development environment. ","date":"08-06-2023","objectID":"/posts/development/best-practices-for-git-branch-naming-convention/:7:0","tags":["git"],"title":"Best Practices for Git Branch Naming Convention","uri":"/posts/development/best-practices-for-git-branch-naming-convention/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"Habits are the building blocks of our lives, shaping our actions and outcomes. In his groundbreaking book, \u0026ldquo;Atomic Habits,\u0026rdquo; James Clear reveals the transformative power of small, incremental changes that compound over time. 
By harnessing the concept of atomic habits, we can make remarkable progress towards our goals and create lasting personal and professional growth. In this article, we will explore the core principles of atomic habits and provide practical examples of how to implement them in your daily life. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:0:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#"},{"categories":["Productivity"],"collections":null,"content":"Starting with Small Habits One of the fundamental ideas of atomic habits is to break down larger goals into smaller, manageable tasks. By focusing on one or two habits at a time, we avoid overwhelming ourselves and increase the likelihood of success. For example, if you want to exercise regularly, start with a simple habit of doing a 10-minute workout each day and gradually increase the duration as it becomes more ingrained. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:1:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#starting-with-small-habits"},{"categories":["Productivity"],"collections":null,"content":"Making Habits Obvious Clear emphasizes the importance of clearly defining cues and triggers for our desired habits. Placing visual reminders in our environment, such as sticky notes or alarms, can prompt us to take action. For instance, if you aim to drink more water, keeping a water bottle on your desk or setting reminders on your phone can help you remember to stay hydrated. 
","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:2:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#making-habits-obvious"},{"categories":["Productivity"],"collections":null,"content":"Making Habits Attractive Finding ways to make our habits more appealing increases the likelihood of consistent practice. If you want to establish a reading habit, choose books that align with your interests and passions. By immersing yourself in captivating literature, you will be naturally drawn to read more regularly. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:3:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#making-habits-attractive"},{"categories":["Productivity"],"collections":null,"content":"Making Habits Easy Simplifying our habits reduces the friction and barriers to performing them. For example, if you aspire to meditate daily, create a designated meditation space in your home and keep a cushion or mat readily available. By eliminating the need to set up each time, you make the habit more effortless and seamless to practice. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:4:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#making-habits-easy"},{"categories":["Productivity"],"collections":null,"content":"Making Habits Satisfying Creating a reward system that reinforces our habits is crucial for long-term adherence. 
Celebrate small wins and acknowledge your progress along the way. If you accomplish your daily exercise routine, treat yourself to a healthy snack or indulge in a relaxing activity you enjoy. By associating positive emotions with our habits, we reinforce their value and motivation to continue. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:5:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#making-habits-satisfying"},{"categories":["Productivity"],"collections":null,"content":"Tracking Progress and Staying Consistent Consistency is key to habit formation. By tracking our progress using habit trackers or journals, we can visually monitor our daily habits and stay accountable. Whether it\u0026rsquo;s ticking off completed tasks or maintaining a journal of reflections, tracking helps us measure our progress and adjust our approach as needed. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:6:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#tracking-progress-and-staying-consistent"},{"categories":["Productivity"],"collections":null,"content":"Adapting and Creating a Supportive Environment Flexibility and adaptability are vital for sustainable habits. If a habit isn\u0026rsquo;t working for you or doesn\u0026rsquo;t align with your goals, don\u0026rsquo;t be afraid to modify or replace it. Additionally, designing your environment to support positive behaviors is essential. Remove distractions, create dedicated spaces for specific habits, and surround yourself with like-minded individuals who encourage and support your journey. 
","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:7:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#adapting-and-creating-a-supportive-environment"},{"categories":["Productivity"],"collections":null,"content":"Focusing on the Process Shifting our focus from the end results to the daily practice is crucial. Embrace the habit as part of your identity and enjoy the process of continuous improvement. By relishing in the present moment and finding joy in the journey, habits become less daunting and more fulfilling. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:8:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#focusing-on-the-process"},{"categories":["Productivity"],"collections":null,"content":"Example Here are some examples of how you can apply the principles of \u0026ldquo;Atomic Habits\u0026rdquo; to specific habits: ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:9:0","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#example"},{"categories":["Productivity"],"collections":null,"content":"Habit: Drinking more water Make it obvious: Keep a water bottle on your desk or carry one with you at all times as a visual reminder to stay hydrated. Make it attractive: Infuse your water with slices of fruit or herbs to add flavor and make it more appealing. 
Make it easy: Set reminders on your phone or use a habit-tracking app to prompt you to drink water regularly throughout the day. Make it satisfying: Celebrate reaching your daily water intake goal by treating yourself to a healthy snack or rewarding yourself with a short break. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:9:1","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#habit-drinking-more-water"},{"categories":["Productivity"],"collections":null,"content":"Habit: Daily exercise Make it obvious: Lay out your workout clothes and equipment the night before as a visual cue to exercise in the morning. Make it attractive: Choose a form of exercise that you genuinely enjoy, whether it\u0026rsquo;s dancing, cycling, or playing a sport. Make it easy: Start with short, manageable workouts or set a timer for a specific duration to make it feel more achievable. Make it satisfying: Create a workout playlist of your favorite songs or reward yourself with a relaxing stretch session or a post-workout smoothie. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:9:2","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#habit-daily-exercise"},{"categories":["Productivity"],"collections":null,"content":"Habit: Reading more books Make it obvious: Keep a book or e-reader on your nightstand or in your bag to remind you to read during your free moments. Make it attractive: Select books that align with your interests or explore different genres to find what captivates you. 
Make it easy: Set aside a dedicated reading time each day, such as 20 minutes before bed or during your lunch break. Make it satisfying: Create a cozy reading nook with a comfortable chair, good lighting, and your favorite beverage to enhance the reading experience. Treat yourself to a new book once you finish one. ","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:9:3","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#habit-reading-more-books"},{"categories":["Productivity"],"collections":null,"content":"Habit: Practicing gratitude Make it obvious: Place a gratitude journal or notebook in a visible location, such as on your bedside table or at your desk. Make it attractive: Decorate your gratitude journal with inspiring quotes or use colorful pens to make writing in it more enjoyable. Make it easy: Set a reminder on your phone or tie the habit to an existing routine, such as writing three things you\u0026rsquo;re grateful for before bed. Make it satisfying: Reflect on your entries periodically and appreciate the positive impact gratitude has on your mindset and well-being. Remember to adapt these examples to suit your preferences and goals. The key is to implement small, consistent actions and apply the principles of making habits obvious, attractive, easy, and satisfying to establish lasting habits. 
","date":"08-06-2023","objectID":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/:9:4","tags":null,"title":"The Power of Atomic Habits Transforming Your Life One Step at a Time","uri":"/posts/productivity/the-power-of-atomic-habits-transforming-your-life-one-step-at-a-time/#habit-practicing-gratitude"},{"categories":["DevOps"],"collections":null,"content":"Naming conventions are a crucial aspect of software development, providing consistency and clarity in various stages of the development lifecycle. In this article, we will explore the most common naming conventions for production environments, focusing on GitLab CI/CD, GitHub Actions, domain names, branch names, Docker Compose files, and Dockerfile names. Naming Common Convention GitLab CI/CD Convention prod GitHub Actions Convention prod Domain Names example.com Branch Names main/master Docker Compose Files docker-compose.yml Dockerfile Names Dockerfile ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:0:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#"},{"categories":["DevOps"],"collections":null,"content":"GitLab CI/CD Convention In GitLab CI/CD, the convention for naming pipeline stages in production environments commonly involves using \u0026ldquo;prod\u0026rdquo; as the keyword. This convention ensures that pipeline stages are easily identifiable and aligned with the production deployment process. 
","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:1:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#gitlab-cicd-convention"},{"categories":["DevOps"],"collections":null,"content":"GitHub Actions Convention Similar to GitLab CI/CD, GitHub Actions also adopts the convention of using \u0026ldquo;prod\u0026rdquo; to name pipeline stages in production environments. This convention provides consistency and clarity within GitHub Actions workflows for production deployments. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:2:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#github-actions-convention"},{"categories":["DevOps"],"collections":null,"content":"Domain Names When it comes to domain names for production environments, it is common practice to use a primary domain name without any specific prefix or suffix. For instance, \u0026ldquo;example.com\u0026rdquo; represents the main production domain where the application or service is hosted. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:3:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#domain-names"},{"categories":["DevOps"],"collections":null,"content":"Branch Names In terms of branch names, the most commonly used convention for production environments is to use \u0026ldquo;main\u0026rdquo; or \u0026ldquo;master\u0026rdquo; as the branch name. These names indicate the primary branch where the stable and production-ready code resides. 
","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:4:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#branch-names"},{"categories":["DevOps"],"collections":null,"content":"Docker Compose Files For Docker Compose files specific to production environments, the common convention is to use \u0026ldquo;docker-compose.yml\u0026rdquo; as the filename. This naming convention implies that the file represents the overall composition and configuration of the application in the production environment. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:5:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#docker-compose-files"},{"categories":["DevOps"],"collections":null,"content":"Dockerfile Names When naming Dockerfiles targeting production environments, it is customary to use the straightforward name \u0026ldquo;Dockerfile\u0026rdquo; without any additional suffixes. This convention signifies that the Dockerfile contains instructions for building the production-ready image of the application. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:6:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#dockerfile-names"},{"categories":["DevOps"],"collections":null,"content":"Conclusion Consistency in naming conventions is essential for effective communication and collaboration within production environments. The most common conventions for production environments include using \u0026ldquo;prod\u0026rdquo; as the naming convention for pipeline stages in GitLab CI/CD and GitHub Actions. 
Domain names typically utilize the primary domain without any specific prefixes or suffixes. Branch names commonly use \u0026ldquo;main\u0026rdquo; or \u0026ldquo;master\u0026rdquo; to represent the primary production branch. Docker Compose files are often named \u0026ldquo;docker-compose.yml\u0026rdquo; to reflect the overall composition of the application in the production environment. Lastly, Dockerfiles targeting production environments are typically named \u0026ldquo;Dockerfile\u0026rdquo; without any additional suffixes. While these conventions are widely used, it\u0026rsquo;s important to consider the specific requirements of your project and align with your team\u0026rsquo;s preferences. By following best practices in naming conventions, you can enhance clarity, maintain consistency, and ensure smooth collaboration within your production environments. Remember, naming conventions should be agreed upon within your team and documented to ensure everyone understands and follows the established conventions consistently. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-production-environments-/:7:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Production Environments","uri":"/posts/devops/naming-conventions-for-production-environments-/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"Naming conventions play a vital role in software development environments, providing consistency and clarity throughout the development process. In this article, we will explore the most common naming conventions for development environments, covering GitLab CI/CD, GitHub Actions, domain names, branch names, Docker Compose files, and Dockerfile names. 
Naming Common Convention GitLab CI/CD Convention dev GitHub Actions Convention dev Domain Names dev.example.com Branch Names dev Docker Compose Files docker-compose.dev.yml Dockerfile Names Dockerfile.dev ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:0:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#"},{"categories":["DevOps"],"collections":null,"content":"GitLab CI/CD Convention When working with GitLab CI/CD, the most common convention for naming pipeline stages in development environments is to use \u0026ldquo;dev\u0026rdquo; as the keyword. This convention ensures compatibility and alignment with GitLab CI/CD\u0026rsquo;s pipeline system. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:1:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#gitlab-cicd-convention"},{"categories":["DevOps"],"collections":null,"content":"GitHub Actions Convention Similarly, in GitHub Actions workflows, \u0026ldquo;dev\u0026rdquo; is commonly used to name pipeline stages in development environments. This convention provides consistency and ease of understanding within the GitHub Actions ecosystem. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:2:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#github-actions-convention"},{"categories":["DevOps"],"collections":null,"content":"Domain Names For development environments, it is common to use domain names such as \u0026ldquo;dev.example.com\u0026rdquo; to represent the development environment. 
This naming convention clearly indicates that the domain is associated with the development phase of the software. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:3:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#domain-names"},{"categories":["DevOps"],"collections":null,"content":"Branch Names When naming branches in development environments, using \u0026ldquo;dev\u0026rdquo; as a prefix or suffix is a widely adopted convention. For example, \u0026ldquo;dev-feature-branch\u0026rdquo; or \u0026ldquo;feature-branch-dev\u0026rdquo; clearly indicate that the branch is related to development work. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:4:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#branch-names"},{"categories":["DevOps"],"collections":null,"content":"Docker Compose Files In the context of Docker Compose files, the most common naming convention for development environments is \u0026ldquo;docker-compose.dev.yml\u0026rdquo;. This naming pattern makes it clear that the file is intended for use in the development environment, ensuring proper organization and understanding. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:5:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#docker-compose-files"},{"categories":["DevOps"],"collections":null,"content":"Dockerfile Names When it comes to Dockerfiles targeting development environments, the convention often follows the pattern of \u0026ldquo;Dockerfile.dev\u0026rdquo;. 
This naming convention explicitly communicates the purpose of the Dockerfile, making it easier for developers to identify the appropriate file. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:6:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#dockerfile-names"},{"categories":["DevOps"],"collections":null,"content":"Conclusion Consistent and meaningful naming conventions are essential in development environments to ensure clarity, organization, and effective collaboration. The most common conventions for development environments include using \u0026ldquo;dev\u0026rdquo; as the naming convention for GitLab CI/CD and GitHub Actions pipeline stages, incorporating \u0026ldquo;dev\u0026rdquo; in domain names, branch names, Docker Compose files (\u0026ldquo;docker-compose.dev.yml\u0026rdquo;), and Dockerfiles (\u0026ldquo;Dockerfile.dev\u0026rdquo;). While these conventions are widely used, it\u0026rsquo;s important to consider the specific requirements of your project and align with your team\u0026rsquo;s preferences. By following best practices in naming conventions, you can enhance the overall development experience and promote efficient collaboration among team members. Remember, naming conventions are not rigid rules, and it\u0026rsquo;s always advisable to establish and adhere to agreed-upon conventions within your development team for consistency and ease of understanding. 
","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-for-development-environments-/:7:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions for Development Environments","uri":"/posts/devops/naming-conventions-for-development-environments-/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"Naming conventions play a crucial role in software development, providing clarity and consistency in various aspects of the development lifecycle. When it comes to naming environments, such as staging, development, and production, there can be variations and debates around the usage of \u0026ldquo;stage\u0026rdquo; and \u0026ldquo;staging.\u0026rdquo; In this article, we\u0026rsquo;ll explore the differences and common practices surrounding these terms. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/:0:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions Stage vs Staging in Software Development","uri":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/#"},{"categories":["DevOps"],"collections":null,"content":"Defining Staging and Development Environments Naming Common Convention GitLab CI/CD Convention stage GitHub Actions Convention staging Domain Names staging.example.com Branch Names staging Docker Compose Files docker-compose.staging.yml Dockerfile Names Dockerfile.staging Staging environments are integral to the software development process, serving as a dedicated space for testing and validation before deploying to production. In most cases, \u0026ldquo;staging\u0026rdquo; is commonly used as a noun to represent this environment. It refers to a stable and controlled testing area where developers can ensure their applications are functioning correctly and meet the required standards. 
On the other hand, \u0026ldquo;development\u0026rdquo; is typically used as a noun to denote the environment where active coding and building of applications occur. It is the primary workspace for developers to create, modify, and test their code before it reaches the staging or production stages. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/:1:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions Stage vs Staging in Software Development","uri":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/#defining-staging-and-development-environments"},{"categories":["DevOps"],"collections":null,"content":"GitLab CI/CD Convention GitLab CI/CD, a popular continuous integration and continuous deployment platform, follows its own convention for naming pipeline stages. Instead of \u0026ldquo;staging,\u0026rdquo; GitLab CI/CD uses \u0026ldquo;stage\u0026rdquo; as the keyword to define pipeline stages. This convention differs from other industry practices but is specific to the GitLab CI/CD ecosystem. It\u0026rsquo;s important to align with this convention when using GitLab CI/CD to ensure smooth integration and compatibility with their pipeline system. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/:2:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions Stage vs Staging in Software Development","uri":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/#gitlab-cicd-convention"},{"categories":["DevOps"],"collections":null,"content":"GitHub Actions Convention In contrast to GitLab CI/CD, GitHub Actions, another widely used CI/CD platform, does not have a specific convention for naming pipeline stages. You have the flexibility to choose either \u0026ldquo;stage\u0026rdquo; or \u0026ldquo;staging\u0026rdquo; based on your preference or team conventions. 
However, it\u0026rsquo;s important to note that consistency within your GitHub Actions workflows is crucial for clarity and understanding. In practice, though, \u0026ldquo;stage\u0026rdquo; is more commonly used in GitHub Actions workflows. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/:3:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions Stage vs Staging in Software Development","uri":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/#github-actions-convention"},{"categories":["DevOps"],"collections":null,"content":"Domain and Branch Naming Naming conventions for staging environments also extend to domain names and branch names. When it comes to domain names, it is common to use subdomains such as \u0026ldquo;staging.example.com\u0026rdquo; or \u0026ldquo;stage.example.com\u0026rdquo; to represent the staging environment. Both variations are widely used, but \u0026ldquo;staging\u0026rdquo; is slightly more prevalent in practice. For branch names, both \u0026ldquo;stage\u0026rdquo; and \u0026ldquo;staging\u0026rdquo; can be used to indicate the branch\u0026rsquo;s purpose or association with the staging environment. However, \u0026ldquo;staging\u0026rdquo; is the more common choice in the industry. It clearly conveys that the branch is specifically for staging-related work and aligns with the standard naming conventions. 
","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/:4:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions Stage vs Staging in Software Development","uri":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/#domain-and-branch-naming"},{"categories":["DevOps"],"collections":null,"content":"Docker Compose and Dockerfile Naming When using Docker for containerization, naming conventions for Docker Compose files and Dockerfiles can also come into play. For Docker Compose files specific to staging environments, \u0026ldquo;docker-compose.staging.yml\u0026rdquo; is a common naming convention. It clearly denotes that the file is intended for the staging environment and helps maintain organization and clarity. Similarly, for Dockerfiles targeting the staging environment, the naming convention often follows the pattern of \u0026ldquo;Dockerfile.staging.\u0026rdquo; This naming convention aligns with the purpose of the Dockerfile and ensures that developers understand its intended use. ","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/:5:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions Stage vs Staging in Software Development","uri":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/#docker-compose-and-dockerfile-naming"},{"categories":["DevOps"],"collections":null,"content":"Conclusion Naming conventions play an essential role in software development environments. While \u0026ldquo;staging\u0026rdquo; is commonly used as a noun to represent the testing and validation environment, \u0026ldquo;stage\u0026rdquo; is specifically utilized within GitLab CI/CD for pipeline stages. 
","date":"06-06-2023","objectID":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/:6:0","tags":["gitlab","github","git","docker"],"title":"Naming Conventions Stage vs Staging in Software Development","uri":"/posts/devops/naming-conventions-stage-vs-staging-in-software-development/#conclusion"},{"categories":["Development"],"collections":null,"content":"Building powerful and inexpensive web APIs requires careful consideration of the programming language you use. This article compares various programming languages in terms of their suitability for handling asynchronous operations, scalability, and resource utilization. We dive deep into the strengths and limitations of languages such as Node.js, Go, Java, Python, Ruby, and PHP to help you make informed decisions when choosing the right language for your project. ","date":"03-06-2023","objectID":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/:0:0","tags":["nodejs","go","java","python","ruby","php","web","api"],"title":"Choosing the Right Language for Asynchronous Web APIs","uri":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/#"},{"categories":["Development"],"collections":null,"content":"Node.js Event-driven asynchronous programming Node.js is known for its asynchronous, event-driven programming model. It excels at handling concurrent requests by using a single thread and non-blocking I/O operations. However, due to its single-threaded nature, it may not be ideal for heavy CPU-based tasks. 
","date":"03-06-2023","objectID":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/:1:0","tags":["nodejs","go","java","python","ruby","php","web","api"],"title":"Choosing the Right Language for Asynchronous Web APIs","uri":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/#nodejs"},{"categories":["Development"],"collections":null,"content":"Go Efficiency and Native Parallelism Go is a statically typed, compiled language that offers great efficiency and low resource consumption. Its native concurrency model enables efficient handling of high levels of concurrency and heavy workloads. Go can use multiple threads or processes, making it suitable for both I/O-intensive and CPU-intensive tasks. ","date":"03-06-2023","objectID":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/:2:0","tags":["nodejs","go","java","python","ruby","php","web","api"],"title":"Choosing the Right Language for Asynchronous Web APIs","uri":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/#go"},{"categories":["Development"],"collections":null,"content":"Java Concurrency and scalability Java is a mature language with strong support for concurrency and scalability. It provides multi-threading capabilities and rich libraries and frameworks for building asynchronous applications. Java\u0026rsquo;s performance, scalability, and rich ecosystem make it a popular choice for building powerful web APIs. ","date":"03-06-2023","objectID":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/:3:0","tags":["nodejs","go","java","python","ruby","php","web","api"],"title":"Choosing the Right Language for Asynchronous Web APIs","uri":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/#java"},{"categories":["Development"],"collections":null,"content":"Python, Ruby, PHP Synchronous execution Languages such as Python, Ruby, and PHP traditionally follow a synchronous execution model. 
However, libraries and frameworks are available in these languages that introduce varying degrees of asynchronous behavior. They may not offer native concurrency support like Go or Java, but they can still be used to build asynchronous applications. ","date":"03-06-2023","objectID":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/:4:0","tags":["nodejs","go","java","python","ruby","php","web","api"],"title":"Choosing the Right Language for Asynchronous Web APIs","uri":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/#python-ruby-php"},{"categories":["Development"],"collections":null,"content":"Cost-effectiveness considerations A language\u0026rsquo;s cost-effectiveness depends on many factors, including resource usage, scalability, developer productivity, and overall application architecture. Go\u0026rsquo;s efficiency and low resource consumption make it cost-effective when handling many concurrent requests. Node.js is also cost-effective in certain scenarios due to its large ecosystem and developer productivity. ","date":"03-06-2023","objectID":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/:5:0","tags":["nodejs","go","java","python","ruby","php","web","api"],"title":"Choosing the Right Language for Asynchronous Web APIs","uri":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/#cost-effectiveness-considerations"},{"categories":["Development"],"collections":null,"content":"Conclusion Choosing the right language for your asynchronous web API requires careful evaluation of your project needs, expected traffic volume, scalability requirements, and available resources. Go and Java are notable for their native concurrency support and scalability. However, factors such as developer expertise, ecosystem requirements, and specific project requirements should also be considered. 
Conduct performance tests and benchmarks, and assess long-term maintenance and scalability to make informed decisions that align with your project goals. Keep in mind that there is no one-size-fits-all solution, and the best choice depends on each situation. ","date":"03-06-2023","objectID":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/:6:0","tags":["nodejs","go","java","python","ruby","php","web","api"],"title":"Choosing the Right Language for Asynchronous Web APIs","uri":"/posts/development/choosing-the-right-language-for-asynchronous-web-apis/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"Serverless architecture has revolutionized the way we develop and deploy applications, offering numerous benefits such as cost-effectiveness, scalability, and reduced operational overhead. In this article, we\u0026rsquo;ll delve into the world of serverless architecture, discussing its advantages, challenges, and important considerations when working with different serverless vendors. 
Automatic scaling: Serverless platforms handle the scaling of your applications automatically based on the incoming workload. These characteristics allow developers to focus on writing code and business logic, rather than dealing with infrastructure provisioning and management. Serverless platforms abstract away the complexity of infrastructure, providing a highly scalable and cost-effective environment for running applications. ","date":"03-06-2023","objectID":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/:1:0","tags":["aws","gcp","azure","cloudflare"],"title":"Exploring the Benefits and Considerations of Serverless Architecture","uri":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/#understanding-serverless-architecture"},{"categories":["DevOps"],"collections":null,"content":"Serverless Vendors Several major cloud providers offer serverless platforms, each with its own set of features, integrations, and pricing models. The most popular serverless vendors include: AWS Lambda: Amazon Web Services (AWS) Lambda is a leading serverless platform that supports multiple programming languages and offers tight integration with other AWS services like API Gateway, DynamoDB, and S3. Google Cloud Functions: Google Cloud Functions is Google\u0026rsquo;s serverless offering, supporting languages such as Node.js, Python, and Go. It seamlessly integrates with other Google Cloud services like Cloud Pub/Sub, Cloud Storage, and Firestore. Azure Functions: Microsoft Azure Functions allows developers to build serverless applications using popular languages such as C#, Java, and PowerShell. It integrates well with other Azure services like Azure Storage, Cosmos DB, and Event Grid. Cloudflare Workers: Cloudflare Workers is a serverless platform built on the edge of Cloudflare\u0026rsquo;s global network. 
It enables running serverless functions in locations close to the end-users, providing low-latency and high-performance execution. When choosing a serverless vendor, consider factors such as language support, available integrations, deployment options, and pricing models. Each vendor has its own strengths and nuances, so it\u0026rsquo;s essential to evaluate which platform best fits your specific requirements. ","date":"03-06-2023","objectID":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/:2:0","tags":["aws","gcp","azure","cloudflare"],"title":"Exploring the Benefits and Considerations of Serverless Architecture","uri":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/#serverless-vendors"},{"categories":["DevOps"],"collections":null,"content":"Cost-Effectiveness of Serverless Serverless architecture offers cost savings compared to traditional server-based models. With serverless, you only pay for the actual usage of resources during code execution, rather than paying for idle server time. This \u0026ldquo;pay-per-use\u0026rdquo; pricing model makes serverless ideal for applications with unpredictable or variable workloads. Additionally, serverless platforms handle automatic scaling, dynamically allocating resources based on the incoming workload. This scalability eliminates the need for capacity planning and overprovisioning, further reducing costs. It\u0026rsquo;s important to note that while serverless can be cost-effective for many use cases, there are scenarios where the cost can be higher compared to traditional hosting models. Carefully analyze your application\u0026rsquo;s requirements and expected usage patterns to determine the cost-effectiveness of serverless for your specific use case. 
","date":"03-06-2023","objectID":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/:3:0","tags":["aws","gcp","azure","cloudflare"],"title":"Exploring the Benefits and Considerations of Serverless Architecture","uri":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/#cost-effectiveness-of-serverless"},{"categories":["DevOps"],"collections":null,"content":"Vendor Lock-In Concerns Vendor lock-in is a common concern when adopting serverless architecture. As each vendor provides its own set of proprietary features and integrations, migrating from one vendor to another can be challenging. To mitigate vendor lock-in risks, consider designing your application with portability in mind. One approach is to abstract vendor-specific functionality using interfaces and abstraction layers. For example, you can define a common interface for database operations, allowing you to switch between different serverless vendors without modifying the core application logic. By decoupling your code from vendor-specific implementations, you gain flexibility and maintain the ability to switch vendors or adopt a multi-cloud strategy if needed. ","date":"03-06-2023","objectID":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/:4:0","tags":["aws","gcp","azure","cloudflare"],"title":"Exploring the Benefits and Considerations of Serverless Architecture","uri":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/#vendor-lock-in-concerns"},{"categories":["DevOps"],"collections":null,"content":"Considerations for Code Migration When migrating code between different serverless vendors, there are a few considerations to keep in mind. While serverless platforms generally support popular programming languages, there may be differences in how you write and structure your code. 
For example, if you\u0026rsquo;re using a Node.js framework like Express.js on AWS Lambda, you can easily run your Express.js application without significant modifications. However, if you\u0026rsquo;re migrating to another vendor like Google Cloud Functions or Cloudflare Workers, you may need to adjust the code to work with their respective frameworks or adapt to their request handling mechanisms. Additionally, when it comes to database connectivity, you may need to modify the code to work with the database services provided by the target vendor. For instance, if you\u0026rsquo;re using the GORM library with Go and AWS Lambda, you might need to adjust the code to work with the database service offered by Google Cloud or Cloudflare. Other considerations include managing environment configurations, handling routing and middleware logic, implementing logging and error handling, and ensuring your code adheres to the security practices and limitations imposed by the target serverless platform. ","date":"03-06-2023","objectID":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/:5:0","tags":["aws","gcp","azure","cloudflare"],"title":"Exploring the Benefits and Considerations of Serverless Architecture","uri":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/#considerations-for-code-migration"},{"categories":["DevOps"],"collections":null,"content":"Conclusion Serverless architecture offers numerous benefits, including cost-effectiveness, scalability, and reduced operational burden. By leveraging serverless platforms like AWS Lambda, Google Cloud Functions, or Cloudflare Workers, developers can focus on writing code and building applications without worrying about infrastructure management. However, it\u0026rsquo;s crucial to understand the vendor-specific nuances and consider portability when building serverless applications. 
By following best practices, abstracting functionality, and being mindful of potential vendor lock-in, developers can harness the power of serverless while maintaining flexibility and adaptability. Remember, the choice of serverless vendor and the design of your application play key roles in maximizing the benefits of serverless architecture. Evaluate your requirements, consider the trade-offs, and choose the platform that best aligns with your needs. ","date":"03-06-2023","objectID":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/:6:0","tags":["aws","gcp","azure","cloudflare"],"title":"Exploring the Benefits and Considerations of Serverless Architecture","uri":"/posts/devops/exploring-the-benefits-and-considerations-of-serverless-architecture/#conclusion"},{"categories":["Development"],"collections":null,"content":"Within this article, you\u0026rsquo;ll find a prime example of a personal website outline that serves as a valuable reference. This exemplary outline showcases the best practices and design principles employed by experts in the field, offering inspiration and guidance for your web development projects. ","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:0:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#"},{"categories":["Development"],"collections":null,"content":"Homepage Header: Logo or site name: [Your Personal Logo] Navigation menu: Home | About | Skills | Projects | Blog | Services | Contact Search bar: [Search Bar] Hero section: Large, visually appealing image or video: [Image showcasing React and Go development] Headline: John Smith Subheadline: Full-Stack Developer Specializing in React and Go Call to action (CTA) button: View My Work Introduction section: Title: Welcome to My Portfolio Brief introduction: I\u0026rsquo;m John Smith, a full-stack developer specializing in React and Go. 
I have a passion for building high-quality web applications that deliver exceptional user experiences. Highlights section: Title: Highlights Skills: React, Go, Redux, RESTful APIs, HTML, CSS Projects: Project 1: Title: E-commerce Website Description: Developed a full-fledged e-commerce website using React, Redux, and Go. Integrated with payment gateways and implemented secure user authentication. Project 2: Title: Task Management App Description: Created a task management application using React for the front end and Go for the back end. Implemented real-time updates and user collaboration features. ","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:1:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#homepage"},{"categories":["Development"],"collections":null,"content":"Services page Header: Logo or site name: [Your Personal Logo] Navigation menu: Home | About | Skills | Projects | Blog | Services | Contact Search bar: [Search Bar] Services offered: Title: Services Service 1: Title: Full-Stack Web Development Description: Develop end-to-end web applications using React for the front end and Go for the back end, ensuring seamless integration and optimal performance. Service 2: Title: Custom Application Development Description: Build custom web applications tailored to the unique needs and requirements of clients, leveraging the power of React and Go to deliver robust and scalable solutions. Service 3: Title: Single-Page Application (SPA) Development Description: Create interactive and responsive single-page applications using React and Go, providing a smooth and seamless user experience. Service 4: Title: API Development and Integration Description: Develop RESTful APIs using Go to power the backend of web applications, enabling seamless integration with various third-party services and systems. 
Service 5: Title: Real-Time Application Development Description: Build real-time applications such as chat systems, collaborative tools, or live dashboards using technologies like React and Go, with features like WebSockets for instant data updates. Service 6: Title: E-commerce Website Development Description: Create feature-rich e-commerce websites using React and Go, integrating with payment gateways, implementing secure user authentication, and optimizing for high performance and scalability. Service 7: Title: Mobile Application Development Description: Utilize React Native, a framework based on React, to develop cross-platform mobile applications with native-like performance, and use Go for the backend APIs and services. Service 8: Title: Progressive Web App (PWA) Development Description: Develop progressive web applications that offer app-like experiences across various devices and platforms using React and Go, enabling offline access and push notifications. Service 9: Title: Microservices Architecture Development Description: Design and implement microservices architectures using Go to build scalable, independent, and loosely coupled services, and leverage React for front-end interfaces that consume these services. Service 10: Title: Code Review and Optimization Description: Perform code reviews and optimization for existing React and Go projects, identifying performance bottlenecks, enhancing code quality, and suggesting improvements for scalability and maintainability. 
","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:2:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#services-page"},{"categories":["Development"],"collections":null,"content":"About page Header: Logo or site name: [Your Personal Logo] Navigation menu: Home | About | Skills | Projects | Blog | Services | Contact Search bar: [Search Bar] Introduction: Title: About Me Introduction: Welcome to my portfolio website! I\u0026rsquo;m John Smith, a passionate full-stack developer with expertise in React and Go. Skills and Expertise: I have a deep understanding of front-end development using React, building robust APIs with Go, and deploying scalable applications. Experience: Over the past 5 years, I have worked on various projects, ranging from small startups to enterprise-level applications. Education and Certifications: I hold a Bachelor\u0026rsquo;s degree in Computer Science and have completed certifications in React and Go. 
","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:3:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#about-page"},{"categories":["Development"],"collections":null,"content":"Skills page Header: Logo or site name: [Your Personal Logo] Navigation menu: Home | About | Skills | Projects | Blog | Services | Contact Search bar: [Search Bar] Technical Skills: Title: Technical Skills Front-End Development: React, Redux, JavaScript, HTML, CSS, Responsive Design Back-End Development: Go (Golang), RESTful APIs, Database Design (SQL/NoSQL) Tools and Libraries: Git, Webpack, Babel, Jest, Enzyme ","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:4:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#skills-page"},{"categories":["Development"],"collections":null,"content":"Projects page Header: Logo or site name: [Your Personal Logo] Navigation menu: Home | About | Skills | Projects | Blog | Services | Contact Search bar: [Search Bar] Project 1: Title: E-commerce Website Description: Developed a full-fledged e-commerce website using React, Redux, and Go. Integrated with payment gateways and implemented secure user authentication. Technologies Used: React, Redux, Go, PostgreSQL, JWT, Stripe GitHub Repository: [Link to GitHub repository] Project 2: Title: Task Management App Description: Created a task management application using React for the front end and Go for the back end. 
Implemented real-time updates and user collaboration features. Technologies Used: React, Go, MongoDB, WebSockets GitHub Repository: [Link to GitHub repository] ","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:5:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#projects-page"},{"categories":["Development"],"collections":null,"content":"Blog page Header: Logo or site name: [Your Personal Logo] Navigation menu: Home | About | Skills | Projects | Blog | Services | Contact Search bar: [Search Bar] Blog Post 1: Title: Building Scalable Web Applications with React and Go Excerpt: Learn how to leverage the power of React and Go to develop high-performance and scalable web applications. Read More Blog Post 2: Title: Testing React Components with Jest and Enzyme Excerpt: Discover the best practices for testing React components using Jest and Enzyme, and ensure the quality of your code. Read More ","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:6:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#blog-page"},{"categories":["Development"],"collections":null,"content":"Contact page Header: Logo or site name: [Your Personal Logo] Navigation menu: Home | About | Skills | Projects | Blog | Services | Contact Search bar: [Search Bar] Contact Form: Title: Contact Me Name: [Input field] Email: [Input field] Subject: [Input field] Message: [Textarea field] Submit button: Send Message ","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:7:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#contact-page"},{"categories":["Development"],"collections":null,"content":"Footer (common to all pages) Copyright © 2023 John Smith. All rights reserved. 
Privacy Policy | Terms of Service | Sitemap | Contact ","date":"30-05-2023","objectID":"/posts/development/personal-website-outline/:8:0","tags":["web"],"title":"Personal Website Outline","uri":"/posts/development/personal-website-outline/#footer-common-to-all-pages"},{"categories":null,"collections":null,"content":"Welcome 👋 Hi, I\u0026rsquo;m Dimas, a software developer with a passion for building innovative solutions. With a strong background in programming and a love for problem-solving, I enjoy taking on new challenges and collaborating with others to bring their ideas to life. ","date":"29-05-2023","objectID":"/about/:0:0","tags":null,"title":"About","uri":"/about/#welcome-"},{"categories":null,"collections":null,"content":"💻 About Me With several years of experience in programming, I\u0026rsquo;ve had the opportunity to work on a wide range of projects, from developing web applications to creating mobile apps. I\u0026rsquo;m driven by a desire to make technology accessible to everyone, regardless of their background or skill level. I believe that coding should be fun and empowering, not intimidating or frustrating. ","date":"29-05-2023","objectID":"/about/:1:0","tags":null,"title":"About","uri":"/about/#-about-me"},{"categories":null,"collections":null,"content":"🧰 Tools As a developer, having the right tools can make all the difference. I\u0026rsquo;ve curated a selection of my go-to tools that help me stay productive and efficient. Below, you\u0026rsquo;ll find a list of some of the most important tools in my toolbelt. 
","date":"29-05-2023","objectID":"/about/:2:0","tags":null,"title":"About","uri":"/about/#-tools"},{"categories":null,"collections":null,"content":"Languages ","date":"29-05-2023","objectID":"/about/:2:1","tags":null,"title":"About","uri":"/about/#languages"},{"categories":null,"collections":null,"content":"Frameworks ","date":"29-05-2023","objectID":"/about/:2:2","tags":null,"title":"About","uri":"/about/#frameworks"},{"categories":null,"collections":null,"content":"Databases ","date":"29-05-2023","objectID":"/about/:2:3","tags":null,"title":"About","uri":"/about/#databases"},{"categories":null,"collections":null,"content":"Hosting and SaaS ","date":"29-05-2023","objectID":"/about/:2:4","tags":null,"title":"About","uri":"/about/#hosting-and-saas"},{"categories":null,"collections":null,"content":"Servers ","date":"29-05-2023","objectID":"/about/:2:5","tags":null,"title":"About","uri":"/about/#servers"},{"categories":null,"collections":null,"content":"Testing ","date":"29-05-2023","objectID":"/about/:2:6","tags":null,"title":"About","uri":"/about/#testing"},{"categories":null,"collections":null,"content":"Version Control ","date":"29-05-2023","objectID":"/about/:3:0","tags":null,"title":"About","uri":"/about/#version-control"},{"categories":null,"collections":null,"content":"Other ","date":"29-05-2023","objectID":"/about/:3:1","tags":null,"title":"About","uri":"/about/#other"},{"categories":["DevOps"],"collections":null,"content":"In this article, we will explore how to set up a powerful and secure web server environment using Nginx as a reverse proxy with Let\u0026rsquo;s Encrypt SSL certificates and Fail2ban for enhanced security. We\u0026rsquo;ll leverage Docker Compose to simplify the deployment process and enable easy management of our services. By the end of this guide, you\u0026rsquo;ll have a robust setup that includes SSL encryption and protection against malicious actors. 
","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:0:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#"},{"categories":["DevOps"],"collections":null,"content":"1. Prerequisites Before starting, ensure that you have a server or virtual machine running a supported operating system. Additionally, make sure your domain name is properly configured and pointing to your server\u0026rsquo;s IP address. ","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:1:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#1-prerequisites"},{"categories":["DevOps"],"collections":null,"content":"2. Install Docker To begin, install Docker on your server. Follow the official Docker documentation for installation instructions specific to your operating system. ","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:2:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#2-install-docker"},{"categories":["DevOps"],"collections":null,"content":"3. Configure Docker Compose Create a new file named docker-compose.yml and define the services required for Nginx, Certbot, and Fail2ban. Configure the necessary volumes and networks in the Docker Compose file. 
version: \u0026#39;3\u0026#39; services: nginx: image: nginx restart: always ports: - 80:80 - 443:443 volumes: - ./conf.d:/etc/nginx/conf.d - /etc/letsencrypt:/etc/letsencrypt - ./certbot/www:/var/www/certbot networks: - nginx-network certbot: image: certbot/certbot volumes: - /etc/letsencrypt:/etc/letsencrypt - /var/lib/letsencrypt:/var/lib/letsencrypt - ./certbot/www:/var/www/certbot command: certonly --webroot --webroot-path=/var/www/certbot --agree-tos --email your-email@example.com -d your-domain1.com -d your-domain2.com depends_on: - nginx networks: - nginx-network fail2ban: image: fail2ban/fail2ban restart: always volumes: - ./fail2ban:/etc/fail2ban - /var/log/nginx:/var/log/nginx networks: - nginx-network networks: nginx-network: ","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:3:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#3-configure-docker-compose"},{"categories":["DevOps"],"collections":null,"content":"4. Obtain SSL Certificates with Let\u0026rsquo;s Encrypt Utilize the Certbot Docker image to obtain SSL certificates from Let\u0026rsquo;s Encrypt. Configure the Certbot service in the Docker Compose file to obtain the certificates for your domains. Certbot uses the webroot mode for certificate retrieval, so the webroot path (/var/www/certbot) is mounted into both the Nginx and Certbot containers, and Nginx must serve the ACME challenge files from it. ","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:4:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#4-obtain-ssl-certificates-with-lets-encrypt"},{"categories":["DevOps"],"collections":null,"content":"5. 
Set Up Nginx as a Reverse Proxy Create an Nginx configuration file named nginx.conf inside the conf.d directory to define the reverse proxy settings. Customize the configuration to match your domains and backend services. The conf.d directory is mounted into the Nginx container by docker-compose.yml, which applies the reverse proxy rules. ","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:5:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#5-set-up-nginx-as-a-reverse-proxy"},{"categories":["DevOps"],"collections":null,"content":"6. Enhance Security with Fail2ban Integrate Fail2ban into the Docker Compose setup to add an extra layer of security. Create a Fail2ban configuration file named jail.local inside the fail2ban directory that docker-compose.yml mounts into the container. Configure Fail2ban to monitor the Nginx access logs and block malicious IP addresses. ","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:6:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#6-enhance-security-with-fail2ban"},{"categories":["DevOps"],"collections":null,"content":"7. Testing and Verification After running docker compose up -d, the Nginx, Certbot, and Fail2ban containers will be up and running. Test the setup by accessing your domains over HTTPS and check the Fail2ban logs for any banned IP addresses. 
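These verification steps can be sketched as a few shell commands (the domain is a placeholder; the service name follows the docker-compose.yml from step 3):

```shell
# Start the stack in the background
docker compose up -d

# Confirm the site answers over HTTPS (placeholder domain)
curl -I https://your-domain1.com

# List the active Fail2ban jails and any banned IPs
docker compose exec fail2ban fail2ban-client status
```

These commands depend on a live Docker daemon and DNS setup, so treat them as a checklist to adapt rather than a script to run verbatim. 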
","date":"29-05-2023","objectID":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/:7:0","tags":["nginx"],"title":"Setting Up Nginx as a Reverse Proxy with Let's Encrypt and Fail2ban Using Docker Compose","uri":"/posts/devops/setting-up-nginx-as-a-reverse-proxy-with-lets-encrypt-and-fail2ban-using-docker-compose/#7-testing-and-verification"},{"categories":["DevOps"],"collections":null,"content":"Deploying a React application involves building it on one server and deploying it to another. GitLab CI provides a robust solution for automating this process, allowing you to leverage different runners for the build and deployment stages. In this article, we will explore how to utilize GitLab CI to build a React application on a build server and deploy the built source to the Nginx HTML folder on a production server using separate runners. ","date":"26-05-2023","objectID":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/:0:0","tags":["react","gitlab","nginx"],"title":"Building and Deploying a React Application to Nginx HTML Folder using GitLab CI with Separate Runners","uri":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/#"},{"categories":["DevOps"],"collections":null,"content":"Prerequisites A GitLab repository containing the React application code. A build server with a GitLab CI runner configured. A production server with an Nginx server configured. 
","date":"26-05-2023","objectID":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/:1:0","tags":["react","gitlab","nginx"],"title":"Building and Deploying a React Application to Nginx HTML Folder using GitLab CI with Separate Runners","uri":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/#prerequisites"},{"categories":["DevOps"],"collections":null,"content":"Step 1: Configuring the Build Server Set up a build server with Node.js and the GitLab CI runner installed. Ensure the runner is properly registered with GitLab. ","date":"26-05-2023","objectID":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/:2:0","tags":["react","gitlab","nginx"],"title":"Building and Deploying a React Application to Nginx HTML Folder using GitLab CI with Separate Runners","uri":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/#step-1-configuring-the-build-server"},{"categories":["DevOps"],"collections":null,"content":"Step 2: Creating the .gitlab-ci.yml File In the root directory of your GitLab repository, create a file named .gitlab-ci.yml to define the CI/CD pipeline. ","date":"26-05-2023","objectID":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/:3:0","tags":["react","gitlab","nginx"],"title":"Building and Deploying a React Application to Nginx HTML Folder using GitLab CI with Separate Runners","uri":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/#step-2-creating-the-gitlab-ciyml-file"},{"categories":["DevOps"],"collections":null,"content":"Step 3: Defining the Stages and Build Job Inside .gitlab-ci.yml, define two stages: build and deploy. 
The build stage will execute on the build runner and handle the React application build process. stages: - build - deploy build: stage: build tags: - build-runner script: - npm install - npm run build artifacts: paths: - build/ deploy: stage: deploy tags: - production-runner script: - cp -R build/* /usr/share/nginx/html/ only: - master The build stage is assigned the build-runner tag to ensure it runs on the designated build server. It installs the project dependencies using npm install and builds the React application using npm run build. The resulting build artifacts are saved in the build/ directory. The deploy stage is assigned the production-runner tag to ensure it runs on the designated production server. It uses the cp command to copy the build artifacts from the build/ directory to the Nginx HTML folder (/usr/share/nginx/html/). The deployment only triggers when changes are pushed to the master branch. ","date":"26-05-2023","objectID":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/:4:0","tags":["react","gitlab","nginx"],"title":"Building and Deploying a React Application to Nginx HTML Folder using GitLab CI with Separate Runners","uri":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/#step-3-defining-the-stages-and-build-job"},{"categories":["DevOps"],"collections":null,"content":"Step 4: Registering and Configuring Runners Ensure that you have registered and configured the build runner on the build server with the build-runner tag, and the production runner on the production server with the production-runner tag. 
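As an illustrative sketch (the GitLab URL and registration token are placeholders, not values from this article), registering each runner with its matching tag might look like:

```shell
# On the build server
sudo gitlab-runner register --url https://gitlab.example.com --registration-token YOUR_TOKEN --executor shell --tag-list build-runner

# On the production server
sudo gitlab-runner register --url https://gitlab.example.com --registration-token YOUR_TOKEN --executor shell --tag-list production-runner
```

The tag passed via --tag-list must match the tags used in .gitlab-ci.yml so each job is routed to the intended server. 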
","date":"26-05-2023","objectID":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/:5:0","tags":["react","gitlab","nginx"],"title":"Building and Deploying a React Application to Nginx HTML Folder using GitLab CI with Separate Runners","uri":"/posts/devops/building-and-deploying-a-react-application-to-nginx-html-folder-using-gitlab-ci-with-separate-runners/#step-4-registering-and-configuring-runners"},{"categories":["Development"],"collections":null,"content":"This article will delve further into the code and explain how to enable interactive input when executing the embedded Bash script. We\u0026rsquo;ll walk through the code and understand how to connect the Go program\u0026rsquo;s standard input to the command\u0026rsquo;s standard input, allowing interaction with the Bash script. ","date":"26-05-2023","objectID":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/:0:0","tags":["go","bash"],"title":"Executing an Embedded Bash Script in Go with Interactive Input","uri":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/#"},{"categories":["Development"],"collections":null,"content":"1. Overview of the Code The provided code is a Go program that executes an embedded Bash script. It imports necessary packages such as fmt, log, os, and os/exec to handle command execution, input/output, and error handling. 
package main import ( _ \u0026#34;embed\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;os\u0026#34; \u0026#34;os/exec\u0026#34; \u0026#34;strings\u0026#34; ) //go:embed bash.sh var scriptContent string func main() { args := os.Args[1:] // Get the command-line arguments excluding the program name // Start a new shell session with the embedded script cmd := exec.Command(\u0026#34;/bin/bash\u0026#34;, \u0026#34;-c\u0026#34;, scriptContent) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr // Connect os.Stdin to the command\u0026#39;s stdin cmd.Stdin = os.Stdin // Pass the command-line arguments as environment variables to the command cmd.Env = append(os.Environ(), fmt.Sprintf(\u0026#34;ARGS=%s\u0026#34;, strings.Join(args, \u0026#34; \u0026#34;))) // Execute the command err := cmd.Run() if err != nil { log.Fatalf(\u0026#34;Failed to execute bash script: %v\u0026#34;, err) } fmt.Println(\u0026#34;Bash script executed successfully.\u0026#34;) } ","date":"26-05-2023","objectID":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/:1:0","tags":["go","bash"],"title":"Executing an Embedded Bash Script in Go with Interactive Input","uri":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/#1-overview-of-the-code"},{"categories":["Development"],"collections":null,"content":"2. Embedding the Bash Script The code leverages Go\u0026rsquo;s embed package to embed the content of the bash.sh file into a string variable named scriptContent. This ensures that the Bash script is included within the Go program during the build process. 
","date":"26-05-2023","objectID":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/:2:0","tags":["go","bash"],"title":"Executing an Embedded Bash Script in Go with Interactive Input","uri":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/#2-embedding-the-bash-script"},{"categories":["Development"],"collections":null,"content":"3. Command Execution and Input/Output Handling The program creates a new command using exec.Command, specifying /bin/bash as the shell and scriptContent as the script to execute. It configures the standard output and error of the command to use the corresponding streams of the Go program (cmd.Stdout = os.Stdout and cmd.Stderr = os.Stderr). Additionally, the program connects the Go program\u0026rsquo;s standard input to the command\u0026rsquo;s standard input (cmd.Stdin = os.Stdin). ","date":"26-05-2023","objectID":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/:3:0","tags":["go","bash"],"title":"Executing an Embedded Bash Script in Go with Interactive Input","uri":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/#3-command-execution-and-inputoutput-handling"},{"categories":["Development"],"collections":null,"content":"4. Enabling Interactive Input To enable interactive input on the embedded Bash script, the program establishes a connection between the Go program\u0026rsquo;s standard input and the command\u0026rsquo;s standard input. This allows users to provide input interactively during script execution. By setting cmd.Stdin = os.Stdin, any input entered in the Go program\u0026rsquo;s command line will be passed to the embedded Bash script. 
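For illustration, a hypothetical bash.sh (the prompt and variable names are invented for this sketch, not taken from the article) that exercises the interactive input could look like:

```shell
#!/bin/bash
# The response typed by the user reaches this read via the stdin
# that the Go program wired up with cmd.Stdin = os.Stdin
read -r -p 'Enter your name: ' name
echo Hello, $name

# ARGS is the environment variable the Go program sets from os.Args
echo Arguments from Go: $ARGS
```

Running the Go binary and typing a name at the prompt confirms that input flows through to the embedded script. 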
","date":"26-05-2023","objectID":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/:4:0","tags":["go","bash"],"title":"Executing an Embedded Bash Script in Go with Interactive Input","uri":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/#4-enabling-interactive-input"},{"categories":["Development"],"collections":null,"content":"5. Error Handling If there is an error during command execution, the program logs a fatal error message using log.Fatalf and terminates. This ensures that any issues encountered during script execution are clearly reported. ","date":"26-05-2023","objectID":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/:5:0","tags":["go","bash"],"title":"Executing an Embedded Bash Script in Go with Interactive Input","uri":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/#5-error-handling"},{"categories":["Development"],"collections":null,"content":"6. Conclusion In this article, we explored how to execute an embedded Bash script within a Go program and enable interactive input. By connecting the Go program\u0026rsquo;s standard input to the command\u0026rsquo;s standard input, we can interactively provide input during script execution. This capability enhances the flexibility and usefulness of executing embedded Bash scripts in Go applications. ","date":"26-05-2023","objectID":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/:6:0","tags":["go","bash"],"title":"Executing an Embedded Bash Script in Go with Interactive Input","uri":"/posts/development/executing-an-embedded-bash-script-in-go-with-interactive-input/#6-conclusion"},{"categories":["Development"],"collections":null,"content":"In this article, we will explore how to embed a Bash script within a Go program and execute it while passing command-line arguments to the embedded script. 
This approach allows us to package the script directly within the Go binary, eliminating the need for an external script file. We will leverage the os/exec package and the Go embed feature to achieve this. ","date":"25-05-2023","objectID":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/:0:0","tags":["go","bash"],"title":"Running an Embedded Bash Script with Command-Line Arguments in Go","uri":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Basic understanding of Go programming language Familiarity with executing shell scripts ","date":"25-05-2023","objectID":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/:1:0","tags":["go","bash"],"title":"Running an Embedded Bash Script with Command-Line Arguments in Go","uri":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Embedding the Bash Script To embed the Bash script in the Go program, we can use the Go embed feature. Place the Bash script file, named bash.sh in this example, alongside the Go source file. Annotate the script content using the //go:embed directive to indicate that it should be embedded in the binary: //go:embed bash.sh var scriptContent []byte ","date":"25-05-2023","objectID":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/:2:0","tags":["go","bash"],"title":"Running an Embedded Bash Script with Command-Line Arguments in Go","uri":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/#step-1-embedding-the-bash-script"},{"categories":["Development"],"collections":null,"content":"Step 2: Executing the Embedded Bash Script Now, let\u0026rsquo;s execute the embedded Bash script while passing command-line arguments. 
Here\u0026rsquo;s the updated code: package main import ( _ \u0026#34;embed\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;log\u0026#34; \u0026#34;os\u0026#34; \u0026#34;os/exec\u0026#34; \u0026#34;strings\u0026#34; ) //go:embed bash.sh var scriptContent []byte func main() { args := os.Args[1:] // Get the command-line arguments excluding the program name cmd := exec.Command(\u0026#34;/bin/bash\u0026#34;, \u0026#34;-s\u0026#34;) // Execute the script directly, without -c flag cmd.Stdin = strings.NewReader(string(scriptContent)) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr cmd.Args = append(cmd.Args, args...) // Append the command-line arguments to the command err := cmd.Run() if err != nil { log.Fatalf(\u0026#34;Failed to execute bash script: %v\u0026#34;, err) } fmt.Println(\u0026#34;Bash script executed successfully.\u0026#34;) } ","date":"25-05-2023","objectID":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/:3:0","tags":["go","bash"],"title":"Running an Embedded Bash Script with Command-Line Arguments in Go","uri":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/#step-2-executing-the-embedded-bash-script"},{"categories":["Development"],"collections":null,"content":"Explanation We retrieve the command-line arguments using os.Args[1:], excluding the program name. The exec.Command function is used to create the command to execute the Bash script. We pass /bin/bash as the command and -s to indicate that the script will be provided via stdin. We set cmd.Stdin to a strings.Reader that contains the embedded script content. This provides the script input to the bash command. The cmd.Stdout and cmd.Stderr fields are set to the respective os.Stdout and os.Stderr to capture the script\u0026rsquo;s output and error messages. We append the command-line arguments to cmd.Args to pass them as separate arguments to the command. 
Finally, we execute the command using cmd.Run() and handle any errors that may occur. ","date":"25-05-2023","objectID":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/:4:0","tags":["go","bash"],"title":"Running an Embedded Bash Script with Command-Line Arguments in Go","uri":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/#explanation"},{"categories":["Development"],"collections":null,"content":"Conclusion By following the steps outlined in this article, you can embed a Bash script within a Go program and execute it while passing command-line arguments. This approach eliminates the need for an external script file and allows you to distribute a single binary. You can further customize the code to suit your specific needs, such as adding additional error handling or modifying the script content. Remember to adjust any relative paths used in the script if the working directory differs from the original script\u0026rsquo;s directory. Experiment with embedding different Bash scripts and explore the possibilities of integrating Bash functionality directly within your Go applications. ","date":"25-05-2023","objectID":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/:5:0","tags":["go","bash"],"title":"Running an Embedded Bash Script with Command-Line Arguments in Go","uri":"/posts/development/running-an-embedded-bash-script-with-command-line-arguments-in-go/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"In software development, comments serve as a valuable tool for code documentation, explanations, and annotations. However, there are situations where it may be necessary or desirable to remove all comments, including legal ones, from your Vite builds. In this article, we will explore how to achieve this by utilizing the legalComments option in Vite\u0026rsquo;s esbuild configuration. 
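As a quick usage sketch (the binary name is an assumption), building and invoking the program might look like:

```shell
# bash.sh must sit next to the Go source so the go:embed directive can find it
go build -o runscript .

# Everything after the binary name is forwarded to the embedded script
./runscript arg1 arg2
```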
","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:0:0","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#"},{"categories":["Productivity"],"collections":null,"content":"Understanding the legalComments Option Vite, a fast and efficient web development build tool, incorporates the esbuild plugin to optimize JavaScript and TypeScript code. One of the available options for esbuild is legalComments, which allows developers to control the inclusion of legal comments in the final build output. ","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:1:0","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#understanding-the-legalcomments-option"},{"categories":["Productivity"],"collections":null,"content":"Configuring Vite to Remove All Comments To remove all comments, including legal ones, from your Vite builds, follow these steps: ","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:2:0","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#configuring-vite-to-remove-all-comments"},{"categories":["Productivity"],"collections":null,"content":"Step 1: Locate the Configuration File Locate the configuration file for your Vite project. This file is commonly named vite.config.js and resides in the root directory of your project. 
","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:2:1","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#step-1-locate-the-configuration-file"},{"categories":["Productivity"],"collections":null,"content":"Step 2: Open the Configuration File Open the vite.config.js file using a text editor or an integrated development environment (IDE) of your choice. ","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:2:2","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#step-2-open-the-configuration-file"},{"categories":["Productivity"],"collections":null,"content":"Step 3: Modify the esbuild Configuration Within the configuration file, locate the defineConfig function. This function allows you to define the configuration options for your Vite project. ","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:2:3","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#step-3-modify-the-esbuild-configuration"},{"categories":["Productivity"],"collections":null,"content":"Step 4: Update the esbuild Configuration Inside the defineConfig function, find the esbuild property. If it doesn\u0026rsquo;t exist, create it as an object. Within the esbuild object, add the legalComments option and set its value to 'none'. This configuration tells Vite\u0026rsquo;s esbuild plugin to exclude all legal comments from the build output. 
The updated code should look like this: export default defineConfig(() =\u0026gt; ({ esbuild: { legalComments: \u0026#39;none\u0026#39;, }, // Other configuration options... })); ","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:2:4","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#step-4-update-the-esbuild-configuration"},{"categories":["Productivity"],"collections":null,"content":"Step 5: Save the Configuration File Save the vite.config.js file to ensure that your changes are preserved. ","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:2:5","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#step-5-save-the-configuration-file"},{"categories":["Productivity"],"collections":null,"content":"Conclusion By leveraging the legalComments option in Vite\u0026rsquo;s esbuild configuration, you can effortlessly remove all comments, including legal ones, from your Vite builds. This simplifies the codebase, reduces unnecessary information in the final output, and enhances the performance of your web application. However, it\u0026rsquo;s crucial to consider the legal implications before removing legal comments. Ensure that you understand the legal requirements and consult with legal professionals if necessary. Striking a balance between compliance and code optimization will enable you to achieve a streamlined development process with clean and efficient Vite builds. 
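One way to spot-check the result (output paths assume a default Vite project and may differ) is to rebuild and search the bundle for license banners:

```shell
# Rebuild with legalComments set to none
npm run build

# Should print no matches if legal comments were stripped
grep -rn @license dist/assets
```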
","date":"25-05-2023","objectID":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/:3:0","tags":["javascript","vite","nodejs"],"title":"Removing All Comments, Including Legal Ones on Vite Build","uri":"/posts/productivity/removing-all-comments-including-legal-ones-on-vite-build/#conclusion"},{"categories":["Development"],"collections":null,"content":"Obfuscation is a technique used to obscure code and make it harder for others to understand or reverse-engineer. If you\u0026rsquo;re using Vite as your build tool for a JavaScript project and want to enable obfuscation, this article will guide you through the process. Additionally, we\u0026rsquo;ll explore how to enhance the security of your obfuscated code by configuring HTTP headers appropriately. ","date":"25-05-2023","objectID":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/:0:0","tags":["vite","javascript","nodejs"],"title":"How to Enable Obfuscation on Vite Build with HTTP Headers","uri":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/#step-1-install-required-packages"},{"categories":["Development"],"collections":null,"content":"Step 1: Install Required Packages Start by navigating to your project directory and opening the terminal. Then, run the following command to install the required dev dependencies: npm install --save-dev javascript-obfuscator rollup-plugin-obfuscator This command will install the javascript-obfuscator package, which is responsible for obfuscating your code, and the rollup-plugin-obfuscator package, which integrates obfuscation into the Vite build process. 
","date":"25-05-2023","objectID":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/:0:1","tags":["vite","javascript","nodejs"],"title":"How to Enable Obfuscation on Vite Build with HTTP Headers","uri":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/#step-1-install-required-packages"},{"categories":["Development"],"collections":null,"content":"Step 2: Configure Vite Next, you need to configure Vite to use the obfuscation plugin. Open your vite.config.ts file, and inside the plugins array, add the obfuscator plugin. import { defineConfig } from \u0026#39;vite\u0026#39;; import react from \u0026#39;@vitejs/plugin-react\u0026#39;; import eslint from \u0026#39;vite-plugin-eslint\u0026#39;; import obfuscator from \u0026#39;rollup-plugin-obfuscator\u0026#39;; export default defineConfig({ plugins: [ react(), eslint(), obfuscator({ // Optional configuration options global: true, }), ], }); In the above code snippet, we import the rollup-plugin-obfuscator package and add it as a plugin in the Vite configuration. We pass an optional configuration object to the obfuscator function. In this case, we set global to true, indicating that the obfuscation should be applied globally to the entire project. ","date":"25-05-2023","objectID":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/:0:2","tags":["vite","javascript","nodejs"],"title":"How to Enable Obfuscation on Vite Build with HTTP Headers","uri":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/#step-2-configure-vite"},{"categories":["Development"],"collections":null,"content":"Step 3: Build and Obfuscate With the configuration in place, you\u0026rsquo;re ready to build your project and obfuscate the code. Run the following command in your terminal: npm run build Vite will start the build process, and the obfuscation plugin will automatically obfuscate your JavaScript code during the build. 
The obfuscated output will be placed in the dist or build directory, depending on your project setup. ","date":"25-05-2023","objectID":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/:0:3","tags":["vite","javascript","nodejs"],"title":"How to Enable Obfuscation on Vite Build with HTTP Headers","uri":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/#step-3-build-and-obfuscate"},{"categories":["Development"],"collections":null,"content":"Step 4: Configure HTTP Headers To enhance the security of your obfuscated code, it\u0026rsquo;s recommended to configure HTTP headers appropriately. This helps protect against certain types of attacks and adds an extra layer of defense. To enable certain headers, you can use a server configuration file, such as .htaccess for Apache servers or web.config for IIS servers. Here are some common headers to consider: Content-Security-Policy (CSP): Specifies the allowed sources for various types of content, such as scripts, stylesheets, images, etc. X-Content-Type-Options: Prevents the browser from trying to guess the content type and enforces the declared content type. X-Frame-Options: Prevents your pages from being loaded inside an iframe on other websites. X-XSS-Protection: Enables the browser\u0026rsquo;s XSS protection filter. Strict-Transport-Security (HSTS): Enforces the use of HTTPS for future requests. Ensure that you choose appropriate values for these headers based on your application\u0026rsquo;s requirements and security considerations. Refer to the documentation for your specific server and framework for detailed instructions on setting up these headers. 
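As one concrete example, on an Apache server the headers listed above could be set in an .htaccess file via mod_headers. This is a hypothetical starting point, not a drop-in policy; the values, especially the Content-Security-Policy, must be tuned to your application:

```apacheconf
# Hypothetical .htaccess snippet (requires mod_headers); tune values per app.
Header always set Content-Security-Policy "default-src 'self'"
Header always set X-Content-Type-Options "nosniff"
Header always set X-Frame-Options "DENY"
Header always set X-XSS-Protection "1; mode=block"
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
```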
","date":"25-05-2023","objectID":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/:0:4","tags":["vite","javascript","nodejs"],"title":"How to Enable Obfuscation on Vite Build with HTTP Headers","uri":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/#step-4-configure-http-headers"},{"categories":["Development"],"collections":null,"content":"Conclusion Enabling obfuscation in your Vite build can add an extra layer of protection to your JavaScript code, making it more challenging for others to understand or modify. By following the steps outlined in this article, you should now have obfuscated code ready for deployment. Remember to configure appropriate HTTP headers to further enhance the security of your application. Keep in mind that while obfuscation can make it harder for others to reverse-engineer your code, it does not provide absolute security. ","date":"25-05-2023","objectID":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/:0:5","tags":["vite","javascript","nodejs"],"title":"How to Enable Obfuscation on Vite Build with HTTP Headers","uri":"/posts/development/how-to-enable-obfuscation-on-vite-build-with-http-headers/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"Efficient project management is crucial for successful software development, and accurately estimating the size and duration of GitHub issues is a key component. By categorizing issues based on their size and estimating their duration using the Pomodoro Technique, development teams can effectively plan, prioritize, and allocate resources. This article explores a framework for categorizing GitHub issues into different sizes and provides guidelines for estimating their durations using Pomodoros. 
","date":"24-05-2023","objectID":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/:0:0","tags":["github"],"title":"Categorizing GitHub Issues by Size and Pomodoro Estimation","uri":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/#"},{"categories":["Productivity"],"collections":null,"content":"Categorizing Issue Sizes Issue Size Description Pomodoro Estimate Tiny Small and straightforward task 1 Pomodoro (25 minutes) Small Requires moderate effort and is relatively simple 2-4 Pomodoros (50 minutes to 2 hours) Medium Moderate effort, non-trivial task 4-8 Pomodoros (2 to 4 hours) Large Substantial effort, significant work 8-16 Pomodoros (4 to 8 hours) X-Large Major undertaking, complex features or systems More than 16 Pomodoros (8+ hours) To establish a common understanding within the development team, let\u0026rsquo;s use the following framework for categorizing GitHub issue sizes: Tiny: Tiny issues are small and straightforward tasks that can be completed quickly. They often involve minor bug fixes, small documentation updates, or simple code refactoring. A tiny issue can typically be completed within 1 Pomodoro (25 minutes). Small: Small issues require moderate effort but are relatively simple. They may involve implementing a new feature with minimal complexity, fixing a small bug, or making minor improvements to existing code. A small issue usually takes 2-4 Pomodoros (50 minutes to 2 hours) to complete. Medium: Medium-sized issues require moderate effort and are non-trivial tasks. They often involve implementing new features with moderate complexity, resolving non-trivial bugs, or making significant improvements to existing functionality. A medium issue generally takes 4-8 Pomodoros (2 to 4 hours). Large: Large issues involve substantial effort and represent significant work. 
They may require substantial design and development efforts, such as implementing complex features, refactoring large sections of code, or resolving challenging bugs. A large issue usually takes 8-16 Pomodoros (4 to 8 hours). X-Large: X-Large issues are major undertakings that require a significant amount of effort and time. They often involve complex features or systems, major architectural changes, or resolving critical and highly complex bugs. An X-Large issue may require more than 16 Pomodoros (8+ hours) to complete. ","date":"24-05-2023","objectID":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/:1:0","tags":["github"],"title":"Categorizing GitHub Issues by Size and Pomodoro Estimation","uri":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/#categorizing-issue-sizes"},{"categories":["Productivity"],"collections":null,"content":"Pomodoro Estimation The Pomodoro Technique is a time management method that breaks work into intervals called \u0026ldquo;Pomodoros,\u0026rdquo; typically lasting 25 minutes, followed by a short break. By estimating the number of Pomodoros required for each issue, you can gain insights into their duration. However, note that these estimations can vary based on individual factors, team dynamics, and project complexity. ","date":"24-05-2023","objectID":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/:2:0","tags":["github"],"title":"Categorizing GitHub Issues by Size and Pomodoro Estimation","uri":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/#pomodoro-estimation"},{"categories":["Productivity"],"collections":null,"content":"Conclusion Categorizing GitHub issues by size and estimating their duration using the Pomodoro Technique provides a valuable approach to project management in software development. By utilizing this framework, development teams can effectively plan, prioritize, and allocate resources. 
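The size-to-estimate mapping above can be sketched as a small shell helper (illustrative only; the function name and size keys are hypothetical, and the ranges mirror the table above):

```shell
# pomodoros_for_size SIZE — print the estimated Pomodoro range for an issue size.
pomodoros_for_size() {
  case "$1" in
    tiny)    echo "1"    ;;  # 25 minutes
    small)   echo "2-4"  ;;  # 50 minutes to 2 hours
    medium)  echo "4-8"  ;;  # 2 to 4 hours
    large)   echo "8-16" ;;  # 4 to 8 hours
    x-large) echo "16+"  ;;  # 8+ hours
    *)       echo "unknown size: $1" >&2; return 1 ;;
  esac
}

pomodoros_for_size medium   # prints 4-8
```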
Remember to adapt these categorizations and Pomodoro estimations to your project\u0026rsquo;s specific context and your team\u0026rsquo;s capabilities. By combining consistent categorization with Pomodoro-based duration estimations, development teams can enhance their productivity, improve project outcomes, and optimize their workflow. ","date":"24-05-2023","objectID":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/:3:0","tags":["github"],"title":"Categorizing GitHub Issues by Size and Pomodoro Estimation","uri":"/posts/productivity/categorizing-github-issues-by-size-and-pomodoro-estimation/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"In our busy lives, it\u0026rsquo;s crucial to prioritize tasks effectively to ensure productivity and efficiency. One popular method for task management is the Eisenhower Priority Matrix, which helps you categorize tasks based on their urgency and importance. If you use Apple Reminders as your task management tool, you can easily implement the Eisenhower Priority Matrix within the app. In this article, we\u0026rsquo;ll guide you through the process of setting up the matrix using Apple Reminders\u0026rsquo; default priority levels. ","date":"24-05-2023","objectID":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/:0:0","tags":[""],"title":"Organize Your Tasks Effectively with the Eisenhower Priority Matrix in Apple Reminders","uri":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/#"},{"categories":["Productivity"],"collections":null,"content":"The Eisenhower Priority Matrix Reminder Priority Eisenhower Priority Matrix High Urgent and Important Medium Important, but Not Urgent Medium Urgent, but Not Important Low Not Urgent and Not Important You can add the appropriate due dates to each task as per your requirements. 
This table provides a clear overview of the tasks within the Eisenhower Priority Matrix, with their respective priorities and due dates for better organization and management of your reminders. The Eisenhower Priority Matrix is a four-quadrant system that classifies tasks into four categories based on two factors: urgency and importance. The quadrants are as follows: Urgent and Important (High Priority): Tasks falling under this category require immediate attention as they are both urgent and important. These tasks typically have a significant impact on your goals or have impending deadlines. Important, but Not Urgent (Medium Priority): Tasks in this category are important for your long-term goals but may not require immediate action. They are valuable but can be scheduled for a later time. Urgent, but Not Important (Medium Priority): These tasks are urgent and demand your attention, but they may not contribute significantly to your long-term objectives. They often involve time-sensitive requests from others or minor issues that need prompt resolution. Not Urgent and Not Important (Low Priority): Tasks falling into this category are neither urgent nor important. They are usually time-wasters or activities that don\u0026rsquo;t align with your goals. Consider delegating or eliminating these tasks whenever possible. ","date":"24-05-2023","objectID":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/:1:0","tags":[""],"title":"Organize Your Tasks Effectively with the Eisenhower Priority Matrix in Apple Reminders","uri":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/#the-eisenhower-priority-matrix"},{"categories":["Productivity"],"collections":null,"content":"Implementing the Eisenhower Priority Matrix in Apple Reminders To integrate the Eisenhower Priority Matrix into Apple Reminders, follow these steps: Open the Apple Reminders app on your device. 
Create a new reminder list specifically for the Eisenhower Priority Matrix. You can either add a new list or use an existing one. Within the Eisenhower Priority Matrix list, create four separate reminder tasks using the appropriate labels mentioned above. For example: Urgent and Important (High Priority) Important, but Not Urgent (Medium Priority) Urgent, but Not Important (Medium Priority) Not Urgent and Not Important (Low Priority) Assign the corresponding priority level to each reminder task. In Apple Reminders, you can set the priority level to high, medium, or low. Optionally, you can add due dates or additional details to each task, depending on your requirements. ","date":"24-05-2023","objectID":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/:2:0","tags":[""],"title":"Organize Your Tasks Effectively with the Eisenhower Priority Matrix in Apple Reminders","uri":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/#implementing-the-eisenhower-priority-matrix-in-apple-reminders"},{"categories":["Productivity"],"collections":null,"content":"Conclusion By utilizing the Eisenhower Priority Matrix within Apple Reminders, you can effectively organize and manage your tasks based on their urgency and importance. This method allows you to prioritize tasks, focus on what matters most, and avoid getting overwhelmed by nonessential activities. With Apple Reminders\u0026rsquo; intuitive interface and default priority levels, implementing the Eisenhower Priority Matrix becomes a seamless process, enabling you to achieve greater productivity and stay on top of your responsibilities. 
","date":"24-05-2023","objectID":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/:3:0","tags":[""],"title":"Organize Your Tasks Effectively with the Eisenhower Priority Matrix in Apple Reminders","uri":"/posts/productivity/organize-your-tasks-effectively-with-the-eisenhower-priority-matrix-in-apple-reminders/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"The Eisenhower Priority Matrix is a popular tool for organizing tasks based on their importance and urgency. By categorizing tasks into four quadrants, it provides a visual representation of priorities. However, when it comes to managing tasks in a GitHub repository, it is useful to have a clear mapping of these priorities into default labels. In this article, we will explore how to translate the Eisenhower Priority Matrix into GitHub priority labels to effectively manage and prioritize tasks. ","date":"24-05-2023","objectID":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/:0:0","tags":["github"],"title":"Translating the Eisenhower Priority Matrix into GitHub Priorities","uri":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/#"},{"categories":["Productivity"],"collections":null,"content":"Mapping Priorities Eisenhower Priority Matrix GitHub Priority Labels Important and Urgent Urgent Important but Not Urgent High Not Important but Urgent Medium Not Important and Not Urgent Low To map the Eisenhower Priority Matrix to GitHub priorities, we will assign each quadrant of the matrix to a specific label. The default GitHub labels we will use are low, medium, high, and urgent. Let\u0026rsquo;s examine how each quadrant is mapped: Important and Urgent This quadrant represents tasks that are both important and require immediate attention. To reflect this urgency, we will assign the GitHub priority label \u0026ldquo;Urgent\u0026rdquo; to this category. 
These tasks should be given the highest priority in your GitHub repository. Important but Not Urgent This quadrant represents tasks that are important but don\u0026rsquo;t require immediate attention. These tasks have long-term significance and should not be overlooked. To reflect their importance, we will assign the GitHub priority label \u0026ldquo;High\u0026rdquo; to this category. These tasks should be prioritized after the urgent tasks. Not Important but Urgent This quadrant represents tasks that require immediate attention but are not necessarily important. These tasks often involve interruptions or unexpected issues that need to be resolved promptly. To reflect their urgency without overemphasizing their importance, we will assign the GitHub priority label \u0026ldquo;Medium\u0026rdquo; to this category. These tasks should be addressed after the high and urgent tasks. Not Important and Not Urgent This quadrant represents tasks that are neither important nor require immediate attention. These tasks can be seen as distractions or low-priority items. To reflect their low importance and lack of urgency, we will assign the GitHub priority label \u0026ldquo;Low\u0026rdquo; to this category. These tasks should be given the lowest priority in your GitHub repository. ","date":"24-05-2023","objectID":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/:1:0","tags":["github"],"title":"Translating the Eisenhower Priority Matrix into GitHub Priorities","uri":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/#mapping-priorities"},{"categories":["Productivity"],"collections":null,"content":"Implementing the Mapping To apply this mapping in your GitHub repository, you can create or modify the existing labels to match the assigned priorities. By consistently labeling tasks based on their importance and urgency, you can easily visualize and prioritize your work. 
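To apply this mapping, the four labels can be created with the GitHub CLI. The sketch below only prints the gh commands (the colors are arbitrary example values) so they can be reviewed before being piped to sh:

```shell
# Print (not run) the gh CLI commands that would create the four priority labels.
# Colors are arbitrary example values; review the output, then pipe to sh to apply.
emit_priority_labels() {
  for pair in urgent:b60205 high:d93f0b medium:fbca04 low:0e8a16; do
    printf 'gh label create %s --color %s --force\n' "${pair%%:*}" "${pair##*:}"
  done
}

emit_priority_labels
# emit_priority_labels | sh   # runs the commands against the current repository
```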
","date":"24-05-2023","objectID":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/:2:0","tags":["github"],"title":"Translating the Eisenhower Priority Matrix into GitHub Priorities","uri":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/#implementing-the-mapping"},{"categories":["Productivity"],"collections":null,"content":"Conclusion Translating the Eisenhower Priority Matrix into GitHub priorities can significantly enhance task management and organization within your repository. By assigning the default labels of low, medium, high, and urgent to each quadrant, you can effectively categorize and prioritize tasks. Remember to adjust and customize these labels based on your specific project requirements. Utilize this mapping to streamline your workflow, improve productivity, and ensure that important tasks are given the appropriate attention they deserve. ","date":"24-05-2023","objectID":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/:3:0","tags":["github"],"title":"Translating the Eisenhower Priority Matrix into GitHub Priorities","uri":"/posts/productivity/translating-the-eisenhower-priority-matrix-into-github-priorities/#conclusion"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Tmux is a powerful terminal multiplexer that allows users to manage multiple sessions, windows, and panes. However, some users have reported an issue where tmux generates random characters when clicking the mouse. In this article, we will explore the cause of this problem and provide steps to resolve it. 
","date":"24-05-2023","objectID":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/:0:0","tags":["tmux"],"title":"Resolving the Issue of Random Character Generation When Clicking the Mouse in tmux","uri":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Troubleshooting Steps Terminal Emulator Settings: Verify that your terminal emulator\u0026rsquo;s mouse settings are properly configured. Enable mouse reporting and ensure it is set to the appropriate mode based on your terminal emulator\u0026rsquo;s documentation. Update Terminal Emulator: Ensure that you are using the latest version of your terminal emulator. Older versions may have bugs or compatibility issues that have been resolved in newer releases. Try a Different Terminal Emulator: Test the issue with an alternative terminal emulator to determine if it is specific to your current one. Disable Custom Mouse Configurations: Temporarily disable any custom mouse settings, scripts, or plugins you have configured to check if they are causing the unexpected behavior. Reset tmux Configuration: Rename or move your existing ~/.tmux.conf file to start fresh with default settings. Launch a new tmux session to see if the issue persists without any custom configurations. 
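Relatedly, if you would rather have tmux own mouse handling itself (so clicks select panes and scrolling works inside tmux, instead of raw escape sequences reaching the shell), tmux's built-in mouse mode can be enabled in ~/.tmux.conf:

```shell
# ~/.tmux.conf (tmux 2.1+): let tmux handle mouse clicks, scrolling, and resizing
set -g mouse on
```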
","date":"24-05-2023","objectID":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/:1:0","tags":["tmux"],"title":"Resolving the Issue of Random Character Generation When Clicking the Mouse in tmux","uri":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/#troubleshooting-steps"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Solution To stop generating random characters when clicking the mouse in tmux, follow these steps: Open your terminal application (e.g., Terminal on macOS, Command Prompt on Windows, or any other terminal emulator). Type the command reset and press Enter. The terminal will be reset, restoring the default settings. You can now use tmux or any other command-line application without experiencing the issue of random character generation. Please note that using the reset command will reset your terminal settings to their default values, so any customizations you may have made will be lost. ","date":"24-05-2023","objectID":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/:2:0","tags":["tmux"],"title":"Resolving the Issue of Random Character Generation When Clicking the Mouse in tmux","uri":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/#solution"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Conclusion In conclusion, if you are encountering the issue of generating random characters when clicking the mouse in tmux, try resetting your terminal settings using the reset command. This will restore the default values and can help resolve any inconsistencies or conflicts causing the problem. Additionally, make sure to keep your terminal emulator up to date and check for any custom mouse configurations that may be affecting the behavior. 
By following these steps, you can alleviate the issue and restore normal functionality in tmux. ","date":"24-05-2023","objectID":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/:3:0","tags":["tmux"],"title":"Resolving the Issue of Random Character Generation When Clicking the Mouse in tmux","uri":"/posts/software/resolving-the-issue-of-random-character-generation-when-clicking-the-mouse-in-tmux/#conclusion"},{"categories":["Devops"],"collections":null,"content":"In this article, we will explore how to configure an Apache web server running in a Docker Compose environment with custom log settings and size limitation specifically for virtual host logs. We\u0026rsquo;ll cover the steps to set up the necessary volume mounts, configuration files, and log rotation to ensure that virtual host logs are limited in size. This will help you effectively manage disk space and maintain clean and manageable log files for your Apache server. ","date":"21-05-2023","objectID":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/:0:0","tags":["apache","docker"],"title":"Configuring Apache in Docker Compose with Custom Log Settings and Size Limitation for Virtual Hosts","uri":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/#"},{"categories":["Devops"],"collections":null,"content":"Prerequisites Basic knowledge of Docker and Docker Compose Familiarity with Apache web server and its configuration ","date":"21-05-2023","objectID":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/:1:0","tags":["apache","docker"],"title":"Configuring Apache in Docker Compose with Custom Log Settings and Size Limitation for Virtual 
Hosts","uri":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/#prerequisites"},{"categories":["Devops"],"collections":null,"content":"Step 1: Setting up the Docker Compose Environment Create a new directory for your Docker project. Create a docker-compose.yml file and define the Apache service using the desired base image. Configure the necessary port mappings for the Apache container. version: \u0026#39;3\u0026#39; services: apache: restart: always build: context: . dockerfile: Dockerfile ports: - 80:80 - 443:443 ","date":"21-05-2023","objectID":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/:2:0","tags":["apache","docker"],"title":"Configuring Apache in Docker Compose with Custom Log Settings and Size Limitation for Virtual Hosts","uri":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/#step-1-setting-up-the-docker-compose-environment"},{"categories":["Devops"],"collections":null,"content":"Step 2: Configuring Custom Log Settings and Size Limitation Create the required directories on the host machine for storing log files, if they don\u0026rsquo;t already exist. Modify the docker-compose.yml file to include the appropriate volume mounts for log files and configuration files: Mount the custom log configuration file, such as other-vhosts-access-log.conf, to the appropriate location inside the container (e.g., /etc/apache2/conf-available/). version: \u0026#39;3\u0026#39; services: apache: restart: always build: context: . 
dockerfile: Dockerfile ports: - 80:80 - 443:443 volumes: - ./conf-available/other-vhosts-access-log.conf:/etc/apache2/conf-available/other-vhosts-access-log.conf Open the custom log configuration file (other-vhosts-access-log.conf) and add the following directives to limit the size of virtual host logs using log rotation: CustomLog \u0026#34;|/path/to/rotatelogs -n 10 /var/log/apache2/other_vhosts_access.log 1M\u0026#34; vhost_combined Adjust the /path/to/rotatelogs to the actual path of the Rotatelogs program on your system. The -n 10 flag specifies that a maximum of 10 log files should be kept. The 1M argument specifies that each log file should have a maximum size of 1 megabyte. ","date":"21-05-2023","objectID":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/:3:0","tags":["apache","docker"],"title":"Configuring Apache in Docker Compose with Custom Log Settings and Size Limitation for Virtual Hosts","uri":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/#step-2-configuring-custom-log-settings-and-size-limitation"},{"categories":["Devops"],"collections":null,"content":"Step 3: Starting the Apache Container Run the docker-compose up -d command to start the Apache container. Apache will use the custom log configuration specified in the mounted configuration file, and virtual host log files will be limited in size according to the log rotation settings. 
","date":"21-05-2023","objectID":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/:4:0","tags":["apache","docker"],"title":"Configuring Apache in Docker Compose with Custom Log Settings and Size Limitation for Virtual Hosts","uri":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/#step-3-starting-the-apache-container"},{"categories":["Devops"],"collections":null,"content":"Conclusion By following the steps outlined in this article, you can configure an Apache web server running in a Docker Compose environment to use custom log settings with size limitation specifically for virtual host logs. This allows you to effectively manage disk space usage and maintain clean log files for your Apache server. With size-limited virtual host logs, you can ensure efficient log management and prevent log files from growing indefinitely. Note: Remember to adjust the file paths, directory names, and configuration files according to your specific setup. Additionally, monitor the log rotation and adjust the size limitation parameters based on your requirements and available disk space. ","date":"21-05-2023","objectID":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/:5:0","tags":["apache","docker"],"title":"Configuring Apache in Docker Compose with Custom Log Settings and Size Limitation for Virtual Hosts","uri":"/posts/devops/configuring-apache-in-docker-compose-with-custom-log-settings-and-size-limitation-for-virtual-hosts/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"GitLab CI/CD provides powerful capabilities for automating the deployment of applications. 
Docker Compose is a popular tool for defining and managing multi-container Docker applications. In this blog post, we\u0026rsquo;ll explore how to deploy a Docker Compose production YAML file on GitLab CI/CD, even if the runner server and production server are not in the same location. ","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:0:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#"},{"categories":["DevOps"],"collections":null,"content":"Prerequisites Understanding of Docker Compose and GitLab CI/CD concepts. ","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:1:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#prerequisites"},{"categories":["DevOps"],"collections":null,"content":"Step 1: Setting up the Runner To begin, set up a dedicated runner on the runner server. This runner will handle the execution of the deployment pipeline. Ensure that Docker is installed and properly configured on the runner server. ","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:2:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#step-1-setting-up-the-runner"},{"categories":["DevOps"],"collections":null,"content":"Step 2: Preparing the Docker Compose Production YAML Modify your Docker Compose production YAML file to suit your deployment requirements. 
Let\u0026rsquo;s assume you have the following Docker Compose production YAML: version: \u0026#34;3\u0026#34; services: web: image: registry.example.com/my-app:latest ports: - 80:80 In this example, the web service pulls the pre-built image registry.example.com/my-app:latest from a container registry. Adjust the image and any other services or configurations according to your application needs. ","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:3:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#step-2-preparing-the-docker-compose-production-yaml"},{"categories":["DevOps"],"collections":null,"content":"Step 3: Transferring Files to the Production Server In your .gitlab-ci.yml file, add a step to transfer the deployment package from the runner server to the production server using a secure file transfer protocol like SSH or SCP. Here\u0026rsquo;s an example configuration: deploy: stage: deploy tags: - runner script: - scp docker-compose-production.yml user@production-server:/path/on/production/ In this example, the scp command transfers the Docker Compose production YAML file (docker-compose-production.yml) from the runner server to the production server at the specified destination path (/path/on/production/). 
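In practice the deploy job also has to authorize SSH access before it can run scp; a sketch, assuming a masked CI/CD variable SSH_PRIVATE_KEY_B64 holding a base64-encoded private key that the production server accepts (variable and host names are examples):

```yaml
# Sketch only: SSH_PRIVATE_KEY_B64 is assumed to be a masked GitLab CI/CD
# variable; 'production-server' stands in for the real hostname.
deploy:
  stage: deploy
  tags:
    - runner
  before_script:
    - eval $(ssh-agent -s)
    # decode the key and load it into the agent
    - echo $SSH_PRIVATE_KEY_B64 | base64 -d | ssh-add -
    - mkdir -p ~/.ssh
    # record the host key so scp does not prompt interactively
    - ssh-keyscan production-server >> ~/.ssh/known_hosts
  script:
    - scp docker-compose-production.yml user@production-server:/path/on/production/
```

ssh-keyscan trusts whatever key the host presents on first contact; pinning a known fingerprint in advance is the stricter option.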
","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:4:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#step-3-transferring-files-to-the-production-server"},{"categories":["DevOps"],"collections":null,"content":"Step 4: Deployment Script on the Production Server On the production server, create a deployment script that runs the necessary commands to deploy the Docker Compose services from the transferred file. Since Step 3 transfers a plain YAML file rather than an archive, no extraction step is needed. Here\u0026rsquo;s an example deployment script: #!/bin/bash # Change to deployment directory cd /path/on/production # Run Docker Compose against the transferred file docker-compose -f docker-compose-production.yml up -d Customize the script to suit your specific setup and requirements. Ensure that the production server has Docker installed and properly configured. ","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:5:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#step-4-deployment-script-on-the-production-server"},{"categories":["DevOps"],"collections":null,"content":"Step 5: Configuring the Production Server Configure the production server to execute the deployment script when the deployment package is transferred. For example, you can have the CI job run the script over SSH after the transfer, or use a separate process that monitors the deployment directory for changes and triggers the deployment script accordingly. Ensure that you have appropriate security measures in place, such as secure file transfer protocols, proper access controls, and secure environment variables for any sensitive information required for the deployment process. 
","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:6:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#step-5-configuring-the-production-server"},{"categories":["DevOps"],"collections":null,"content":"Conclusion Deploying Docker Compose production YAML files on GitLab CI/CD enables you to automate and streamline your application deployment process. By following the steps outlined in this blog post, you can deploy your application across multiple servers, even if the runner and production servers are in different locations. Remember to prioritize security by using secure file transfer protocols and implementing appropriate access controls. Separating the build process from the production server enhances efficiency and consistency. GitLab CI/CD and Docker Compose are powerful tools that, when combined, offer a robust solution for deploying and managing containerized applications. Experiment with different deployment strategies to find the approach that best fits your project\u0026rsquo;s requirements. ","date":"17-05-2023","objectID":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/:7:0","tags":["gitlab","docker"],"title":"Deploying Docker Compose Production YAML on GitLab CI/CD","uri":"/posts/devops/deploying-docker-compose-production-yaml-on-gitlab-cicd/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"In certain situations, such as when administering a remote server running Xubuntu, it can be useful to be able to lock the screen remotely. This can help ensure the security and privacy of the system, especially when multiple users have access to the server. In this blog post, we will walk you through the process of remotely locking the Xubuntu screen from SSH. 
","date":"05-05-2023","objectID":"/posts/devops/lock-xubuntu-remotely-from-ssh/:0:0","tags":["linux"],"title":"Lock Xubuntu Remotely From SSH","uri":"/posts/devops/lock-xubuntu-remotely-from-ssh/#"},{"categories":["DevOps"],"collections":null,"content":"Prerequisites To follow this guide, you\u0026rsquo;ll need the following: Access to a server running Xubuntu. SSH client installed on your local machine. ","date":"05-05-2023","objectID":"/posts/devops/lock-xubuntu-remotely-from-ssh/:1:0","tags":["linux"],"title":"Lock Xubuntu Remotely From SSH","uri":"/posts/devops/lock-xubuntu-remotely-from-ssh/#prerequisites"},{"categories":["DevOps"],"collections":null,"content":"Step 1: Retrieving the DBUS Session Bus Address The first step is to open a Terminal inside the desktop session of the Xubuntu machine itself (not over SSH, since this variable is only set within the graphical session) and run the following command: echo $DBUS_SESSION_BUS_ADDRESS Make sure to note down the output of this command, as we will need it later in the process. ","date":"05-05-2023","objectID":"/posts/devops/lock-xubuntu-remotely-from-ssh/:2:0","tags":["linux"],"title":"Lock Xubuntu Remotely From SSH","uri":"/posts/devops/lock-xubuntu-remotely-from-ssh/#step-1-retrieving-the-dbus-session-bus-address"},{"categories":["DevOps"],"collections":null,"content":"Step 2: Editing the ~/.bashrc File Next, we need to edit the ~/.bashrc file on the remote server. This file contains various configurations and settings for the Bash shell. Open the file using your preferred text editor, for example: nano ~/.bashrc Navigate to the end of the file and add the following line: export DBUS_SESSION_BUS_ADDRESS=[YOUR_ADDRESS] Replace [YOUR_ADDRESS] with the output you obtained in Step 1. 
","date":"05-05-2023","objectID":"/posts/devops/lock-xubuntu-remotely-from-ssh/:3:0","tags":["linux"],"title":"Lock Xubuntu Remotely From SSH","uri":"/posts/devops/lock-xubuntu-remotely-from-ssh/#step-2-editing-the-bashrc-file"},{"categories":["DevOps"],"collections":null,"content":"Step 3: Saving and Sourcing the ~/.bashrc File After adding the export line, save the ~/.bashrc file and exit the text editor. In nano, you can do this by pressing Ctrl+X, then Y to confirm the changes and Enter to save. To make the changes take effect, source the ~/.bashrc file using the following command: source ~/.bashrc This will ensure that the environment variable we added is available for subsequent SSH sessions. ","date":"05-05-2023","objectID":"/posts/devops/lock-xubuntu-remotely-from-ssh/:4:0","tags":["linux"],"title":"Lock Xubuntu Remotely From SSH","uri":"/posts/devops/lock-xubuntu-remotely-from-ssh/#step-3-saving-and-sourcing-the-bashrc-file"},{"categories":["DevOps"],"collections":null,"content":"Step 4: Remotely Locking the Xubuntu Screen Now that we have set up the necessary environment variable, we can remotely lock the Xubuntu screen from your local machine. Use the SSH command with the xflock4 command as follows: ssh user@server xflock4 Replace \u0026ldquo;user\u0026rdquo; with your actual username and \u0026ldquo;server\u0026rdquo; with the IP address or hostname of the remote server. ","date":"05-05-2023","objectID":"/posts/devops/lock-xubuntu-remotely-from-ssh/:5:0","tags":["linux"],"title":"Lock Xubuntu Remotely From SSH","uri":"/posts/devops/lock-xubuntu-remotely-from-ssh/#step-4-remotely-locking-the-xubuntu-screen"},{"categories":["DevOps"],"collections":null,"content":"Conclusion By following this step-by-step guide, you can now remotely lock the Xubuntu screen from an SSH session. This can be particularly useful when managing a server with multiple users, ensuring that the screen is locked when it is not in use. 
Keep in mind that this process assumes you have the necessary permissions and access to the remote server. ","date":"05-05-2023","objectID":"/posts/devops/lock-xubuntu-remotely-from-ssh/:6:0","tags":["linux"],"title":"Lock Xubuntu Remotely From SSH","uri":"/posts/devops/lock-xubuntu-remotely-from-ssh/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"Cloning multiple GitHub repositories with different deployment keys can be useful when you need to access multiple repositories using different SSH keys associated with the same GitHub account. This guide provides step-by-step instructions on how to clone multiple repositories with different deployment keys while using the same username. By following these steps, you can streamline your workflow and manage multiple repositories more efficiently. ","date":"05-05-2023","objectID":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/:0:0","tags":["github","git"],"title":"Clone Multiple Github Repositories With Different Deployment Keys But The Same Username","uri":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/#"},{"categories":["DevOps"],"collections":null,"content":"Step 1: Generate Deployment Keys To begin, generate a deployment key for each repository you want to clone. Use the ssh-keygen command on your local machine to create a unique key for each repository. 
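The key-generation step above can be sketched as follows (the ed25519 key type, file names, and comments are examples; an empty passphrase is assumed so that automated clones need no prompt, so protect the private keys accordingly):

```shell
# One key pair per repository; file names below are examples.
mkdir -p $HOME/.ssh
# -N '' sets an empty passphrase for unattended use
ssh-keygen -t ed25519 -N '' -f $HOME/.ssh/repo1_key -C 'deploy key: repo1'
ssh-keygen -t ed25519 -N '' -f $HOME/.ssh/repo2_key -C 'deploy key: repo2'
```

The `.pub` files produced next to each private key are what you paste into GitHub in the next step.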
","date":"05-05-2023","objectID":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/:1:0","tags":["github","git"],"title":"Clone Multiple Github Repositories With Different Deployment Keys But The Same Username","uri":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/#step-1-generate-deployment-keys"},{"categories":["DevOps"],"collections":null,"content":"Step 2: Add Deployment Keys to Your GitHub Account Next, open each repository\u0026rsquo;s \u0026ldquo;Settings\u0026rdquo; page on GitHub. Access the \u0026ldquo;Deploy keys\u0026rdquo; section and click the \u0026ldquo;Add deploy key\u0026rdquo; button. Give each key a descriptive title and paste the contents of the respective public key file into the \u0026ldquo;Key\u0026rdquo; field, then click \u0026ldquo;Add key.\u0026rdquo; Note that GitHub does not allow the same deploy key to be attached to more than one repository, which is why each repository needs its own key. ","date":"05-05-2023","objectID":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/:2:0","tags":["github","git"],"title":"Clone Multiple Github Repositories With Different Deployment Keys But The Same Username","uri":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/#step-2-add-deployment-keys-to-your-github-account"},{"categories":["DevOps"],"collections":null,"content":"Step 3: Create a Configuration File Using your terminal, navigate to the .ssh directory by running cd ~/.ssh/. If a configuration file doesn\u0026rsquo;t already exist, create one by running touch config. Open the configuration file with a text editor like nano using the command nano config. 
","date":"05-05-2023","objectID":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/:3:0","tags":["github","git"],"title":"Clone Multiple Github Repositories With Different Deployment Keys But The Same Username","uri":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/#step-3-create-a-configuration-file"},{"categories":["DevOps"],"collections":null,"content":"Step 4: Configure the Configuration File In the configuration file, add an entry for each repository you wish to clone. For each entry, specify a host alias, the real hostname, the git user, and the path to the private key file. Use the format shown below as an example: Host github.com-repo1 HostName github.com User git IdentityFile ~/.ssh/repo1_key Host github.com-repo2 HostName github.com User git IdentityFile ~/.ssh/repo2_key Save the configuration file by pressing Ctrl + X, then Y, and finally Enter. 
","date":"05-05-2023","objectID":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/:5:0","tags":["github","git"],"title":"Clone Multiple Github Repositories With Different Deployment Keys But The Same Username","uri":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/#step-5-clone-each-repository"},{"categories":["DevOps"],"collections":null,"content":"Conclusion By following these steps, you can successfully clone multiple GitHub repositories with different deployment keys but using the same username. This approach enables efficient management of multiple repositories within a single GitHub account. ","date":"05-05-2023","objectID":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/:6:0","tags":["github","git"],"title":"Clone Multiple Github Repositories With Different Deployment Keys But The Same Username","uri":"/posts/devops/clone-multiple-github-repositories-with-different-deployment-keys-but-the-same-username-copy/#conclusion"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re using Ubuntu as your operating system on a MacBook and have recently updated the kernel, you might encounter issues with your camera not working. This problem can be resolved by reinstalling the necessary drivers. We will guide you through the steps to fix the camera problem and get it up and running again. ","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:0:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 1: Open Terminal To begin, open the Terminal on your MacBook. 
You can do this by searching for \u0026ldquo;Terminal\u0026rdquo; in the Applications menu or by using the shortcut Ctrl+Alt+T. ","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:1:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#step-1-open-terminal"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 2: Change Directory In the Terminal, navigate to the directory where you previously cloned the \u0026lsquo;bcwc_pcie\u0026rsquo; driver repository (the FacetimeHD camera driver source). For example: cd bcwc_pcie ","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:2:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#step-2-change-directory"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 3: Compile the Drivers Once you\u0026rsquo;re in the \u0026lsquo;bcwc_pcie\u0026rsquo; directory, compile the camera drivers by running the following command: make ","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:3:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#step-3-compile-the-drivers"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 4: Install the Drivers After the compilation process is complete, install the drivers using the following command: sudo make install 
","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:4:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#step-4-install-the-drivers"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 5: Update Kernel Modules To update the kernel modules, enter the following commands one by one: sudo depmod sudo modprobe -r bdc_pci sudo modprobe facetimehd ","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:5:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#step-5-update-kernel-modules"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 6: Configure Module Loading To ensure the \u0026lsquo;facetimehd\u0026rsquo; module loads automatically on startup, execute the following command: echo facetimehd | sudo tee -a /etc/modules \u0026gt; /dev/null ","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:6:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#step-6-configure-module-loading"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Conclusion By following these steps, you should be able to resolve camera issues caused by a kernel update on your MacBook running Ubuntu. Reinstalling the camera drivers and updating the kernel modules will help restore the functionality of your camera. 
If you encounter any difficulties during the process, feel free to seek assistance from the Ubuntu community or consult the documentation. Remember to always keep your system and drivers up to date to avoid potential compatibility issues. ","date":"20-04-2023","objectID":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/:7:0","tags":["linux","mac"],"title":"Fixing Camera Issues After Kernel Update on MacBook with Ubuntu OS","uri":"/posts/software/fixing-camera-issues-after-kernel-update-on-macbook-with-ubuntu-os/#conclusion"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re working on a Mac and using Git, you might run into an issue where Git won\u0026rsquo;t replace a folder that has had a change in case. For example, if you have a folder called \u0026ldquo;Article\u0026rdquo; and you change it to \u0026ldquo;article\u0026rdquo;, Git might not recognize the change and won\u0026rsquo;t replace the old folder with the new one. Fortunately, there is a solution to this problem. You can configure Git to not ignore case sensitivity by running the following command: git config core.ignorecase false This will tell Git to consider changes in case when replacing files or folders. It\u0026rsquo;s important to note that changing this setting may cause issues if you\u0026rsquo;re working on a project with others who are using different operating systems. Windows, for example, is not case sensitive, so if you change this setting, it may cause issues for Windows users. 
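If you prefer not to change core.ignorecase, a two-step rename is a common alternative that works even on case-insensitive filesystems; a self-contained sketch in a throwaway repository (paths and commit identities are examples):

```shell
# Demonstration in a throwaway repository: rename a folder that differs
# only by case using a two-step move.
repo=$(mktemp -d)
git -C $repo init -q
mkdir $repo/Article
echo hello | tee $repo/Article/post.md
git -C $repo add .
git -C $repo -c user.email=demo@example.com -c user.name=demo commit -qm initial
# step 1: move to a temporary name; step 2: move to the final lowercase name
git -C $repo mv Article tmp-article
git -C $repo mv tmp-article article
git -C $repo -c user.email=demo@example.com -c user.name=demo commit -qm 'rename Article to article'
git -C $repo ls-files
```

On a case-sensitive filesystem a single git mv also works; the two-step form is simply the portable habit.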
","date":"19-04-2023","objectID":"/posts/development/fixing-git-commit-not-replacing-folder-with-changed-case-on-mac/:0:0","tags":["git","mac"],"title":"Fixing Git Commit Not Replacing Folder with Changed Case on Mac","uri":"/posts/development/fixing-git-commit-not-replacing-folder-with-changed-case-on-mac/#"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Cryptomator is a popular open-source encryption software that allows users to secure their files and folders by creating encrypted vaults. However, like any software, Cryptomator may encounter occasional issues. One common problem on macOS is the inability to unlock a vault due to MacFuse hang. In this article, we will guide you through the process of reloading the MacFuse kernel extension to resolve this issue and regain access to your encrypted vault. ","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:0:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Step 1: Identify the Issue Before proceeding with the fix, it\u0026rsquo;s essential to confirm that the problem you\u0026rsquo;re experiencing is indeed related to MacFuse hang. If you encounter an error or Cryptomator becomes unresponsive while attempting to unlock a vault, it may indicate a MacFuse issue. 
","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:1:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#step-1-identify-the-issue"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Step 2: Open Terminal To execute the necessary commands, you\u0026rsquo;ll need to use the Terminal application on your macOS system. You can find it in the \u0026ldquo;Applications\u0026rdquo; folder under the \u0026ldquo;Utilities\u0026rdquo; subfolder. Alternatively, you can use Spotlight (Cmd + Space) and search for \u0026ldquo;Terminal.\u0026rdquo; ","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:2:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#step-2-open-terminal"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Step 3: Unload the MacFuse Kernel Extension In the Terminal window, enter the following command and press Enter: sudo kextunload -b io.macfuse.filesystems.macfuse You will be prompted to enter your administrator password. Type it in and press Enter. Note that when entering your password, no characters will be displayed on the screen, but the input is being registered. 
","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:3:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#step-3-unload-the-macfuse-kernel-extension"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Step 4: Wait for Unloading Process Once you\u0026rsquo;ve entered the password, the system will start unloading the MacFuse kernel extension. This may take a few moments. Be patient and allow the process to complete. ","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:4:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#step-4-wait-for-unloading-process"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Step 5: Reload the MacFuse Kernel Extension After the unloading process finishes, it\u0026rsquo;s time to reload the MacFuse kernel extension. Enter the following command in the Terminal window and press Enter: sudo kextload -b io.macfuse.filesystems.macfuse Again, you\u0026rsquo;ll be prompted to enter your administrator password. Type it in and press Enter. 
","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:5:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#step-5-reload-the-macfuse-kernel-extension"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Step 6: Confirm Successful Reload Once the command executes, the MacFuse kernel extension should be reloaded successfully. You should see relevant system messages indicating that the extension has been loaded. This indicates that the fix has been applied. ","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:6:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#step-6-confirm-successful-reload"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Step 7: Test Cryptomator Now that the MacFuse kernel extension has been reloaded, launch Cryptomator and attempt to unlock the vault that was previously causing issues. The problem with MacFuse hang should be resolved, and you should be able to access your encrypted files and folders without any further hindrance. 
","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:7:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#step-7-test-cryptomator"},{"categories":["Troubleshooting","Software"],"collections":null,"content":"Conclusion If you encounter difficulties unlocking a Cryptomator vault on macOS due to MacFuse hang, the steps outlined above can help you resolve the issue. By unloading and reloading the MacFuse kernel extension using Terminal commands, you can restore functionality and regain access to your encrypted vault. Remember to follow the instructions carefully and ensure you have administrative privileges on your system to execute the commands. ","date":"27-11-2022","objectID":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/:8:0","tags":["mac"],"title":"Fixing Vault Unlock Issues on macOS Due to MacFuse Hang on Cryptomator","uri":"/posts/software/fixing-vault-unlock-issues-on-macos-due-to-macfuse-hang-on-cryptomator-/#conclusion"},{"categories":["Development"],"collections":null,"content":"Overview A hostname naming convention ensures consistency, clarity, and scalability in IT infrastructure. The naming scheme follows a structured format that includes the function, location, and serial number of a host. 
","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:1:0","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#overview"},{"categories":["Development"],"collections":null,"content":"Example Hostname Structure hcluster3-web1.sjc.sl.example.net","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:1:1","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#example-hostname-structure"},{"categories":["Development"],"collections":null,"content":"Breakdown of the Example hcluster3 – Cluster identifier web1 – Server type and serial number sjc – Location code (San Jose, CA) sl – Site-specific identifier example.net – Domain name ","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:1:2","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#breakdown-of-the-example"},{"categories":["Development"],"collections":null,"content":"Naming Categories ","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:2:0","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#naming-categories"},{"categories":["Development"],"collections":null,"content":"Primary Host Functions A hostname\u0026rsquo;s function is specified using a standard set of abbreviations. The serial number follows the function abbreviation. Function Description app Application Server (non-web) sql Database Server ftp SFTP Server mta Mail Server dns Name Server cfg Configuration Management (Puppet, Ansible, etc.) mon Monitoring Server (Nagios, Sensu, etc.) prx Proxy/Load Balancer (software) ssh SSH Jump/Bastion Host sto Storage Server vcs Version Control Software Server (Git, SVN, CVS, etc.) 
vmm Virtual Machine Manager web Web Server ","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:2:1","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#primary-host-functions"},{"categories":["Development"],"collections":null,"content":"Special Device Naming Certain devices have unique designations to indicate their specialized function. Device Description con Console/Terminal Server fwl Firewall lbl Load Balancer (physical) rtr L3 Router swt L2 Switch vpn VPN Gateway pdu Power Distribution Unit ups Uninterruptible Power Supply ","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:2:2","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#special-device-naming"},{"categories":["Development"],"collections":null,"content":"Complete Naming Scheme Examples ","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:3:0","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#complete-naming-scheme-examples"},{"categories":["Development"],"collections":null,"content":"Example 1 crimson.example.com. A 192.0.2.11 crimson.lan.example.com. A 10.0.2.11 crimson.oob.example.com. A 10.42.2.11 web01.prd.nyc.example.com. CNAME crimson.example.com.","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:3:1","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#example-1"},{"categories":["Development"],"collections":null,"content":"Example 2 melody.example.com. A 192.0.2.12 melody.lan.example.com. A 10.0.2.12 melody.oob.example.com. A 10.42.2.12 web02.prd.nyc.example.com. 
CNAME melody.example.com.","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:3:2","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#example-2"},{"categories":["Development"],"collections":null,"content":"Example 3 verona.example.com. A 192.0.2.13 verona.lan.example.com. A 10.0.2.13 verona.oob.example.com. A 10.42.2.13 cfg01.prd.nyc.example.com. CNAME verona.example.com. mon01.prd.nyc.example.com. CNAME verona.example.com. puppet.example.com. CNAME verona.example.com. nagios.example.com. CNAME verona.example.com.","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:3:3","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#example-3"},{"categories":["Development"],"collections":null,"content":"Example 4 benji.example.com. A 192.0.2.104 benji.lan.example.com. A 10.0.2.104 benji.oob.example.com. A 10.42.2.104 web01.dev.pdx.example.com. CNAME benji.example.com. martinlutherkingsr.melblanc.kugupu.stevejob.kenkesey.music.filmhistory.calligraphy.example.com CNAME benji.example.com.","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:3:4","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#example-4"},{"categories":["Development"],"collections":null,"content":"Conclusion A structured hostname naming convention simplifies system management, troubleshooting, and automation. By following this standardized approach, organizations can maintain consistency across their IT infrastructure. 
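The functional CNAMEs in the examples above (web01.prd.nyc.example.com, cfg01.prd.nyc.example.com) can be parsed mechanically. Here is a small Python sketch of such a parser — the function abbreviations and the function/serial/environment/location/domain shape come from this post, but the helper itself is hypothetical:

```python
import re

# Hypothetical parser for the functional CNAMEs shown above, e.g.
# web01.prd.nyc.example.com -> function "web", serial "01", env "prd",
# location "nyc". The abbreviations come from the tables in this post.
FUNCTIONS = {"app", "sql", "ftp", "mta", "dns", "cfg", "mon",
             "prx", "ssh", "sto", "vcs", "vmm", "web"}

HOST_RE = re.compile(
    r"^(?P<func>[a-z]{3})(?P<serial>\d{2})"
    r"\.(?P<env>[a-z]{3})\.(?P<loc>[a-z]{3})\.(?P<domain>[a-z0-9.-]+)$"
)

def parse_host(fqdn: str) -> dict:
    """Split a functional hostname into its convention parts."""
    m = HOST_RE.match(fqdn)
    if m is None or m.group("func") not in FUNCTIONS:
        raise ValueError(f"{fqdn!r} does not follow the naming scheme")
    return m.groupdict()

parts = parse_host("web01.prd.nyc.example.com")
print(parts["func"], parts["serial"], parts["loc"])  # web 01 nyc
```

A validator like this is one way such a convention pays off: it can be enforced in provisioning scripts instead of relying on people remembering the tables.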
","date":"23-10-2022","objectID":"/posts/development/hostname-naming-conventions/:4:0","tags":["network"],"title":"Hostname Naming Conventions","uri":"/posts/development/hostname-naming-conventions/#conclusion"},{"categories":["Development"],"collections":null,"content":"In React Beautiful DND, you can override the zIndex property when an item is being dragged by adding a style property to the provided.draggableProps object inside the \u0026lt;Draggable\u0026gt; component. Setting the zIndex to 0 in this style gives the dragged item a lower zIndex value than other elements, which is useful for controlling the stacking order during the drag operation. Here\u0026rsquo;s an example with the relevant line highlighted: \u0026lt;Draggable key={link.id} draggableId={link.id} index={index} \u0026gt; {(provided) =\u0026gt; ( \u0026lt;div ref={provided.innerRef} {...provided.draggableProps} id={`link-${index}`} style={{ ...provided.draggableProps.style, zIndex: 0 }} \u0026gt; {/* ... */} \u0026lt;/div\u0026gt; )} \u0026lt;/Draggable\u0026gt; By adding style={{ ...provided.draggableProps.style, zIndex: 0 }}, you\u0026rsquo;re ensuring that the zIndex of the dragged element is set to 0 during the drag operation. You can adjust the zIndex value as needed to control the stacking order relative to other elements on your page. This technique is useful when you want to control the visual appearance of the dragged item and its stacking order during drag and drop operations within your React Beautiful DND application. 
","date":"29-09-2022","objectID":"/posts/development/react-beautiful-dnd-override-z-index-when-dragging/:0:0","tags":null,"title":"React Beautiful DND Override Z Index When Dragging","uri":"/posts/development/react-beautiful-dnd-override-z-index-when-dragging/#"},{"categories":["Development"],"collections":null,"content":"When tracking and managing issues, it is important to clearly define their status to ensure efficient resolution. Below are the commonly used issue status descriptions and their meanings: ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:0","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#"},{"categories":["Development"],"collections":null,"content":"New An issue is marked as \u0026ldquo;New\u0026rdquo; when it has been reported but has not yet been assigned to any individual or group for investigation. ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:1","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#new"},{"categories":["Development"],"collections":null,"content":"Assigned An issue moves to the \u0026ldquo;Assigned\u0026rdquo; status when a specific person has been designated to handle it. The assigned individual is listed in the Assignee field. 
","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:2","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#assigned"},{"categories":["Development"],"collections":null,"content":"Accepted Once the assignee acknowledges the issue and starts working on it, the status changes to \u0026ldquo;Accepted.\u0026rdquo; ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:3","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#accepted"},{"categories":["Development"],"collections":null,"content":"Fixed When an issue has been successfully addressed, it is marked as \u0026ldquo;Fixed.\u0026rdquo; ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:4","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#fixed"},{"categories":["Development"],"collections":null,"content":"Fixed (Verified) After an issue is fixed, it undergoes verification. If the fix is confirmed to be correct, the status updates to \u0026ldquo;Fixed (Verified).\u0026rdquo; ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:5","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#fixed-verified"},{"categories":["Development"],"collections":null,"content":"Won\u0026rsquo;t Fix (Not Reproducible) An issue is marked as \u0026ldquo;Won\u0026rsquo;t Fix (Not Reproducible)\u0026rdquo; if there is insufficient information to resolve it or if the reported problem cannot be recreated. 
","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:6","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#won"},{"categories":["Development"],"collections":null,"content":"Won\u0026rsquo;t Fix (Intended Behavior) If the reported issue describes an expected behavior of the system, it is labeled as \u0026ldquo;Won\u0026rsquo;t Fix (Intended Behavior).\u0026rdquo; ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:7","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#won-1"},{"categories":["Development"],"collections":null,"content":"Won\u0026rsquo;t Fix (Obsolete) An issue becomes \u0026ldquo;Won\u0026rsquo;t Fix (Obsolete)\u0026rdquo; when it is no longer relevant due to updates or changes in the product. ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:8","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#won-2"},{"categories":["Development"],"collections":null,"content":"Won\u0026rsquo;t Fix (Infeasible) If the required changes to resolve an issue are not practical or possible, the status is set to \u0026ldquo;Won\u0026rsquo;t Fix (Infeasible).\u0026rdquo; ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:9","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#won-3"},{"categories":["Development"],"collections":null,"content":"Duplicate When an issue has already been reported in another ticket, it is marked as \u0026ldquo;Duplicate.\u0026rdquo; Proper documentation should be followed to link the issue to the existing report. 
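The statuses above map naturally onto a small enumeration. The following Python sketch is purely illustrative — the status names come from this list, while the class and its is_closed helper are hypothetical:

```python
from enum import Enum

class IssueStatus(Enum):
    """The issue statuses described above, with their display names."""
    NEW = "New"
    ASSIGNED = "Assigned"
    ACCEPTED = "Accepted"
    FIXED = "Fixed"
    FIXED_VERIFIED = "Fixed (Verified)"
    WONT_FIX_NOT_REPRODUCIBLE = "Won't Fix (Not Reproducible)"
    WONT_FIX_INTENDED = "Won't Fix (Intended Behavior)"
    WONT_FIX_OBSOLETE = "Won't Fix (Obsolete)"
    WONT_FIX_INFEASIBLE = "Won't Fix (Infeasible)"
    DUPLICATE = "Duplicate"

    @property
    def is_closed(self) -> bool:
        # Anything fixed, declined, or duplicated needs no further work;
        # New, Assigned, and Accepted are still open.
        return self not in (IssueStatus.NEW, IssueStatus.ASSIGNED,
                            IssueStatus.ACCEPTED)

print(IssueStatus("Fixed (Verified)").is_closed)  # True
```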
By maintaining a standardized issue status system, teams can efficiently manage and resolve reported problems while keeping all stakeholders informed. ","date":"20-09-2022","objectID":"/posts/development/issue-status-description-naming/:0:10","tags":["project management"],"title":"Issue Status Description Naming","uri":"/posts/development/issue-status-description-naming/#duplicate"},{"categories":["Development"],"collections":null,"content":"Spring Data JPA provides a powerful method-naming convention for query generation. By following these conventions, developers can create readable and efficient queries without writing explicit SQL or JPQL. This article explores various query method naming strategies, including equality, similarity, comparison conditions, multiple condition expressions, sorting, and recent changes in CrudRepository. ","date":"29-08-2022","objectID":"/posts/development/query-repository-method-naming/:0:0","tags":["java"],"title":"Query Repository Method Naming","uri":"/posts/development/query-repository-method-naming/#"},{"categories":["Development"],"collections":null,"content":"1. Equality Condition Keywords Exact equality is a common condition in queries. We have several options to express = or IS operators: Append the property name without a keyword for an exact match: List\u0026lt;User\u0026gt; findByName(String name); Use Is or Equals for readability: List\u0026lt;User\u0026gt; findByNameIs(String name); List\u0026lt;User\u0026gt; findByNameEquals(String name); Express inequality with IsNot: List\u0026lt;User\u0026gt; findByNameIsNot(String name); Spring Data JPA automatically handles null parameters as IS NULL. 
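As an aside, the way these equality keywords map to SQL can be illustrated with a toy translator. This is a simplified sketch of the idea, not Spring Data JPA's actual parser, and the where_clause helper is hypothetical:

```python
# Toy translator from derived-query method names to WHERE clauses.
# A hypothetical sketch to illustrate the equality keywords above — this
# is not Spring Data JPA's real parser.
KEYWORDS = [
    ("IsNotNull", "{prop} IS NOT NULL"),
    ("IsNull", "{prop} IS NULL"),
    ("IsNot", "{prop} <> ?"),
    ("Equals", "{prop} = ?"),
    ("Is", "{prop} = ?"),
    ("True", "{prop} = TRUE"),
    ("False", "{prop} = FALSE"),
]

def where_clause(method: str) -> str:
    """Translate e.g. 'findByNameIsNot' into 'name <> ?'."""
    rest = method.removeprefix("findBy")
    # Longer keywords first, so IsNotNull wins over IsNot and Is.
    for keyword, template in KEYWORDS:
        if rest.endswith(keyword):
            prop = rest[: -len(keyword)]
            return template.format(prop=prop.lower())
    # A bare property name means an exact match.
    return f"{rest.lower()} = ?"

print(where_clause("findByNameIsNot"))   # name <> ?
print(where_clause("findByActiveTrue"))  # active = TRUE
```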
We can explicitly use IsNull or IsNotNull: List\u0026lt;User\u0026gt; findByNameIsNull(); List\u0026lt;User\u0026gt; findByNameIsNotNull(); For boolean fields, True and False keywords add equality conditions: List\u0026lt;User\u0026gt; findByActiveTrue(); List\u0026lt;User\u0026gt; findByActiveFalse(); ","date":"29-08-2022","objectID":"/posts/development/query-repository-method-naming/:1:0","tags":["java"],"title":"Query Repository Method Naming","uri":"/posts/development/query-repository-method-naming/#1-equality-condition-keywords"},{"categories":["Development"],"collections":null,"content":"2. Similarity Condition Keywords For pattern-based queries: Match values starting with a prefix: List\u0026lt;User\u0026gt; findByNameStartingWith(String prefix); Match values ending with a suffix: List\u0026lt;User\u0026gt; findByNameEndingWith(String suffix); Match values containing a substring: List\u0026lt;User\u0026gt; findByNameContaining(String infix); For custom patterns, use Like: List\u0026lt;User\u0026gt; findByNameLike(String likePattern); Example usage: String likePattern = \u0026#34;a%b%c\u0026#34;; userRepository.findByNameLike(likePattern); ","date":"29-08-2022","objectID":"/posts/development/query-repository-method-naming/:2:0","tags":["java"],"title":"Query Repository Method Naming","uri":"/posts/development/query-repository-method-naming/#2-similarity-condition-keywords"},{"categories":["Development"],"collections":null,"content":"3. 
Comparison Condition Keywords For numeric comparisons: List\u0026lt;User\u0026gt; findByAgeLessThan(Integer age); List\u0026lt;User\u0026gt; findByAgeLessThanEqual(Integer age); List\u0026lt;User\u0026gt; findByAgeGreaterThan(Integer age); List\u0026lt;User\u0026gt; findByAgeGreaterThanEqual(Integer age); To find users between two ages: List\u0026lt;User\u0026gt; findByAgeBetween(Integer startAge, Integer endAge); For collection-based queries: List\u0026lt;User\u0026gt; findByAgeIn(Collection\u0026lt;Integer\u0026gt; ages); For date comparisons: List\u0026lt;User\u0026gt; findByBirthDateAfter(ZonedDateTime birthDate); List\u0026lt;User\u0026gt; findByBirthDateBefore(ZonedDateTime birthDate); ","date":"29-08-2022","objectID":"/posts/development/query-repository-method-naming/:3:0","tags":["java"],"title":"Query Repository Method Naming","uri":"/posts/development/query-repository-method-naming/#3-comparison-condition-keywords"},{"categories":["Development"],"collections":null,"content":"4. Multiple Condition Expressions We can combine conditions using And and Or: List\u0026lt;User\u0026gt; findByNameOrBirthDate(String name, ZonedDateTime birthDate); List\u0026lt;User\u0026gt; findByNameOrBirthDateAndActive(String name, ZonedDateTime birthDate, Boolean active); And takes precedence over Or, following Java’s operator precedence. For complex queries, consider using the @Query annotation. ","date":"29-08-2022","objectID":"/posts/development/query-repository-method-naming/:4:0","tags":["java"],"title":"Query Repository Method Naming","uri":"/posts/development/query-repository-method-naming/#4-multiple-condition-expressions"},{"categories":["Development"],"collections":null,"content":"5. 
Sorting the Results Sorting can be applied with OrderBy: List\u0026lt;User\u0026gt; findByNameOrderByName(String name); List\u0026lt;User\u0026gt; findByNameOrderByNameAsc(String name); For descending order: List\u0026lt;User\u0026gt; findByNameOrderByNameDesc(String name); ","date":"29-08-2022","objectID":"/posts/development/query-repository-method-naming/:5:0","tags":["java"],"title":"Query Repository Method Naming","uri":"/posts/development/query-repository-method-naming/#5-sorting-the-results"},{"categories":["Development"],"collections":null,"content":"6. findOne vs. findById in CrudRepository Spring Data 2.x (used by Spring Boot 2.x) replaced findOne with findById, which returns an Optional: Optional\u0026lt;User\u0026gt; user = userRepository.findById(1L); The findById() method is already defined in CrudRepository, eliminating the need for custom implementations. By following these naming conventions, we can write concise, readable, and efficient queries in Spring Data JPA. ","date":"29-08-2022","objectID":"/posts/development/query-repository-method-naming/:6:0","tags":["java"],"title":"Query Repository Method Naming","uri":"/posts/development/query-repository-method-naming/#6-findone-vs-findbyid-in-crudrepository"},{"categories":["Development"],"collections":null,"content":"User Story Titles A user story defines a behavior or feature that a solution must implement to meet user needs. The recommended formats for user story titles are: As \u0026lt;a persona/type of user\u0026gt;, I want \u0026lt;something\u0026gt; so that \u0026lt;some reason\u0026gt; Example: As Sam Spendsalot, I want one-click purchase so that I can get my goods as quickly as possible. As a \u0026lt;persona/type of user\u0026gt;, I want \u0026lt;something\u0026gt; Example: As a User, I want to create a task. \u0026lt;persona/type of user\u0026gt; \u0026lt;performs action on\u0026gt; \u0026lt;thing\u0026gt; Example: User visits home page OR User creates a task. These formats are based on Microsoft’s MSDN, which credits Mike Cohn at Mountain Goat Software. 
","date":"17-08-2022","objectID":"/posts/development/issue-title-naming/:1:0","tags":["project management"],"title":"Issue Title Naming","uri":"/posts/development/issue-title-naming/#user-story-titles"},{"categories":["Development"],"collections":null,"content":"Example of Removing a Feature Example: As the product owner of product X, I want feature Y to be removed so that our UI is more streamlined and only provides features that are genuinely useful to our customers. ","date":"17-08-2022","objectID":"/posts/development/issue-title-naming/:1:1","tags":["project management"],"title":"Issue Title Naming","uri":"/posts/development/issue-title-naming/#example-of-removing-a-feature"},{"categories":["Development"],"collections":null,"content":"Bug Titles A bug is a defect that impairs a product or service’s functionality. The recommended formats for bug titles are: \u0026lt;person/type of user\u0026gt; can’t \u0026lt;perform action/get result\u0026gt; Example: New User can’t view home screen. When \u0026lt;performing some action/event occurs\u0026gt;, the \u0026lt;system feature\u0026gt; doesn’t work When \u0026lt;persona/type of user\u0026gt; \u0026lt;performs some action\u0026gt;, the \u0026lt;system feature\u0026gt; doesn’t work \u0026lt;system feature\u0026gt; doesn’t work \u0026lt;system feature\u0026gt; should \u0026lt;expected behavior\u0026gt; but doesn’t \u0026lt;system feature\u0026gt; \u0026lt;is not/does not\u0026gt; \u0026lt;expected behavior\u0026gt; \u0026lt;persona/user type\u0026gt; \u0026lt;gets result\u0026gt; but should \u0026lt;get different result\u0026gt; \u0026lt;quick name\u0026gt;. \u0026lt;one of the formats above\u0026gt; Example: \u0026ldquo;Broken button. New User can’t click the Next button on Step 2 of the Wizard.\u0026rdquo; These formats are based on an analysis of close to 5,000 tasks across different organizations, projects, and teams. 
","date":"17-08-2022","objectID":"/posts/development/issue-title-naming/:2:0","tags":["project management"],"title":"Issue Title Naming","uri":"/posts/development/issue-title-naming/#bug-titles"},{"categories":["Development"],"collections":null,"content":"Task Titles Tasks refer to activities that need to be performed but do not fall into other categories like user stories or bugs. The recommended formats for task titles are: \u0026lt;verb/action\u0026gt; \u0026lt;activity\u0026gt; Example: Perform backup. \u0026lt;verb/action\u0026gt; \u0026lt;thing\u0026gt; Example: Research new JavaScript framework. These formats are derived from the analysis of real-world data. ","date":"17-08-2022","objectID":"/posts/development/issue-title-naming/:3:0","tags":["project management"],"title":"Issue Title Naming","uri":"/posts/development/issue-title-naming/#task-titles"},{"categories":["Development"],"collections":null,"content":"New Feature Titles New feature tasks are used mainly for services or components that are somewhat removed from the end user, such as API endpoints. The recommended formats for new feature titles are: Implement \u0026lt;endpoint\u0026gt; Example: Implement POST /api/v1/users. Create endpoint \u0026lt;endpoint\u0026gt; Example: Create endpoint POST /api/v1/users. ","date":"17-08-2022","objectID":"/posts/development/issue-title-naming/:4:0","tags":["project management"],"title":"Issue Title Naming","uri":"/posts/development/issue-title-naming/#new-feature-titles"},{"categories":["Development"],"collections":null,"content":"Improvement Titles Improvement tasks involve minor modifications to existing functionality. The recommended formats for improvement titles are: \u0026lt;endpoint\u0026gt; \u0026gt; also \u0026lt;additional functionality\u0026gt; Example: POST /api/v1/users \u0026gt; also accept date of birth. 
\u0026lt;component\u0026gt; \u0026gt; also \u0026lt;additional functionality\u0026gt; Make \u0026lt;feature\u0026gt; run faster Improve the performance of \u0026lt;feature/screen/endpoint\u0026gt; Update \u0026lt;feature\u0026gt; \u0026lt;with/to\u0026gt; \u0026lt;update\u0026gt; Rename \u0026lt;feature/text\u0026gt; to \u0026lt;new name\u0026gt; ","date":"17-08-2022","objectID":"/posts/development/issue-title-naming/:5:0","tags":["project management"],"title":"Issue Title Naming","uri":"/posts/development/issue-title-naming/#improvement-titles"},{"categories":["Development"],"collections":null,"content":"SQLite is a popular, lightweight, and serverless database engine used in various applications. However, when working with SQLite on a Windows Subsystem for Linux (WSL2) in Windows 11, you may encounter issues related to file locking. In this article, we will explore the problem, its possible cause, and a solution to resolve it. ","date":"13-07-2022","objectID":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/:0:0","tags":null,"title":"Can't Open SQLite on WSL2 Windows 11","uri":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/#"},{"categories":["Development"],"collections":null,"content":"Problems The primary issue you might face when trying to open an SQLite file located on WSL2 is that the database file is locked. This can prevent you from performing read and write operations on the database, making it challenging to work with SQLite effectively. ","date":"13-07-2022","objectID":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/:1:0","tags":null,"title":"Can't Open SQLite on WSL2 Windows 11","uri":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/#problems"},{"categories":["Development"],"collections":null,"content":"Possible Cause The locking issue often arises due to the difference in file systems and file path conventions between Windows and Linux. 
When you access files within WSL2, they are typically located under the \\\\wsl$ path, such as \\\\wsl$\\Ubuntu. This path convention is specific to WSL and can lead to file locking issues when trying to access the same file from both Windows and Linux. ","date":"13-07-2022","objectID":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/:2:0","tags":null,"title":"Can't Open SQLite on WSL2 Windows 11","uri":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/#possible-cause"},{"categories":["Development"],"collections":null,"content":"Solution To resolve the SQLite file locking issue on WSL2 in Windows 11, you can follow these steps: Move the Database File to a Windows Path: To make the database file accessible without locking issues from both Windows and WSL2, it\u0026rsquo;s recommended to move the file to a Windows file system path. You can place it in a directory accessible from Windows, such as your user folder or a shared drive. Create a Symbolic Link from WSL2 to Windows Path: After moving the database file to a Windows path, you can create a symbolic link from WSL2 to this Windows path. This link allows you to access the file seamlessly from WSL2 without locking issues. Example command to create a symbolic link: ln -s /mnt/c/Users/me/Desktop/foo.db ./foo.db In this example, replace /mnt/c/Users/me/Desktop/foo.db with the actual path to your SQLite database file on the Windows side, and ./foo.db with the desired location and name for the symbolic link within your WSL2 environment. By following these steps, you\u0026rsquo;ll ensure that your SQLite database file is accessible without locking issues both from Windows and WSL2. This allows you to work with SQLite seamlessly on your Windows 11 machine using the power of WSL2 and the convenience of Windows file paths. 
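The symlink approach can also be demonstrated programmatically. The sketch below uses Python with stand-in temporary paths rather than the real /mnt/c/... and WSL paths, purely to show that reads through the link hit the same database file:

```python
import os
import sqlite3
import tempfile

# Demo of the symlink fix with stand-in paths; on a real WSL2 setup the
# target would be a Windows-side path such as /mnt/c/Users/me/Desktop/foo.db.
with tempfile.TemporaryDirectory() as windows_side, \
     tempfile.TemporaryDirectory() as wsl_side:
    target = os.path.join(windows_side, "foo.db")  # "Windows" location
    link = os.path.join(wsl_side, "foo.db")        # "WSL2" location

    # Create the database at the Windows-side path...
    with sqlite3.connect(target) as con:
        con.execute("CREATE TABLE t (x INTEGER)")
        con.execute("INSERT INTO t VALUES (42)")

    # ...then link to it from the WSL2 side (the ln -s equivalent).
    os.symlink(target, link)

    # Reads and writes through the link hit the same database file.
    with sqlite3.connect(link) as con:
        rows = con.execute("SELECT x FROM t").fetchall()

print(rows)  # [(42,)]
```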
","date":"13-07-2022","objectID":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/:3:0","tags":null,"title":"Can't Open SQLite on WSL2 Windows 11","uri":"/posts/development/cant-open-sqlite-on-wsl2-windows-11/#solution"},{"categories":["Development"],"collections":null,"content":"If you want to switch virtual desktops with mouse gestures on Windows 11, you can use third-party software called StrokeIt. StrokeIt is a mouse gesture recognition program that allows you to perform various actions by drawing gestures with your mouse. Here\u0026rsquo;s how you can set up StrokeIt to switch virtual desktops on Windows 11: Download and Install StrokeIt: You can download StrokeIt from its official website. Follow the installation instructions to set it up on your computer. Launch StrokeIt: Once installed, open StrokeIt by double-clicking its icon in the system tray or by searching for it in the Windows Start menu. Configure StrokeIt: StrokeIt provides a variety of predefined gestures, but you can create custom gestures for specific actions. To set up a gesture to switch virtual desktops: Right-click on the StrokeIt icon in the system tray and select \u0026ldquo;Options.\u0026rdquo; In the StrokeIt Configuration window, go to the \u0026ldquo;Actions\u0026rdquo; tab. Create a Gesture: Click the \u0026ldquo;Add\u0026rdquo; button to create a new gesture. In the \u0026ldquo;Action Name\u0026rdquo; field, enter a name for the gesture (e.g., \u0026ldquo;Switch Virtual Desktop\u0026rdquo;). In the \u0026ldquo;Command\u0026rdquo; field, type the keyboard shortcut for switching virtual desktops in Windows 11. The default shortcut is Windows key + Ctrl + Left arrow to move to the previous desktop and Windows key + Ctrl + Right arrow to move to the next desktop. Record a Gesture: Click the \u0026ldquo;Record\u0026rdquo; button. Draw a mouse gesture in the provided area. 
For example, you can draw a right arrow for switching to the next desktop and a left arrow for switching to the previous one. Click \u0026ldquo;Stop\u0026rdquo; when you\u0026rsquo;re done drawing the gesture. Assign the Gesture: With the gesture recorded, select it from the list in the StrokeIt Configuration window. In the \u0026ldquo;Action\u0026rdquo; dropdown menu, choose the action you just created (e.g., \u0026ldquo;Switch Virtual Desktop\u0026rdquo;). Save and Apply: Click \u0026ldquo;Apply\u0026rdquo; and then \u0026ldquo;OK\u0026rdquo; to save your settings. Now, you should be able to switch virtual desktops in Windows 11 by performing the mouse gesture you created using StrokeIt. For example, if you drew a right arrow, StrokeIt should trigger the keyboard shortcut to switch to the next virtual desktop. Please note that third-party software like StrokeIt may not always be as integrated or stable as native Windows features, so you may want to use it with caution and consider other options if you encounter any issues. ","date":"30-06-2022","objectID":"/posts/development/switch-virtual-desktop-with-mouse-gesture-on-windows-11/:0:0","tags":null,"title":"Switch Virtual Desktop With Mouse Gesture on Windows 11","uri":"/posts/development/switch-virtual-desktop-with-mouse-gesture-on-windows-11/#"},{"categories":["Software"],"collections":null,"content":"If you\u0026rsquo;re experiencing issues with your Mi Band recording incorrect \u0026ldquo;In Bed\u0026rdquo; data on the Sleep variable in the Health app on iOS, there\u0026rsquo;s a simple solution you can try. This problem can often be resolved by adjusting your iOS device\u0026rsquo;s timezone settings. 
Here\u0026rsquo;s a step-by-step guide on how to do it: ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:0:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#"},{"categories":["Software"],"collections":null,"content":"Step 1: Open the Settings App on Your iOS Device Unlock your iOS device and locate the \u0026ldquo;Settings\u0026rdquo; app on your home screen. It looks like a gear icon and is typically found on the first page of your apps. ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:1:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-1-open-the-settings-app-on-your-ios-device"},{"categories":["Software"],"collections":null,"content":"Step 2: Scroll Down and Tap on \u0026ldquo;General\u0026rdquo; In the Settings app, scroll down until you find the \u0026ldquo;General\u0026rdquo; option. 
It\u0026rsquo;s usually located below \u0026ldquo;Display \u0026amp; Brightness\u0026rdquo; and above \u0026ldquo;Privacy.\u0026rdquo; ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:2:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-2-scroll-down-and-tap-on-general"},{"categories":["Software"],"collections":null,"content":"Step 3: Tap on \u0026ldquo;Date \u0026amp; Time\u0026rdquo; Inside the \u0026ldquo;General\u0026rdquo; settings, you\u0026rsquo;ll find \u0026ldquo;Date \u0026amp; Time.\u0026rdquo; Tap on it to access your date and time settings. ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:3:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-3-tap-on-date--time"},{"categories":["Software"],"collections":null,"content":"Step 4: Disable \u0026ldquo;Set Automatically\u0026rdquo; By default, your iOS device is set to update the date and time automatically based on your location. To change this, toggle off the \u0026ldquo;Set Automatically\u0026rdquo; switch. This will allow you to manually set the timezone. ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:4:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-4-disable-set-automatically"},{"categories":["Software"],"collections":null,"content":"Step 5: Manually Set the Timezone Once \u0026ldquo;Set Automatically\u0026rdquo; is turned off, you can manually set your timezone. 
Scroll down to find the \u0026ldquo;Time Zone\u0026rdquo; option and tap on it. ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:5:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-5-manually-set-the-timezone"},{"categories":["Software"],"collections":null,"content":"Step 6: Choose the Correct Timezone In the \u0026ldquo;Time Zone\u0026rdquo; settings, you\u0026rsquo;ll see a list of timezones. Scroll through the list and select the timezone that corresponds to your actual location. Make sure it\u0026rsquo;s the correct timezone for where you are, as this will affect how your Mi Band records your sleep data. ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:6:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-6-choose-the-correct-timezone"},{"categories":["Software"],"collections":null,"content":"Step 7: Return to the Health App After setting the correct timezone, exit the Settings app and open the Health app. ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:7:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-7-return-to-the-health-app"},{"categories":["Software"],"collections":null,"content":"Step 8: Check Sleep Data Now, check your sleep data in the Health app and see if the issue with incorrect \u0026ldquo;In Bed\u0026rdquo; data has been resolved. Your Mi Band should now record sleep data more accurately. 
","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:8:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-8-check-sleep-data"},{"categories":["Software"],"collections":null,"content":"Step 9: Restore Automatic Timezone Setting (Optional) If the problem is fixed and your Mi Band is recording sleep data correctly, you can choose to enable the \u0026ldquo;Set Automatically\u0026rdquo; option in the Date \u0026amp; Time settings again. This will revert your timezone settings to automatic updates based on your location. By following these steps, you should be able to address the issue of incorrect \u0026ldquo;In Bed\u0026rdquo; data recorded by your Mi Band on the Health app for iOS. Adjusting the timezone settings can help ensure that your sleep data is accurate and reflects your actual sleep patterns. ","date":"20-06-2022","objectID":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/:9:0","tags":["ios"],"title":"Mi Band Record Incorrect In Bed Data on Sleep Variable of Health iOS","uri":"/posts/software/mi-band-record-incorrect-in-bed-data-on-sleep-variable-of-health-ios/#step-9-restore-automatic-timezone-setting-optional"},{"categories":["Development"],"collections":null,"content":"If you want to access a shared folder on Windows 11 using your Windows account password, follow these steps: Open Settings: Click on the Start button (Windows logo) in the taskbar. Click on the Settings (gear-shaped) icon to open the Windows Settings app. Navigate to Accounts: In the Windows Settings app, click on the Accounts category. Change Sign-in Options: Under the Accounts section, click on Sign-in options in the left sidebar. 
Disable Windows Hello (Recommended): Look for the option that says \u0026ldquo;For improved security, only allow Windows Hello sign-in for Microsoft accounts on this device (Recommended).\u0026rdquo; If this option is enabled, click on the switch to turn it off. You may be prompted to enter your current password to make this change. Access the Shared Folder: After disabling Windows Hello, you can now access the shared folder. Open File Explorer by pressing Win + E or by clicking on the folder icon in the taskbar. Connect to the Shared Folder: In File Explorer, click on the Network tab on the left sidebar. This will display a list of available network devices and shared folders. Locate the shared folder you want to access. It should appear under the list of network devices or under \u0026ldquo;Network Locations.\u0026rdquo; Login with Your Windows Account Password: Double-click on the shared folder you want to access. If prompted for credentials, enter your Windows account username and password. Make sure to specify your computer\u0026rsquo;s name or IP address as the domain if required (e.g., COMPUTERNAME\\YourUsername or IP_Address\\YourUsername). If you want Windows to remember your credentials for future access, you can check the \u0026ldquo;Remember my credentials\u0026rdquo; or \u0026ldquo;Remember my password\u0026rdquo; option depending on the version of Windows 11. Access the Shared Folder: Once you\u0026rsquo;ve successfully entered your credentials, you should be able to access the shared folder and its contents. That\u0026rsquo;s it! You\u0026rsquo;ve now logged into a shared folder on Windows 11 using your Windows account password. Remember to keep your Windows account password secure, as it grants access to your computer and shared resources on the network. 
","date":"15-06-2022","objectID":"/posts/development/how-to-login-to-a-shared-folder-with-your-windows-account-on-windows-11/:0:0","tags":null,"title":"How to Login to a Shared Folder with Your Windows Account on Windows 11","uri":"/posts/development/how-to-login-to-a-shared-folder-with-your-windows-account-on-windows-11/#"},{"categories":["Development"],"collections":null,"content":"If you want to see the currently logged-in users on a Windows 10 or Windows 11 system, you can use the Command Prompt and the query user command. Here\u0026rsquo;s a step-by-step guide on how to do it: Open Command Prompt: Press Win + X and select \u0026ldquo;Windows Terminal\u0026rdquo; or \u0026ldquo;Command Prompt\u0026rdquo; from the menu. You can also simply search for \u0026ldquo;Command Prompt\u0026rdquo; in the Windows search bar and open it. Run query user Command: In the Command Prompt window, type the following command and press Enter: query userThis command will display a list of all currently logged-in users on your Windows system. Review the List: After running the command, you will see a table with information about the logged-in users. The table typically includes details such as the username, session name, ID, and state. Here\u0026rsquo;s an example of what the output might look like: USERNAME SESSIONNAME ID STATE IDLE TIME LOGON TIME johndoe console 1 Active . 8/30/2023 1:23 PM jane.smith rdp-tcp#0 2 Active . 8/30/2023 9:45 AMIn this example, there are two logged-in users: \u0026ldquo;johndoe\u0026rdquo; and \u0026ldquo;jane.smith,\u0026rdquo; each with their session information. That\u0026rsquo;s it! You\u0026rsquo;ve successfully used the query user command to see the currently logged-in users on your Windows 10 or Windows 11 computer. For more detailed information and screenshots, you can refer to the source article. 
","date":"17-05-2022","objectID":"/posts/development/how-to-see-currently-logged-in-users-in-windows-10-11/:0:0","tags":null,"title":"How to See Currently Logged in Users in Windows 10 - 11","uri":"/posts/development/how-to-see-currently-logged-in-users-in-windows-10-11/#"},{"categories":["Development"],"collections":null,"content":"In this article, we will walk through the creation of an optimized Dockerfile for building and running a Go application. This Dockerfile will focus on building only the binary of the Go application, resulting in a smaller and more efficient Docker image. ## Dockerfile for Building a Go Application ```Dockerfile # Stage 1: Build the Go Binary FROM golang:1.17 as builder # Set the working directory inside the container WORKDIR /app # Copy the Go application source code into the container COPY . . # Set environment variables ENV CGO_ENABLED=0 # Fetch dependencies and build the Go binary RUN go get -d -v ./... RUN go build -o /tmp/api-server . # Stage 2: Create a minimal runtime image FROM scratch # Copy the binary from the builder stage into the minimal image COPY --from=builder /tmp/api-server /usr/bin/api-server # Define the command to run when the container starts CMD [\u0026#34;api-server\u0026#34;, \u0026#34;start\u0026#34;] In this Dockerfile, we use a multi-stage build to optimize the final Docker image. Here\u0026rsquo;s a breakdown of what each section does: Stage 1 (builder): We start from the official Golang image, specifically version 1.17. Set the working directory inside the container to /app. Copy the Go application source code from your local directory into the container. Set the CGO_ENABLED environment variable to 0 to build a statically linked binary. Fetch the project\u0026rsquo;s dependencies using go get. Build the Go binary and place it in /tmp/api-server. Stage 2 (minimal runtime image): We use the scratch image as the base image. 
This image is essentially empty, which makes our final image very small and lightweight. Copy the binary (api-server) from the builder stage into /usr/bin/api-server in the final image. Define the default command to execute when the container starts, which is [\u0026quot;api-server\u0026quot;, \u0026quot;start\u0026quot;]. This Dockerfile optimizes the final Docker image size by separating the build environment from the runtime environment. It results in a minimal Docker image that contains only the Go binary and its necessary dependencies, making it efficient and suitable for production deployment. Remember to replace the COPY . . line in the Dockerfile with the appropriate path to your Go application source code, as specified in your project structure. ","date":"10-05-2022","objectID":"/posts/development/building-a-go-application-with-docker-optimized-dockerfile/:0:0","tags":null,"title":"Building a Go Application with Docker: Optimized Dockerfile","uri":"/posts/development/building-a-go-application-with-docker-optimized-dockerfile/#"},{"categories":["Development"],"collections":null,"content":"In a React application, it\u0026rsquo;s common to want to scroll to the top of the page instantly after a router transition, such as when navigating to a new page or route. This provides a smooth user experience and ensures that the user starts reading the new content from the top of the page. To achieve this, you can use JavaScript\u0026rsquo;s window.scrollTo() method with the behavior: 'instant' option. In this article, we\u0026rsquo;ll explore how to implement this behavior in a React application. 
","date":"30-04-2022","objectID":"/posts/development/react-scroll-to-top-instantly-after-router-transition/:0:0","tags":null,"title":"React: Scroll to Top Instantly After Router Transition","uri":"/posts/development/react-scroll-to-top-instantly-after-router-transition/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you proceed, make sure you have a React application set up with React Router. If you haven\u0026rsquo;t already, you can create a new React application using Create React App or any other preferred method. ","date":"30-04-2022","objectID":"/posts/development/react-scroll-to-top-instantly-after-router-transition/:1:0","tags":null,"title":"React: Scroll to Top Instantly After Router Transition","uri":"/posts/development/react-scroll-to-top-instantly-after-router-transition/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Implementation To scroll to the top of the page instantly after a router transition, you\u0026rsquo;ll need to perform the following steps: Import the necessary modules. Create a custom scroll-to-top component. Use this component in your application to trigger scrolling when the route changes. ","date":"30-04-2022","objectID":"/posts/development/react-scroll-to-top-instantly-after-router-transition/:2:0","tags":null,"title":"React: Scroll to Top Instantly After Router Transition","uri":"/posts/development/react-scroll-to-top-instantly-after-router-transition/#implementation"},{"categories":["Development"],"collections":null,"content":"Step 1: Import the necessary modules First, import the required modules for React Router and React itself. 
import React, { useEffect } from \u0026#39;react\u0026#39;; import { useLocation } from \u0026#39;react-router-dom\u0026#39;; ","date":"30-04-2022","objectID":"/posts/development/react-scroll-to-top-instantly-after-router-transition/:2:1","tags":null,"title":"React: Scroll to Top Instantly After Router Transition","uri":"/posts/development/react-scroll-to-top-instantly-after-router-transition/#step-1-import-the-necessary-modules"},{"categories":["Development"],"collections":null,"content":"Step 2: Create a custom scroll-to-top component Next, create a custom component called ScrollToTop. This component will listen for route changes using the useLocation hook and scroll to the top of the page when the route changes. function ScrollToTop() { const { pathname } = useLocation(); useEffect(() =\u0026gt; { // Scroll to the top of the page with instant behavior window.scrollTo({ top: 0, left: 0, behavior: \u0026#39;instant\u0026#39; }); }, [pathname]); return null; } In this component: We use the useLocation hook from React Router to get the current location (i.e., the current route\u0026rsquo;s pathname). We use the useEffect hook to watch for changes in the pathname variable. When the pathname changes (i.e., when the route changes), we use window.scrollTo() to scroll to the top of the page instantly. ","date":"30-04-2022","objectID":"/posts/development/react-scroll-to-top-instantly-after-router-transition/:2:2","tags":null,"title":"React: Scroll to Top Instantly After Router Transition","uri":"/posts/development/react-scroll-to-top-instantly-after-router-transition/#step-2-create-a-custom-scroll-to-top-component"},{"categories":["Development"],"collections":null,"content":"Step 3: Use the ScrollToTop component Now that you have the ScrollToTop component, you need to include it in your application. Place it at the top level of your routing configuration to ensure it works for all routes. 
Here\u0026rsquo;s an example of how to use it with React Router: import { BrowserRouter as Router, Route, Switch } from \u0026#39;react-router-dom\u0026#39;; function App() { return ( \u0026lt;Router\u0026gt; {/* Include the ScrollToTop component */} \u0026lt;ScrollToTop /\u0026gt; \u0026lt;Switch\u0026gt; \u0026lt;Route exact path=\u0026#34;/\u0026#34; component={Home} /\u0026gt; \u0026lt;Route path=\u0026#34;/about\u0026#34; component={About} /\u0026gt; \u0026lt;Route path=\u0026#34;/contact\u0026#34; component={Contact} /\u0026gt; {/* Add more routes as needed */} \u0026lt;/Switch\u0026gt; \u0026lt;/Router\u0026gt; ); } export default App; By including the ScrollToTop component at the top level of your routes, it will automatically scroll to the top of the page whenever the route changes. ","date":"30-04-2022","objectID":"/posts/development/react-scroll-to-top-instantly-after-router-transition/:2:3","tags":null,"title":"React: Scroll to Top Instantly After Router Transition","uri":"/posts/development/react-scroll-to-top-instantly-after-router-transition/#step-3-use-the-scrolltotop-component"},{"categories":["Development"],"collections":null,"content":"Conclusion In this article, you\u0026rsquo;ve learned how to implement instant scrolling to the top of the page after a router transition in a React application. This enhances the user experience and ensures that users start reading new content from the top of the page when navigating between routes. ","date":"30-04-2022","objectID":"/posts/development/react-scroll-to-top-instantly-after-router-transition/:3:0","tags":null,"title":"React: Scroll to Top Instantly After Router Transition","uri":"/posts/development/react-scroll-to-top-instantly-after-router-transition/#conclusion"},{"categories":["Development"],"collections":null,"content":"In Git, there may be situations where you need to update or correct author information on all commits, such as changing email addresses or fixing incorrect names. 
This guide will explain how to rename authors on all commits using the git-filter-repo tool. ","date":"22-04-2022","objectID":"/posts/development/how-to-rename-author-on-all-commits-in-git/:0:0","tags":["git"],"title":"How to Rename Author on All Commits in Git","uri":"/posts/development/how-to-rename-author-on-all-commits-in-git/#"},{"categories":["Development"],"collections":null,"content":"Requirements git-filter-repo: You can install it from the official repository at https://github.com/newren/git-filter-repo. ","date":"22-04-2022","objectID":"/posts/development/how-to-rename-author-on-all-commits-in-git/:1:0","tags":["git"],"title":"How to Rename Author on All Commits in Git","uri":"/posts/development/how-to-rename-author-on-all-commits-in-git/#requirements"},{"categories":["Development"],"collections":null,"content":"Procedure ","date":"22-04-2022","objectID":"/posts/development/how-to-rename-author-on-all-commits-in-git/:2:0","tags":["git"],"title":"How to Rename Author on All Commits in Git","uri":"/posts/development/how-to-rename-author-on-all-commits-in-git/#procedure"},{"categories":["Development"],"collections":null,"content":"Step 1: Create a .mailmap File Create a file named .mailmap in the root directory of your Git repository. This file will map the old author information to the new author information. The correct format for each entry is as follows: Target Name \u0026lt;target@example.com\u0026gt; Original Name \u0026lt;origin@example.com\u0026gt;Replace \u0026ldquo;Target Name\u0026rdquo; with the desired new author\u0026rsquo;s name and \u0026ldquo;target@example.com\u0026rdquo; with their new email address. Similarly, replace \u0026ldquo;Original Name\u0026rdquo; with the original author\u0026rsquo;s name and \u0026ldquo;origin@example.com\u0026rdquo; with their original email address. 
","date":"22-04-2022","objectID":"/posts/development/how-to-rename-author-on-all-commits-in-git/:2:1","tags":["git"],"title":"How to Rename Author on All Commits in Git","uri":"/posts/development/how-to-rename-author-on-all-commits-in-git/#step-1-create-a-mailmap-file"},{"categories":["Development"],"collections":null,"content":"Step 2: Run the git-filter-repo Command Execute the following command in your terminal: git-filter-repo --mailmap .mailmap This command applies the changes specified in the .mailmap file to all commits in your Git repository, effectively renaming the authors. ","date":"22-04-2022","objectID":"/posts/development/how-to-rename-author-on-all-commits-in-git/:2:2","tags":["git"],"title":"How to Rename Author on All Commits in Git","uri":"/posts/development/how-to-rename-author-on-all-commits-in-git/#step-2-run-the-git-filter-repo-command"},{"categories":["Development"],"collections":null,"content":"Step 3: Add a Remote URL (Optional) If your repository doesn\u0026rsquo;t have a remote URL set, you can add it using the following command: git remote add origin [REMOTE_URL] Replace [REMOTE_URL] with the URL of your remote repository. ","date":"22-04-2022","objectID":"/posts/development/how-to-rename-author-on-all-commits-in-git/:2:3","tags":["git"],"title":"How to Rename Author on All Commits in Git","uri":"/posts/development/how-to-rename-author-on-all-commits-in-git/#step-3-add-a-remote-url-optional"},{"categories":["Development"],"collections":null,"content":"Conclusion By following these steps, you can successfully rename author information on all commits in your Git repository using the git-filter-repo tool. This method ensures consistency and accuracy of authorship information across your commits. Note: It\u0026rsquo;s always recommended to back up your repository before making any significant changes like author renaming. 
Additionally, it\u0026rsquo;s crucial to communicate and collaborate with your team members when making such modifications to maintain transparency and avoid any unintended consequences. ","date":"22-04-2022","objectID":"/posts/development/how-to-rename-author-on-all-commits-in-git/:3:0","tags":["git"],"title":"How to Rename Author on All Commits in Git","uri":"/posts/development/how-to-rename-author-on-all-commits-in-git/#conclusion"},{"categories":["Development"],"collections":null,"content":"Introduction A well-structured commit message is crucial for maintaining a clean and readable project history. The Conventional Commits specification provides a standard format for commit messages, making it easier to generate changelogs, automate versioning, and improve collaboration. This document outlines the naming conventions and structure for Git commit messages. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:1:0","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#introduction"},{"categories":["Development"],"collections":null,"content":"Commit Message Format A commit message should follow this format: type(scope)!: short description longer commit body (optional) BREAKING CHANGE: description (optional)","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:2:0","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#commit-message-format"},{"categories":["Development"],"collections":null,"content":"Examples Commit with a description and a breaking change footer feat: allow provided config object to extend other configs BREAKING CHANGE: `extends` key in config file is now used for extending other config filesCommit with ! indicating a breaking change feat!: send an email to the customer when a product is shippedCommit with a scope and ! 
for a breaking change feat(api)!: send an email to the customer when a product is shippedCommit with both ! and BREAKING CHANGE footer chore!: drop support for Node 6 BREAKING CHANGE: use JavaScript features not available in Node 6.Commit with no body docs: correct spelling of CHANGELOGCommit with a scope feat(lang): add Polish languageCommit with a multi-paragraph body and multiple footers fix: prevent racing of requests Introduce a request id and a reference to latest request. Dismiss incoming responses other than from the latest request. Remove timeouts which were used to mitigate the racing issue but are obsolete now. Reviewed-by: Z Refs: #123","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:2:1","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#examples"},{"categories":["Development"],"collections":null,"content":"Specification The following guidelines apply: Type Prefix: Commits MUST be prefixed with a type (feat, fix, etc.), an OPTIONAL scope, an OPTIONAL !, and a REQUIRED colon followed by a space. Feature Commits: The feat type MUST be used for new features. Bug Fix Commits: The fix type MUST be used for bug fixes. Scope: A scope MAY be provided in parentheses, describing the code section affected. Description: A short summary MUST follow the prefix. Body: A longer commit body MAY be included, starting one blank line after the description. Footers: Footers MAY be included one blank line after the body, using key-value pairs. Breaking Changes: Breaking changes MUST be indicated with ! in the type or in a BREAKING CHANGE footer. Consistency: Types and commit structures MUST be used consistently. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:3:0","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#specification"},{"categories":["Development"],"collections":null,"content":"Benefits of Conventional Commits Automated Changelogs: Enables automatic changelog generation. Semantic Versioning: Helps determine version bumps (MAJOR, MINOR, PATCH). Better Communication: Improves clarity for contributors and maintainers. Automated Processes: Can trigger build and release pipelines. 
","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:4:0","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#benefits-of-conventional-commits"},{"categories":["Development"],"collections":null,"content":"FAQ ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:5:0","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#faq"},{"categories":["Development"],"collections":null,"content":"How should commits be handled during early development? Treat commits as if the product is already released to ensure consistency and clarity. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:5:1","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#how-should-commits-be-handled-during-early-development"},{"categories":["Development"],"collections":null,"content":"What if a commit conforms to multiple types? Make multiple commits whenever possible to maintain clarity. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:5:2","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#what-if-a-commit-conforms-to-multiple-types"},{"categories":["Development"],"collections":null,"content":"How does this relate to SemVer? fix commits correspond to PATCH releases. feat commits correspond to MINOR releases. BREAKING CHANGE commits correspond to MAJOR releases. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:5:3","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#how-does-this-relate-to-semver"},{"categories":["Development"],"collections":null,"content":"What if I use the wrong commit type? Before merging, use git rebase -i to edit the commit history. 
After release, follow your project\u0026rsquo;s revision process. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:5:4","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#what-if-i-use-the-wrong-commit-type"},{"categories":["Development"],"collections":null,"content":"Should all contributors follow this standard? Not necessarily. A lead maintainer can clean up commit messages before merging. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:5:5","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#should-all-contributors-follow-this-standard"},{"categories":["Development"],"collections":null,"content":"How are revert commits handled? Use the revert type and reference the commit SHAs being reverted: revert: let us never again speak of the noodle incident Refs: 676104e, a215868","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:5:6","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#how-are-revert-commits-handled"},{"categories":["Development"],"collections":null,"content":"Conclusion Following the Conventional Commits specification ensures a structured commit history, aiding in automation, collaboration, and maintainability. Adopting this practice leads to more organized and meaningful commit logs, benefiting both developers and project maintainers. ","date":"01-04-2022","objectID":"/posts/development/git-commit-message-naming/:6:0","tags":["git"],"title":"GIT Commit Message Naming","uri":"/posts/development/git-commit-message-naming/#conclusion"},{"categories":["Development"],"collections":null,"content":"GitLab Flow is a set of best practices for using Git with GitLab, combining elements of Git Flow and GitHub Flow to support various workflows. 
It emphasizes the use of feature branches for development, with integration with GitLab’s CI/CD pipeline for continuous testing and deployment. ","date":"28-03-2022","objectID":"/posts/development/git-gitlab-flow/:0:0","tags":["git","gitlab"],"title":"Git Gitlab Flow","uri":"/posts/development/git-gitlab-flow/#"},{"categories":["Development"],"collections":null,"content":"Production Branch flowchart TB a--\u003eb--\u003ec--\u003ed--\u003ee f--\u003eg c--deployment--\u003eg g--\u003eh a[development] b[development] c[development] d[development] e[development] f[production] g[production] h[production] ","date":"28-03-2022","objectID":"/posts/development/git-gitlab-flow/:1:0","tags":["git","gitlab"],"title":"Git Gitlab Flow","uri":"/posts/development/git-gitlab-flow/#production-branch"},{"categories":["Development"],"collections":null,"content":"Environment Branches flowchart LR a--\u003eb--\u003ec--\u003ed e--\u003ef--\u003eg--\u003eh i--\u003ej--\u003ek a--deploy to\\npre-prod--\u003ee c--deploy to\\npre-prod--\u003eg e--production\\ndeployment--\u003ej a[staging] b[staging] c[staging] d[staging] e[pre-prod] f[pre-prod] g[pre-prod] h[pre-prod] i[production] j[production] k[production] ","date":"28-03-2022","objectID":"/posts/development/git-gitlab-flow/:2:0","tags":["git","gitlab"],"title":"Git Gitlab Flow","uri":"/posts/development/git-gitlab-flow/#environtment-branches"},{"categories":["Development"],"collections":null,"content":"Release Branches flowchart LR a--\u003eb--\u003ec--\u003ed--\u003ee a--\u003ef--\u003eg c-.-cherry-pick-.-\u003eg d--\u003ei a[main] b[main] c[main] d[main] e[main] f[2.3-stable] g[2.3-stable] i[2.4-stable] ","date":"28-03-2022","objectID":"/posts/development/git-gitlab-flow/:3:0","tags":["git","gitlab"],"title":"Git Gitlab 
Flow","uri":"/posts/development/git-gitlab-flow/#release-branches"},{"categories":["Development"],"collections":null,"content":"Business Projects Business projects typically include the following issue types: ","date":"28-03-2022","objectID":"/posts/development/issue-type/:1:0","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#business-projects"},{"categories":["Development"],"collections":null,"content":"Task A task represents a unit of work that needs to be completed within a business project. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:1:1","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#task"},{"categories":["Development"],"collections":null,"content":"Subtask A subtask is a smaller piece of work required to complete a task. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:1:2","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#subtask"},{"categories":["Development"],"collections":null,"content":"Software Projects Software projects include various issue types to manage development work efficiently: ","date":"28-03-2022","objectID":"/posts/development/issue-type/:2:0","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#software-projects"},{"categories":["Development"],"collections":null,"content":"Epic An epic is a large user story that needs to be broken down. It groups together bugs, stories, and tasks to track the progress of a larger initiative. In agile development, epics usually represent significant deliverables, such as new features or experiences in the software being developed. 
","date":"28-03-2022","objectID":"/posts/development/issue-type/:2:1","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#epic"},{"categories":["Development"],"collections":null,"content":"Bug A bug is a defect that impairs or prevents the proper function of a product. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:2:2","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#bug"},{"categories":["Development"],"collections":null,"content":"Story A user story represents the smallest unit of work that needs to be completed, often written from the perspective of an end user. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:2:3","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#story"},{"categories":["Development"],"collections":null,"content":"Task A task represents a specific piece of work that must be done. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:2:4","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#task-1"},{"categories":["Development"],"collections":null,"content":"Subtask A subtask is a smaller unit of work required to complete a task. Subtasks can be used to break down any standard issue types, including bugs, stories, or tasks. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:2:5","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#subtask-1"},{"categories":["Development"],"collections":null,"content":"Service Projects Service projects include various issue types for managing IT and customer service requests: ","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:0","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#service-projects"},{"categories":["Development"],"collections":null,"content":"Change A request for a change in the current IT profile. 
","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:1","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#change"},{"categories":["Development"],"collections":null,"content":"IT Help A request for assistance with an IT-related problem. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:2","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#it-help"},{"categories":["Development"],"collections":null,"content":"Incident A report of an IT service outage or incident. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:3","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#incident"},{"categories":["Development"],"collections":null,"content":"New Feature A request for a new capability or software feature. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:4","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#new-feature"},{"categories":["Development"],"collections":null,"content":"Problem An issue type used to investigate and report the root cause of multiple incidents. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:5","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#problem"},{"categories":["Development"],"collections":null,"content":"Service Request A request for help from an internal or customer service team. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:6","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#service-request"},{"categories":["Development"],"collections":null,"content":"Service Request with Approval A service request that requires approval from a manager or board before proceeding. 
","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:7","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#service-request-with-approval"},{"categories":["Development"],"collections":null,"content":"Support A request for assistance with customer support issues. By categorizing issues effectively, businesses, software teams, and service organizations can manage their workflows more efficiently and ensure smooth project execution. ","date":"28-03-2022","objectID":"/posts/development/issue-type/:3:8","tags":["project management"],"title":"Issue Type","uri":"/posts/development/issue-type/#support"},{"categories":["Development"],"collections":null,"content":"Effective user role naming is essential for managing access control and permissions in systems, applications, and organizations. Well-structured role names enhance clarity, consistency, scalability, and security, ensuring that responsibilities and access levels are accurately represented. ","date":"27-03-2022","objectID":"/posts/development/user-roles-naming/:0:0","tags":[],"title":"User Roles Naming","uri":"/posts/development/user-roles-naming/#"},{"categories":["Development"],"collections":null,"content":"Key Principles Clarity – Role names should be self-explanatory and easily understood by all stakeholders. Consistency – Maintain uniform naming conventions across all roles to prevent confusion. Scalability – Design role names that accommodate future expansions or modifications. Security – Ensure that role names reflect the appropriate access levels without exposing sensitive information. 
","date":"27-03-2022","objectID":"/posts/development/user-roles-naming/:1:0","tags":[],"title":"User Roles Naming","uri":"/posts/development/user-roles-naming/#key-principles"},{"categories":["Development"],"collections":null,"content":"Example Role Hierarchy flowchart LR a[Site Visitor] --\u003e b[Former Member] a --\u003e c[Registered Member] a --\u003e d[Known Visitor] a --\u003e e[Unknown Visitor] c --\u003e f[Premium Member] c --\u003e g[Trial Member] flowchart LR a[Site Visitor] --\u003e b[Former Member] a --\u003e c[Registered Member] a --\u003e d[Known Visitor] a --\u003e e[Unknown Visitor] c --\u003e f[Premium Member] c --\u003e g[Trial Member] flowchart LR a[Site Visitor] --\u003e b[Former Member] a --\u003e c[Registered Member] a --\u003e d[Known Visitor] a --\u003e e[Unknown Visitor] c --\u003e f[Premium Member] c --\u003e g[Trial Member] flowchart LR a[Site Visitor] --\u003e b[Former Member] a --\u003e c[Registered Member] a --\u003e d[Known Visitor] a --\u003e e[Unknown Visitor] c --\u003e f[Premium Member] c --\u003e g[Trial Member] ","date":"27-03-2022","objectID":"/posts/development/user-roles-naming/:2:0","tags":[],"title":"User Roles Naming","uri":"/posts/development/user-roles-naming/#example-role-hierarchy"},{"categories":["Development"],"collections":null,"content":"Explanation Site Visitor: A general user without an account. Former Member: A user who previously had an account but is now inactive. Registered Member: An active user with a registered account. Premium Member: A user with a paid or upgraded membership. Trial Member: A user on a temporary trial plan. Known Visitor: A non-registered user recognized through tracking mechanisms. Unknown Visitor: A completely anonymous user. 
","date":"27-03-2022","objectID":"/posts/development/user-roles-naming/:2:1","tags":[],"title":"User Roles Naming","uri":"/posts/development/user-roles-naming/#explanation"},{"categories":["Development"],"collections":null,"content":"Conclusion Adopting structured and meaningful role names improves system usability, security, and scalability. Organizations should define roles carefully to align with business needs while maintaining clear distinctions in access levels. ","date":"27-03-2022","objectID":"/posts/development/user-roles-naming/:2:2","tags":[],"title":"User Roles Naming","uri":"/posts/development/user-roles-naming/#conclusion"},{"categories":["Development"],"collections":null,"content":"In Bash scripting, there are various ways to retrieve yesterday\u0026rsquo;s date on both Mac and Ubuntu systems. Below, we\u0026rsquo;ll demonstrate two different methods for each operating system. ","date":"04-03-2022","objectID":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/:0:0","tags":null,"title":"How to Get Yesterday's Date in Bash on Mac and Ubuntu","uri":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/#"},{"categories":["Development"],"collections":null,"content":"Mac: ","date":"04-03-2022","objectID":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/:1:0","tags":null,"title":"How to Get Yesterday's Date in Bash on Mac and Ubuntu","uri":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/#mac"},{"categories":["Development"],"collections":null,"content":"Method 1: Using date with -v option yesterday=$(date -v-1d +%F) echo \u0026#34;Yesterday\u0026#39;s date on Mac: $yesterday\u0026#34; In this method, we use the -v option with the date command to subtract 1 day from the current date and format it as %F, which gives the date in YYYY-MM-DD format. 
","date":"04-03-2022","objectID":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/:1:1","tags":null,"title":"How to Get Yesterday's Date in Bash on Mac and Ubuntu","uri":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/#method-1-using-date-with--v-option"},{"categories":["Development"],"collections":null,"content":"Method 2: Using date with -j option yesterday=$(date -j -f \u0026#34;%Y-%m-%d\u0026#34; -v-1d $(date +%Y-%m-%d) +%Y-%m-%d) echo \u0026#34;Yesterday\u0026#39;s date on Mac: $yesterday\u0026#34; This method involves a bit more complexity but allows you to specify the input date format (%Y-%m-%d) and retrieve yesterday\u0026rsquo;s date in the same format. ","date":"04-03-2022","objectID":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/:1:2","tags":null,"title":"How to Get Yesterday's Date in Bash on Mac and Ubuntu","uri":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/#method-2-using-date-with--j-option"},{"categories":["Development"],"collections":null,"content":"Ubuntu: ","date":"04-03-2022","objectID":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/:2:0","tags":null,"title":"How to Get Yesterday's Date in Bash on Mac and Ubuntu","uri":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/#ubuntu"},{"categories":["Development"],"collections":null,"content":"Method 1: Using date with \u0026ldquo;yesterday\u0026rdquo; string yesterday=$(date -d \u0026#34;yesterday\u0026#34; \u0026#39;+%Y-%m-%d\u0026#39;) echo \u0026#34;Yesterday\u0026#39;s date on Ubuntu: $yesterday\u0026#34; On Ubuntu, you can use the -d option with the \u0026ldquo;yesterday\u0026rdquo; string to easily obtain yesterday\u0026rsquo;s date in the YYYY-MM-DD format. 
","date":"04-03-2022","objectID":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/:2:1","tags":null,"title":"How to Get Yesterday's Date in Bash on Mac and Ubuntu","uri":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/#method-1-using-date-with-yesterday-string"},{"categories":["Development"],"collections":null,"content":"Method 2: Using date with specific time yesterday=$(date -d \u0026#34;yesterday 13:00\u0026#34; \u0026#39;+%Y-%m-%d\u0026#39;) echo \u0026#34;Yesterday\u0026#39;s date on Ubuntu (at 13:00): $yesterday\u0026#34; This method allows you to get yesterday\u0026rsquo;s date at a specific time, in this case, 13:00. You can adjust the time according to your requirements. Choose the method that best suits your needs and system, and you can easily retrieve yesterday\u0026rsquo;s date in Bash on both Mac and Ubuntu. ","date":"04-03-2022","objectID":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/:2:2","tags":null,"title":"How to Get Yesterday's Date in Bash on Mac and Ubuntu","uri":"/posts/development/how-to-get-yesterdays-date-in-bash-on-mac-and-ubuntu/#method-2-using-date-with-specific-time"},{"categories":["Development"],"collections":null,"content":"It looks like you\u0026rsquo;re trying to use the eval command in a Bash script to export variables loaded from a dotenv file using the shdotenv tool. This is a common practice for setting environment variables from a configuration file. Here\u0026rsquo;s an explanation of what this code does: shdotenv is likely a command or script that reads a dotenv file (usually named .env) and sets environment variables based on the key-value pairs defined in that file. Dotenv files are commonly used to store configuration variables for applications. $(shdotenv) is a subshell command substitution. It runs the shdotenv command and captures its standard output. 
eval is a Bash built-in command that evaluates and executes the command passed to it as a string. In this case, it\u0026rsquo;s used to execute the output of shdotenv as if it were a series of Bash commands. So, when you run eval $(shdotenv), it effectively loads the environment variables from your dotenv file and exports them into the current shell session. Here\u0026rsquo;s an example of what a .env file might look like: DATABASE_URL=mysql://username:password@localhost/mydatabase SECRET_KEY=mysecretkey DEBUG=true Running eval $(shdotenv) with the above .env file would set the DATABASE_URL, SECRET_KEY, and DEBUG environment variables in your shell session based on the values provided in the file. Please note that using eval to set environment variables can be powerful but should be used with caution, especially if the contents of the dotenv file are not trusted, as it can execute arbitrary code. Additionally, make sure the shdotenv tool is installed and properly configured in your environment for this code to work. ","date":"04-03-2022","objectID":"/posts/development/shell-bash-export-variable/:0:0","tags":null,"title":"Shell Bash Export variable","uri":"/posts/development/shell-bash-export-variable/#"},{"categories":["Development"],"collections":null,"content":"In Go (Golang), the interface{} type is an empty interface that can hold values of any type. It is often used when you need to work with values of unknown or varied types. Type assertions allow you to extract and work with the underlying concrete type of a value stored in an interface{}. Here, we\u0026rsquo;ll explore how to use interface{} and type assertions in Go. 
","date":"26-02-2022","objectID":"/posts/development/golang-interface-and-type-assertions/:0:0","tags":null,"title":"Golang Interface{} and Type Assertions","uri":"/posts/development/golang-interface-and-type-assertions/#"},{"categories":["Development"],"collections":null,"content":"Storing Different Types in an interface{} You can store values of different types in an interface{}. Here\u0026rsquo;s an example: var data = map[string]interface{}{ \u0026#34;nama\u0026#34;: \u0026#34;john wick\u0026#34;, \u0026#34;grade\u0026#34;: 2, \u0026#34;height\u0026#34;: 156.5, \u0026#34;isMale\u0026#34;: true, \u0026#34;hobbies\u0026#34;: []string{\u0026#34;eating\u0026#34;, \u0026#34;sleeping\u0026#34;}, } In this map, we have values of various types, such as string, int, float64, bool, and a slice of strings, all stored in the interface{} type. ","date":"26-02-2022","objectID":"/posts/development/golang-interface-and-type-assertions/:0:1","tags":null,"title":"Golang Interface{} and Type Assertions","uri":"/posts/development/golang-interface-and-type-assertions/#storing-different-types-in-an-interface"},{"categories":["Development"],"collections":null,"content":"Extracting Values using Type Assertions You can use type assertions to extract and work with values of known types from an interface{}. Here\u0026rsquo;s how you can do it for the values in your data map: fmt.Println(data[\u0026#34;nama\u0026#34;].(string)) fmt.Println(data[\u0026#34;grade\u0026#34;].(int)) fmt.Println(data[\u0026#34;height\u0026#34;].(float64)) fmt.Println(data[\u0026#34;isMale\u0026#34;].(bool)) fmt.Println(data[\u0026#34;hobbies\u0026#34;].([]string)) In each line, we use type assertions to specify the expected type and extract the value from the interface{}. If the actual type doesn\u0026rsquo;t match the asserted type, it will panic at runtime. 
","date":"26-02-2022","objectID":"/posts/development/golang-interface-and-type-assertions/:0:2","tags":null,"title":"Golang Interface{} and Type Assertions","uri":"/posts/development/golang-interface-and-type-assertions/#extracting-values-using-type-assertions"},{"categories":["Development"],"collections":null,"content":"Using a Type Switch for Dynamic Type Handling To handle values of unknown types dynamically, you can use a type switch. Here\u0026rsquo;s an example of how to use a type switch to print values based on their types: for _, val := range data { switch val.(type) { case string: fmt.Println(val.(string)) case int: fmt.Println(val.(int)) case float64: fmt.Println(val.(float64)) case bool: fmt.Println(val.(bool)) case []string: fmt.Println(val.([]string)) default: fmt.Println(\u0026#34;Unknown Type\u0026#34;) } } In this loop, the val.(type) expression checks the type of the value stored in val, and based on the type, it performs the appropriate action. If the type is unknown, it falls back to the default case. Remember that while using type assertions and type switches, it\u0026rsquo;s important to handle potential type mismatches or panics to ensure your code is robust. This is how you can work with interface{} and perform type assertions in Go to handle values of various types in a flexible manner. ","date":"26-02-2022","objectID":"/posts/development/golang-interface-and-type-assertions/:0:3","tags":null,"title":"Golang Interface{} and Type Assertions","uri":"/posts/development/golang-interface-and-type-assertions/#using-a-type-switch-for-dynamic-type-handling"},{"categories":["Development"],"collections":null,"content":"Introduction HTTP status codes are essential in REST APIs as they indicate the outcome of client requests. These standardized codes help clients understand whether their request was successful, requires further action, or encountered an error. Below is an overview of key HTTP status codes categorized by their respective classes. 
","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:1:0","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#introduction"},{"categories":["Development"],"collections":null,"content":"1xx: Informational Responses These status codes indicate that the request has been received and is being processed. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:2:0","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#1xx-informational-responses"},{"categories":["Development"],"collections":null,"content":"2xx: Success Responses Successful responses indicate that the request was received, understood, and accepted. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:3:0","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#2xx-success-responses"},{"categories":["Development"],"collections":null,"content":"200 (OK) The request was successful, and the server returned the requested data. The response body depends on the HTTP method: GET: Returns the requested resource. HEAD: Returns headers without a response body. POST: Returns a description of the result. TRACE: Returns the received request message. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:3:1","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#200-ok"},{"categories":["Development"],"collections":null,"content":"201 (Created) Indicates that a new resource has been created. The response includes a Location header specifying the URI of the newly created resource. 
","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:3:2","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#201-created"},{"categories":["Development"],"collections":null,"content":"202 (Accepted) Indicates that the request has been accepted for processing but is not yet complete. The response may include status information or a pointer to a status monitor. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:3:3","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#202-accepted"},{"categories":["Development"],"collections":null,"content":"204 (No Content) The request was successful, but no content is returned. Often used for PUT, POST, or DELETE requests. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:3:4","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#204-no-content"},{"categories":["Development"],"collections":null,"content":"3xx: Redirection Responses These status codes indicate that further action is required to complete the request. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:4:0","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#3xx-redirection-responses"},{"categories":["Development"],"collections":null,"content":"301 (Moved Permanently) The requested resource has been permanently moved to a new URI. The Location header contains the new URI. 
","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:4:1","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#301-moved-permanently"},{"categories":["Development"],"collections":null,"content":"302 (Found) The resource is temporarily moved. The client should use the URI in the Location header but continue using the original URI for future requests. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:4:2","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#302-found"},{"categories":["Development"],"collections":null,"content":"303 (See Other) Indicates that the response can be retrieved from another URI using a GET request. Commonly used after POST operations. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:4:3","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#303-see-other"},{"categories":["Development"],"collections":null,"content":"304 (Not Modified) Indicates that the requested resource has not changed since the last request. No response body is included, saving bandwidth. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:4:4","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#304-not-modified"},{"categories":["Development"],"collections":null,"content":"307 (Temporary Redirect) Similar to 302, but ensures the HTTP method remains unchanged during redirection. 
","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:4:5","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#307-temporary-redirect"},{"categories":["Development"],"collections":null,"content":"4xx: Client Error Responses Indicate issues with the client\u0026rsquo;s request. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:0","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#4xx-client-error-responses"},{"categories":["Development"],"collections":null,"content":"400 (Bad Request) The request is malformed or contains invalid parameters. The client must modify the request before retrying. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:1","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#400-bad-request"},{"categories":["Development"],"collections":null,"content":"401 (Unauthorized) Authentication is required, or provided credentials are invalid. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:2","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#401-unauthorized"},{"categories":["Development"],"collections":null,"content":"403 (Forbidden) The request is valid, but the client lacks necessary permissions. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:3","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#403-forbidden"},{"categories":["Development"],"collections":null,"content":"404 (Not Found) The requested resource is not found. The client can retry if the resource might be available later. 
","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:4","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#404-not-found"},{"categories":["Development"],"collections":null,"content":"405 (Method Not Allowed) The resource does not support the HTTP method used. The response includes an Allow header listing supported methods. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:5","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#405-method-not-allowed"},{"categories":["Development"],"collections":null,"content":"406 (Not Acceptable) Indicates that the server cannot produce a response matching the client’s Accept header. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:6","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#406-not-acceptable"},{"categories":["Development"],"collections":null,"content":"412 (Precondition Failed) Indicates that one or more preconditions in the request headers were not met. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:7","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#412-precondition-failed"},{"categories":["Development"],"collections":null,"content":"415 (Unsupported Media Type) The server does not support the request\u0026rsquo;s Content-Type. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:5:8","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#415-unsupported-media-type"},{"categories":["Development"],"collections":null,"content":"5xx: Server Error Responses Indicate problems on the server side. 
","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:6:0","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#5xx-server-error-responses"},{"categories":["Development"],"collections":null,"content":"500 (Internal Server Error) A generic error indicating an unexpected server issue. Clients can retry the request. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:6:1","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#500-internal-server-error"},{"categories":["Development"],"collections":null,"content":"501 (Not Implemented) Indicates that the server does not support the requested functionality. Understanding these status codes is essential for effectively working with REST APIs, ensuring smooth communication between clients and servers. ","date":"25-02-2022","objectID":"/posts/development/rest-api-status-code-example/:6:2","tags":["rest","api","json"],"title":"REST API Status Code Example","uri":"/posts/development/rest-api-status-code-example/#501-not-implemented"},{"categories":["Development"],"collections":null,"content":"Overview This document provides an example of a successful response from a REST API when creating a new resource. The response follows a structured format, including essential details such as the resource ID, attributes, timestamps, and metadata. 
","date":"22-02-2022","objectID":"/posts/development/rest-api-success-response-example-/:1:0","tags":["api","json","rest"],"title":"REST API Success Response Example","uri":"/posts/development/rest-api-success-response-example-/#overview"},{"categories":["Development"],"collections":null,"content":"Response Breakdown ","date":"22-02-2022","objectID":"/posts/development/rest-api-success-response-example-/:2:0","tags":["api","json","rest"],"title":"REST API Success Response Example","uri":"/posts/development/rest-api-success-response-example-/#response-breakdown"},{"categories":["Development"],"collections":null,"content":"1. Data Section The data object contains the primary resource details. ID: 1048 – A unique identifier for the newly created resource. Attributes: Contains specific information about the resource. Name: \u0026quot;minima-eaque-et\u0026quot; – The assigned name of the resource. Title: null – No title was provided during creation. Description: null – No description was provided. Created At: \u0026quot;2022-02-22T15:05:46.855Z\u0026quot; – The timestamp when the resource was created. Updated At: \u0026quot;2022-02-22T15:05:46.855Z\u0026quot; – The timestamp when the resource was last modified. Published At: \u0026quot;2022-02-22T15:05:46.853Z\u0026quot; – The timestamp when the resource was published. ","date":"22-02-2022","objectID":"/posts/development/rest-api-success-response-example-/:2:1","tags":["api","json","rest"],"title":"REST API Success Response Example","uri":"/posts/development/rest-api-success-response-example-/#1-data-section"},{"categories":["Development"],"collections":null,"content":"2. Meta Section The meta object is included but empty ({}), indicating that no additional metadata was provided. 
","date":"22-02-2022","objectID":"/posts/development/rest-api-success-response-example-/:2:2","tags":["api","json","rest"],"title":"REST API Success Response Example","uri":"/posts/development/rest-api-success-response-example-/#2-meta-section"},{"categories":["Development"],"collections":null,"content":"Full JSON Response Example Below is the exact JSON response returned by the API: ","date":"22-02-2022","objectID":"/posts/development/rest-api-success-response-example-/:3:0","tags":["api","json","rest"],"title":"REST API Success Response Example","uri":"/posts/development/rest-api-success-response-example-/#full-json-response-example"},{"categories":["Development"],"collections":null,"content":"REST APIs should provide clear and structured error responses to help clients understand and resolve issues effectively. Below are examples of different types of error responses that an API might return. ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:0:0","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#"},{"categories":["Development"],"collections":null,"content":"Single Error Responses ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:1:0","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#single-error-responses"},{"categories":["Development"],"collections":null,"content":"Incorrect Username or Password ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:1:1","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#incorrect-username-or-password"},{"categories":["Development"],"collections":null,"content":"Validation Errors Title Must Be Defined Name Must Be At Most 50 Characters This Attribute Must Be Unique 
","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:1:2","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#validation-errors"},{"categories":["Development"],"collections":null,"content":"Validation Errors Title Must Be Defined Name Must Be At Most 50 Characters This Attribute Must Be Unique ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:1:2","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#title-must-be-defined"},{"categories":["Development"],"collections":null,"content":"Validation Errors Title Must Be Defined Name Must Be At Most 50 Characters This Attribute Must Be Unique ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:1:2","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#name-must-be-at-most-50-characters"},{"categories":["Development"],"collections":null,"content":"Validation Errors Title Must Be Defined Name Must Be At Most 50 Characters This Attribute Must Be Unique ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:1:2","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#this-attribute-must-be-unique"},{"categories":["Development"],"collections":null,"content":"Authentication Errors ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:2:0","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#authentication-errors"},{"categories":["Development"],"collections":null,"content":"No Authorization Provided 
","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:2:1","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#no-authorization-provided"},{"categories":["Development"],"collections":null,"content":"Invalid Authentication ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:2:2","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#invalid-authentication"},{"categories":["Development"],"collections":null,"content":"Multiple Errors ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:3:0","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#multiple-errors"},{"categories":["Development"],"collections":null,"content":"Name and Title Must Be Defined ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:3:1","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#name-and-title-must-be-defined"},{"categories":["Development"],"collections":null,"content":"Field-Specific Validation Errors ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:4:0","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#field-specific-validation-errors"},{"categories":["Development"],"collections":null,"content":"Field Too Short ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:4:1","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#field-too-short"},{"categories":["Development"],"collections":null,"content":"Password and Confirm Password Must 
Match ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:4:2","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#password-and-confirm-password-must-match"},{"categories":["Development"],"collections":null,"content":"Incorrect Email Format These error responses provide a consistent structure, making it easier for clients to handle errors programmatically and display meaningful messages to users. ","date":"22-02-2022","objectID":"/posts/development/rest-api-error-response-example/:4:3","tags":["api","json","rest"],"title":"REST API Error Response Example","uri":"/posts/development/rest-api-error-response-example/#incorrect-email-format"},{"categories":["Development"],"collections":null,"content":"You are dealing with error handling in Go when working with PostgreSQL using the pq package. The code you provided demonstrates two different ways to handle and extract error information from a pq.Error type. Let\u0026rsquo;s break down both of these code snippets: Using Type Assertion: pqErr := err.(*pq.Error) log.Println(pqErr.Code) In this code, you are using a type assertion to check if the err is of type *pq.Error, and if it is, you extract the Code field from the pq.Error struct and log it. This approach assumes that err is a pq.Error type, and if it\u0026rsquo;s not, it will result in a runtime panic. So, it\u0026rsquo;s essential to be sure that err is indeed of type *pq.Error before using this approach. Using Type Assertion with Ok Idiom: if err, ok := err.(*pq.Error); ok { fmt.Println(err.Code) } This code is similar to the first one but incorporates the \u0026ldquo;comma, ok\u0026rdquo; idiom. It first attempts to perform a type assertion to check if err is of type *pq.Error. If the assertion is successful (i.e., ok is true), it prints the Code field. This approach is safer because it doesn\u0026rsquo;t panic if the type assertion fails. 
In both cases, the Code field of the *pq.Error value is used to access the error code associated with the failed operation. PostgreSQL error codes provide specific information about the error that occurred during a database operation. You can use these error codes to handle different types of errors gracefully in your Go application. Make sure to import the pq package at the beginning of your Go file like this: import ( \u0026#34;database/sql\u0026#34; \u0026#34;github.com/lib/pq\u0026#34; // other imports ) Additionally, ensure that your PostgreSQL database connection is properly set up before using these error-handling methods. ","date":"22-02-2022","objectID":"/posts/development/golang-go-get-postgres-error/:0:0","tags":null,"title":"Golang Go Get Postgres Error","uri":"/posts/development/golang-go-get-postgres-error/#"},{"categories":["Development"],"collections":null,"content":"In Bash scripting, you can use the || true construct to ignore errors for a particular command or script. This is a common technique used to ensure that a script continues executing even if a specific command fails. Here\u0026rsquo;s how it works: particular_script || true In this example, particular_script is the command or script that you want to run, and || true is added at the end of the command. The || operator is used for conditional execution. It means that if particular_script fails (returns a non-zero exit status), the true command will always execute, effectively ignoring the error and allowing the script to continue running. Here\u0026rsquo;s a more complete example with multiple commands: particular_script || true next_script In this case, if particular_script fails, the error will be ignored, and the next_script will still be executed. This can be useful in situations where you want to ensure that a script continues to run even if some of its commands encounter non-fatal errors. 
However, it\u0026rsquo;s essential to use this construct judiciously, as ignoring errors can lead to unexpected behavior if not handled properly in your script logic. ","date":"21-02-2022","objectID":"/posts/development/bash-shell-ignore-error-on-particular-command/:0:0","tags":null,"title":"Bash Shell Ignore Error on particular Command","uri":"/posts/development/bash-shell-ignore-error-on-particular-command/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re facing issues with Delve (dlv) not restarting when updating files in your Go project using Cosmtrek/Air version 1.27.10 and encountering a \u0026ldquo;port address already in use\u0026rdquo; error, there are a couple of solutions you can try. This problem often occurs due to lingering Delve processes that prevent the tool from restarting properly. ","date":"21-02-2022","objectID":"/posts/development/delve-dlv-not-restarting-when-updating-files-in-go-using-cosmtrekair-12710/:0:0","tags":null,"title":"Delve Dlv Not Restart When Update Files On Golang Cosmtrek Air 12710…","uri":"/posts/development/delve-dlv-not-restarting-when-updating-files-in-go-using-cosmtrekair-12710/#"},{"categories":["Development"],"collections":null,"content":"Solution 1: Revert to a Previous Version You can revert to a previous version of Cosmtrek/Air where this issue might not exist. To do this, follow these steps: Open your terminal. Run the following command to install an earlier version (e.g., v1.27.4) of Cosmtrek/Air: curl -sSfL https://raw.githubusercontent.com/cosmtrek/air/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.27.4 This will replace the current version of Cosmtrek/Air with the specified version (v1.27.4 in this case). After the installation is complete, try running your project again with the older version of Air to see if the issue is resolved. 
","date":"21-02-2022","objectID":"/posts/development/delve-dlv-not-restarting-when-updating-files-in-go-using-cosmtrekair-12710/:1:0","tags":null,"title":"Delve Dlv Not Restart When Update Files On Golang Cosmtrek Air 12710…","uri":"/posts/development/delve-dlv-not-restarting-when-updating-files-in-go-using-cosmtrekair-12710/#solution-1-revert-to-a-previous-version"},{"categories":["Development"],"collections":null,"content":"Solution 2: Modify .air.toml to Kill Delve Processes Another approach is to modify your .air.toml configuration to ensure that any lingering Delve processes are killed before starting a new one. Here\u0026rsquo;s how you can do it: Open your .air.toml file in a text editor. Locate the full_bin configuration in your .air.toml file. It should look something like this: full_bin = \u0026#34;dlv exec --accept-multiclient --log --headless --continue --listen :2345 --api-version 2 ./tmp/main\u0026#34; Modify it to include the following command to kill any existing Delve (dlv) or main processes before starting a new one: full_bin = \u0026#34;pkill -9 \u0026#39;dlv|main\u0026#39;; sleep 0.1; dlv exec --accept-multiclient --log --headless --continue --listen :2345 --api-version 2 ./tmp/main\u0026#34; This modification ensures that any existing Delve or main processes are forcefully terminated before attempting to start a new one. Save your .air.toml file. Try running your project with Air again to see if the issue is resolved. The modified configuration should now properly restart Delve. By implementing one of these solutions, you should be able to resolve the issue of Delve not restarting when updating files in your Go project using Cosmtrek/Air 1.27.10. 
","date":"21-02-2022","objectID":"/posts/development/delve-dlv-not-restarting-when-updating-files-in-go-using-cosmtrekair-12710/:2:0","tags":null,"title":"Delve Dlv Not Restart When Update Files On Golang Cosmtrek Air 12710…","uri":"/posts/development/delve-dlv-not-restarting-when-updating-files-in-go-using-cosmtrekair-12710/#solution-2-modify-airtoml-to-kill-delve-processes"},{"categories":["Development"],"collections":null,"content":"API testing is a crucial part of software quality assurance, ensuring that APIs function correctly, securely, and efficiently. This guide details key test actions, test scenario categories, and test flows to ensure a thorough validation of API behavior. ","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:0:0","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#"},{"categories":["Development"],"collections":null,"content":"API Test Actions Each API test involves several key actions: Verify Correct HTTP Status Code: Ensure the correct status code is returned (e.g., 201 CREATED for resource creation, 403 FORBIDDEN for unauthorized requests). Verify Response Payload: Validate JSON structure, field names, types, and values, including error responses. Verify Response Headers: Check headers for security and performance compliance. Verify Correct Application State (Optional): Validate state changes, especially for manual tests with UI interaction. Verify Basic Performance Sanity: Ensure response times meet performance expectations. 
","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:1:0","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#api-test-actions"},{"categories":["Development"],"collections":null,"content":"Test Scenario Categories API tests fall into several broad categories: Basic Positive Tests (Happy Paths): Verify basic functionality using valid required parameters. Positive Tests with Optional Parameters: Test optional parameters like filtering, sorting, and pagination. Negative Testing – Valid Input: Ensure the API handles operations correctly when using valid but incorrect data (e.g., attempting to delete a non-existent resource). Negative Testing – Invalid Input: Test missing parameters, incorrect values, invalid authentication tokens, and unsupported methods. Destructive Testing: Attempt to break the API with malformed content, overflows, boundary value testing, incorrect headers, and concurrency tests. ","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:2:0","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#test-scenario-categories"},{"categories":["Development"],"collections":null,"content":"Test Flows API testing consists of three main test flows: Testing Requests in Isolation: Execute single API requests and validate responses. Multi-Step Workflow Testing: Validate a sequence of API interactions (e.g., create, retrieve, update, delete a resource). Combined API and Web UI Testing: Verify data consistency between API actions and UI state. 
","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:3:0","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#test-flows"},{"categories":["Development"],"collections":null,"content":"Example Test Cases ","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:4:0","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#example-test-cases"},{"categories":["Development"],"collections":null,"content":"Basic Positive Tests (Happy Paths) # Test Scenario Category Test Action Category Description 1 Basic Positive Tests Status Code Ensure 2XX responses for valid requests (200 for GET, 201 for POST, etc.) 2 Payload Validation Validate JSON structure, fields, and values against schema 3 State Validation Ensure expected state changes occur 4 Header Validation Verify expected headers are present and correct 5 Performance Check Validate response time within limits ","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:4:1","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#basic-positive-tests-happy-paths"},{"categories":["Development"],"collections":null,"content":"Negative Testing – Invalid Input # Test Scenario Category Test Action Category Description 1 Negative Testing Status Code Ensure error status codes for invalid input 2 Payload Validation Check error messages and response format 3 Header Validation Ensure expected security headers are in place 4 Performance Check Validate timely failure response 
","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:4:2","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#negative-testing--invalid-input"},{"categories":["Development"],"collections":null,"content":"Conclusion A well-structured API test plan ensures APIs are functional, secure, and performant. By covering various test scenarios and flows, teams can identify potential issues early and maintain robust API functionality. ","date":"19-02-2022","objectID":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/:5:0","tags":["api","json","rest"],"title":"REST API Testing Strategy What Exactly Should You Test","uri":"/posts/development/rest-api-testing-strategy-what-exactly-should-you-test/#conclusion"},{"categories":["Development"],"collections":null,"content":"In Bash scripting, it\u0026rsquo;s often useful to determine the directory where the script file is located. This can be particularly important if your script needs to access other files or resources relative to its own location. Here\u0026rsquo;s a Bash script snippet that accomplishes this task: #!/bin/bash # Get the directory of the script file SCRIPT_DIR=$(cd -- \u0026#34;$(dirname -- \u0026#34;${BASH_SOURCE[0]}\u0026#34;)\u0026#34; \u0026amp;\u0026gt; /dev/null \u0026amp;\u0026amp; pwd) # Check if SCRIPT_DIR is empty or not if [ -z \u0026#34;$SCRIPT_DIR\u0026#34; ]; then echo \u0026#34;Failed to determine the script directory.\u0026#34; exit 1 fi # Now, you can use SCRIPT_DIR for your operations echo \u0026#34;The script is located in the directory: $SCRIPT_DIR\u0026#34; Here\u0026rsquo;s a breakdown of what this script does: #!/bin/bash: This line specifies that the script should be interpreted using the Bash shell. 
SCRIPT_DIR=$(cd -- \u0026quot;$(dirname -- \u0026quot;${BASH_SOURCE[0]}\u0026quot;)\u0026quot; \u0026amp;\u0026gt; /dev/null \u0026amp;\u0026amp; pwd): This line uses a combination of commands to determine the directory of the currently executing script. Let\u0026rsquo;s break it down step by step: ${BASH_SOURCE[0]}: This variable represents the path to the currently executing script. dirname -- \u0026quot;${BASH_SOURCE[0]}\u0026quot;: This extracts the directory path containing the script file. cd -- \u0026quot;$(dirname -- \u0026quot;${BASH_SOURCE[0]}\u0026quot;)\u0026quot;: This changes the current working directory to the script\u0026rsquo;s directory. The -- is used to handle directory names that start with dashes. pwd: This command then prints the current working directory, which is now the script\u0026rsquo;s directory, and stores it in the SCRIPT_DIR variable. if [ -z \u0026quot;$SCRIPT_DIR\u0026quot; ]; then: This line checks if SCRIPT_DIR is empty, indicating that the script failed to determine its directory. echo \u0026quot;The script is located in the directory: $SCRIPT_DIR\u0026quot;: If SCRIPT_DIR is not empty, it prints the directory where the script is located. You can now use the $SCRIPT_DIR variable for any operations that require the script\u0026rsquo;s directory. ","date":"17-02-2022","objectID":"/posts/development/bash-script-to-get-the-directory-of-the-script-file/:0:0","tags":null,"title":"Bash Script to Get the Directory of the Script File","uri":"/posts/development/bash-script-to-get-the-directory-of-the-script-file/#"},{"categories":["Development"],"collections":null,"content":"This article explores best practices for naming Git repositories, ensuring clarity, consistency, and maintainability. It covers key considerations such as readability, versioning, project scope, and collaboration standards to help developers create effective repository names. 
","date":"17-02-2022","objectID":"/posts/development/git-repository-naming/:0:0","tags":["git"],"title":"Git Repository Naming","uri":"/posts/development/git-repository-naming/#"},{"categories":["Development"],"collections":null,"content":"Microservices Better suited to a project team or department where multiple products exist and are made up of sub-components. [product/project name]-[purpose]-[framework/language] e.g. myproject-api-rails","date":"17-02-2022","objectID":"/posts/development/git-repository-naming/:1:0","tags":["git"],"title":"Git Repository Naming","uri":"/posts/development/git-repository-naming/#microservices"},{"categories":["Development"],"collections":null,"content":"For Open Source [language/framework]-[product/project] e.g. python-security-scripts","date":"17-02-2022","objectID":"/posts/development/git-repository-naming/:2:0","tags":["git"],"title":"Git Repository Naming","uri":"/posts/development/git-repository-naming/#for-open-source"},{"categories":["Development"],"collections":null,"content":"Less Used [product/project name]-[purpose] e.g. myproject-rest-api","date":"17-02-2022","objectID":"/posts/development/git-repository-naming/:3:0","tags":["git"],"title":"Git Repository Naming","uri":"/posts/development/git-repository-naming/#less-used"},{"categories":["Development"],"collections":null,"content":"Your provided code snippet appears to be a Bash script that checks if the TARGET_PATH environment variable is empty and, if so, sets it to ~/go by appending an export statement to the .bashrc file. This is a common technique to ensure that environment variables are set with default values if they are not already defined. Here\u0026rsquo;s a breakdown of what the code does: if [[ -z \u0026quot;${TARGET_PATH}\u0026quot; ]]; then: This line checks if the TARGET_PATH environment variable is empty (i.e., its value is not set). The -z flag is used to test if a string is empty. If the TARGET_PATH is empty, the script proceeds to the next line. 
echo 'export TARGET_PATH=~/go' \u0026gt;\u0026gt; .bashrc: This line appends the export statement export TARGET_PATH=~/go to the .bashrc file. This effectively sets the TARGET_PATH environment variable to ~/go when the user\u0026rsquo;s shell session starts. This code is useful for ensuring that the TARGET_PATH environment variable is always defined with a default value when a user\u0026rsquo;s shell session starts. It\u0026rsquo;s commonly used to set default values for environment variables, making it easier to work with scripts and programs that depend on them. Keep in mind that modifying the .bashrc file will affect the behavior of the user\u0026rsquo;s shell session. Ensure that this is the desired behavior for your use case and that users are aware of the changes to their environment variables. ","date":"15-02-2022","objectID":"/posts/development/shell-bash-check-if-environment-exist/:0:0","tags":null,"title":"Shell Bash Check If Environment Exist","uri":"/posts/development/shell-bash-check-if-environment-exist/#"},{"categories":["Development"],"collections":null,"content":"It looks like you are trying to check if a specific directory is included in the PATH environment variable in a Bash script. The code you provided is almost correct, but it has a small issue. You can modify it as follows to make it work correctly: CHECK_PATH=\u0026#34;/root/go/bin\u0026#34; if [[ \u0026#34;:$PATH:\u0026#34; == *\u0026#34;:$CHECK_PATH:\u0026#34;* ]]; then echo \u0026#34;Path found in PATH environment. Skipping configuration...\u0026#34; else echo \u0026#34;Path not found in PATH environment. You may need to add it.\u0026#34; fi Here\u0026rsquo;s a breakdown of the changes made: Enclosed the CHECK_PATH variable in double quotes to ensure it handles paths with spaces or special characters correctly. Used double brackets [[ ... ]] for conditional testing. Added colons : before and after both PATH and CHECK_PATH to ensure accurate matching. 
Provided appropriate messages for both cases - when the path is found in PATH and when it\u0026rsquo;s not found. With these modifications, the script should correctly check whether the specified path is in the PATH environment variable and provide the corresponding message. ","date":"15-02-2022","objectID":"/posts/development/shell-bash-check-path-environment-exist/:0:0","tags":null,"title":"Shell Bash Check PATH environment exist","uri":"/posts/development/shell-bash-check-path-environment-exist/#"},{"categories":["Development"],"collections":null,"content":"In software architecture, adhering to the Single Responsibility Principle (SRP) is crucial to maintaining clean, modular, and maintainable code. One common mistake is merging business models with business logic, leading to unnecessary dependencies and reduced scalability. This article explores how to properly separate these concerns using an MVC-based approach for a Car class and its related components. ","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:0:0","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#"},{"categories":["Development"],"collections":null,"content":"Issue: Business Model Breaking SRP A Car class should represent a data model rather than handle business logic or external service interactions. However, integrating dependencies like IFileSystemService into the Car class blurs the lines between the model and the business logic, leading to tightly coupled code that is harder to maintain and extend. 
","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:1:0","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#issue-business-model-breaking-srp"},{"categories":["Development"],"collections":null,"content":"Proposed Solution: Layered Architecture To enforce SRP, we should implement a layered structure where each component has a distinct responsibility. The following diagram illustrates the appropriate separation of concerns: flowchart TD CarController --\u003e CarService CarService --\u003e ImageService ImageService --\u003e Image CarService --\u003e CarRepository CarRepository --\u003e Car CarService --- Info(Gives controller a model that includes an image and a car) ","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:2:0","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class
Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#proposed-solution-layered-architecture"},{"categories":["Development"],"collections":null,"content":"Responsibilities of Each Component: CarController: Only interacts with CarService, ensuring that it remains unaware of implementation details. CarService: Puts all necessary data together (e.g., fetching a Car model and its corresponding image). CarRepository: Handles Car data persistence. ImageService: Manages image-related operations separately from the Car model. Image: Represents an image as a distinct model, ensuring separation from Car. ","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:2:1","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#responsibilities-of-each-component"},{"categories":["Development"],"collections":null,"content":"Improving Method Naming for Clarity Several method names in the current design suggest unclear responsibilities: ","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:3:0","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#improving-method-naming-for-clarity"},{"categories":["Development"],"collections":null,"content":"Incorrect Naming Conventions: public Car GetPictureForCar(int carId); Issue: The method name implies it returns an image, but it returns a Car. Expected Fix: Return an Image object instead. Car myCar = this.GetFileForCar(carId); Issue: A method that suggests retrieving a file should not return a Car. Expected Fix: It should return either a File object or a string representing the file name. 
Image imgfile = _Repository.GetPhysicalFileLocation(carId); Issue: A method named GetPhysicalFileLocation should return a string, not an Image. Expected Fix: Rename or refactor to return a string file path. ","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:3:1","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#incorrect-naming-conventions"},{"categories":["Development"],"collections":null,"content":"Naming Conventions and Private Field Formatting In C#, private fields should use camelCase: private readonly IRepository _repository; Correcting _Repository to _repository follows standard C# naming conventions. ","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:3:2","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#naming-conventions-and-private-field-formatting"},{"categories":["Development"],"collections":null,"content":"Conclusion By ensuring that the Car class remains a pure model and delegating responsibilities to appropriate services, we achieve a cleaner and more maintainable architecture. The CarService acts as the intermediary, orchestrating data retrieval and transformation without violating SRP. Implementing clear method names further enhances readability and predictability, making the system easier to understand and extend. 
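To make the layered diagram concrete, here is a hedged Go sketch (hypothetical names, hard-coded data instead of real persistence) of a CarService composing a CarRepository and an ImageService into one view model for the controller:

```go
package main

import "fmt"

// Car and Image are pure data models, each a distinct concern.
type Car struct {
	ID    int
	Model string
}

type Image struct {
	Path string
}

// CarRepository handles Car persistence (stubbed here with fixed data).
type CarRepository struct{}

func (CarRepository) GetCar(id int) Car { return Car{ID: id, Model: "Sedan"} }

// ImageService manages image lookups, separate from the Car model.
type ImageService struct{}

func (ImageService) GetImageForCar(id int) Image {
	return Image{Path: fmt.Sprintf("/images/car-%d.png", id)}
}

// CarDetails is the combined view model handed to the controller.
type CarDetails struct {
	Car   Car
	Image Image
}

// CarService orchestrates the repository and the image service.
type CarService struct {
	Repo   CarRepository
	Images ImageService
}

func (s CarService) GetCarDetails(id int) CarDetails {
	return CarDetails{Car: s.Repo.GetCar(id), Image: s.Images.GetImageForCar(id)}
}

func main() {
	svc := CarService{}
	d := svc.GetCarDetails(7)
	fmt.Println(d.Car.Model, d.Image.Path)
}
```

The controller only ever sees CarDetails; swapping the repository or the image backend never touches the Car model itself.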
","date":"14-02-2022","objectID":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/:4:0","tags":null,"title":"Separating Business Model and Logic Ensuring SRP in Car Class Design","uri":"/posts/development/separating-business-model-and-logic-ensuring-srp-in-car-class-design/#conclusion"},{"categories":["Development"],"collections":null,"content":"Sometimes in React, you may need to render empty space or create gaps between elements. There are several ways to achieve this, and one common method is to use HTML entities like \u0026amp;nbsp; within your JSX code. In this article, we\u0026rsquo;ll explore how to render empty space in React using various techniques. ","date":"11-02-2022","objectID":"/posts/development/rendering-empty-space-in-react/:0:0","tags":null,"title":"Rendering Empty Space in React","uri":"/posts/development/rendering-empty-space-in-react/#"},{"categories":["Development"],"collections":null,"content":"1. Using HTML Entities As mentioned in the question, you can use HTML entities like \u0026amp;nbsp; to render empty space in React components. Here\u0026rsquo;s an example: \u0026lt;span\u0026gt;\u0026amp;nbsp;\u0026amp;nbsp;\u0026lt;/span\u0026gt; This code will render two non-breaking space characters, creating a visible gap on your web page. You can adjust the number of \u0026amp;nbsp; entities to control the amount of empty space. ","date":"11-02-2022","objectID":"/posts/development/rendering-empty-space-in-react/:1:0","tags":null,"title":"Rendering Empty Space in React","uri":"/posts/development/rendering-empty-space-in-react/#1-using-html-entities"},{"categories":["Development"],"collections":null,"content":"2. Using CSS Another way to create empty space is by using CSS. You can apply margins or padding to elements to add space around them. 
Here\u0026rsquo;s an example of how to use CSS to add space around a \u0026lt;div\u0026gt; element: \u0026lt;div style={{ margin: \u0026#39;10px\u0026#39; }}\u0026gt;Content with space\u0026lt;/div\u0026gt; In this example, a 10-pixel margin will be added around the \u0026lt;div\u0026gt;, creating empty space. ","date":"11-02-2022","objectID":"/posts/development/rendering-empty-space-in-react/:2:0","tags":null,"title":"Rendering Empty Space in React","uri":"/posts/development/rendering-empty-space-in-react/#2-using-css"},{"categories":["Development"],"collections":null,"content":"3. Using Empty JSX Tags You can also create empty space by using empty JSX tags. For instance, you can use an empty \u0026lt;div\u0026gt; or \u0026lt;span\u0026gt; tag with a specific height or width style property: \u0026lt;div style={{ height: \u0026#39;20px\u0026#39; }}\u0026gt;\u0026lt;/div\u0026gt; This code will render an empty \u0026lt;div\u0026gt; with a height of 20 pixels, creating vertical empty space. ","date":"11-02-2022","objectID":"/posts/development/rendering-empty-space-in-react/:3:0","tags":null,"title":"Rendering Empty Space in React","uri":"/posts/development/rendering-empty-space-in-react/#3-using-empty-jsx-tags"},{"categories":["Development"],"collections":null,"content":"4. Using CSS Classes You can define CSS classes that apply specific styles for creating empty space and then apply those classes to your elements. Here\u0026rsquo;s an example: /* In your CSS file */ .empty-space { margin: 10px; } // In your React component \u0026lt;div className=\u0026#34;empty-space\u0026#34;\u0026gt;Content with space\u0026lt;/div\u0026gt; By using CSS classes, you can maintain consistency in your styling and easily adjust the amount of empty space. 
","date":"11-02-2022","objectID":"/posts/development/rendering-empty-space-in-react/:4:0","tags":null,"title":"Rendering Empty Space in React","uri":"/posts/development/rendering-empty-space-in-react/#4-using-css-classes"},{"categories":["Development"],"collections":null,"content":"Conclusion Rendering empty space in React can be achieved through various methods, including using HTML entities, CSS, empty JSX tags, or CSS classes. The choice of method depends on your specific use case and styling preferences. Experiment with these techniques to create the desired spacing in your React components. ","date":"11-02-2022","objectID":"/posts/development/rendering-empty-space-in-react/:5:0","tags":null,"title":"Rendering Empty Space in React","uri":"/posts/development/rendering-empty-space-in-react/#conclusion"},{"categories":["Development"],"collections":null,"content":"In the world of Go programming, adhering to idiomatic coding practices is highly valued. One of these practices pertains to the naming of collection repository or folder names. The Go community recommends using the singular form for such names, and this convention is followed consistently in various Go projects. Let\u0026rsquo;s take a closer look at this best practice. ","date":"10-02-2022","objectID":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/:0:0","tags":null,"title":"Using Singular Form for Collection Repo/Folder Name in Idiomatic Go","uri":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/#"},{"categories":["Development"],"collections":null,"content":"Singular vs. Plural When organizing your Go packages into repositories or folders, you\u0026rsquo;ll often find situations where you have multiple related packages that belong to a common category. Examples could include packages for various utilities, components, or modules. 
In such cases, it\u0026rsquo;s recommended to use the singular form for the repository or folder name. ","date":"10-02-2022","objectID":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/:1:0","tags":null,"title":"Using Singular Form for Collection Repo/Folder Name in Idiomatic Go","uri":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/#singular-vs-plural"},{"categories":["Development"],"collections":null,"content":"Examples of Using Singular Form Here are some examples of Go repositories or folders using the singular form: github.com/golang/example/hello github.com/golang/example/outyet golang.org/x/mobile/example/basic golang.org/x/mobile/example/flappy golang.org/x/image/\u0026hellip; github.com/shurcooL/tictactoe/player/bad github.com/shurcooL/tictactoe/player/random In these examples, you can see that the folder names like \u0026ldquo;example,\u0026rdquo; \u0026ldquo;image,\u0026rdquo; and \u0026ldquo;player\u0026rdquo; use the singular form despite containing multiple related packages. This consistency in naming makes it easier for developers to navigate and understand the project\u0026rsquo;s structure. ","date":"10-02-2022","objectID":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/:2:0","tags":null,"title":"Using Singular Form for Collection Repo/Folder Name in Idiomatic Go","uri":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/#examples-of-using-singular-form"},{"categories":["Development"],"collections":null,"content":"Avoiding Plural Forms To maintain this naming convention, it\u0026rsquo;s essential to avoid using plural forms for collection repository or folder names. 
Here are some examples of what not to do: github.com/golang/examples/hello github.com/golang/examples/outyet golang.org/x/mobile/examples/basic golang.org/x/mobile/examples/flappy golang.org/x/images/\u0026hellip; github.com/shurcooL/tictactoe/players/bad github.com/shurcooL/tictactoe/players/random In these cases, you can observe the use of plural forms for folder names, which goes against the recommended Go idiomatic style. ","date":"10-02-2022","objectID":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/:3:0","tags":null,"title":"Using Singular Form for Collection Repo/Folder Name in Idiomatic Go","uri":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/#avoiding-plural-forms"},{"categories":["Development"],"collections":null,"content":"Consistency Matters The use of singular forms for collection repository or folder names is not just a matter of convention; it\u0026rsquo;s also about maintaining consistency within the Go ecosystem. By adhering to this practice, you not only make your codebase more approachable but also align with the style that the Go project itself follows. In summary, when organizing your Go packages into repositories or folders, consider using the singular form for collection names. This practice promotes clarity, consistency, and adherence to the idiomatic Go style, making your code more accessible to other developers and contributing to the overall Go community\u0026rsquo;s best practices. 
","date":"10-02-2022","objectID":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/:4:0","tags":null,"title":"Using Singular Form for Collection Repo/Folder Name in Idiomatic Go","uri":"/posts/development/using-singular-form-for-collection-repofolder-name-in-idiomatic-go/#consistency-matters"},{"categories":["Development"],"collections":null,"content":"If you want to hide menu bar icons on your Mac, you can use the \u0026ldquo;Hidden Bar\u0026rdquo; app from the Mac App Store. Here\u0026rsquo;s a step-by-step guide on how to do this: Open the Mac App Store: Click on the App Store icon in your Dock or use Spotlight search (Cmd + Space, then type \u0026ldquo;App Store\u0026rdquo;) to open the Mac App Store. Search for Hidden Bar: In the App Store, use the search bar in the top right corner to search for \u0026ldquo;Hidden Bar.\u0026rdquo; Download Hidden Bar: When you find the Hidden Bar app in the search results, click on it to open the app\u0026rsquo;s page. Then, click the \u0026ldquo;Get\u0026rdquo; or \u0026ldquo;Download\u0026rdquo; button to install the app. You may need to enter your Apple ID password or use Touch ID/Face ID to confirm the download. Open Hidden Bar: Once Hidden Bar is downloaded and installed, you can open it from Launchpad or by searching for \u0026ldquo;Hidden Bar\u0026rdquo; in Spotlight. Customize Menu Bar: Hidden Bar will appear in your menu bar as a small icon (usually three dots or lines). Click on this icon to open Hidden Bar. Add Apps to Hide: In Hidden Bar, you\u0026rsquo;ll see a list of apps with checkboxes next to them. Check the boxes next to the apps whose menu bar icons you want to hide. Hide Icons: After selecting the apps, click the \u0026ldquo;Save\u0026rdquo; or \u0026ldquo;Apply\u0026rdquo; button in Hidden Bar. This will hide the menu bar icons for the selected apps. 
Adjust Settings: Hidden Bar also allows you to adjust settings like hiding the Hidden Bar icon itself, changing its position, and more. Explore the app\u0026rsquo;s settings to customize it to your liking. Restart Apps: In some cases, you may need to restart the apps you\u0026rsquo;ve hidden icons for to see the changes take effect. To do this, simply quit and reopen the apps. Hidden Bar provides a convenient way to declutter your Mac\u0026rsquo;s menu bar by hiding icons from apps that you don\u0026rsquo;t need to access frequently. It\u0026rsquo;s a handy tool for a cleaner and more organized desktop experience. ","date":"09-02-2022","objectID":"/posts/development/hide-menu-bar-icons-on-mac/:0:0","tags":null,"title":"Hide Menu Bar Icons On Mac","uri":"/posts/development/hide-menu-bar-icons-on-mac/#"},{"categories":["Development"],"collections":null,"content":"Collabora Online is an open-source office suite that can be integrated with Nextcloud to provide collaborative document editing features. In this guide, we will walk you through the process of setting up Collabora Online CODE on Nextcloud using Docker. Please note that this guide assumes you have Docker installed on your server and have Nextcloud already set up. If not, make sure to install Docker and set up Nextcloud before proceeding. ","date":"09-02-2022","objectID":"/posts/development/setting-up-collabora-online-code-on-nextcloud/:0:0","tags":null,"title":"Setting up Collabora Online CODE on Nextcloud","uri":"/posts/development/setting-up-collabora-online-code-on-nextcloud/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Run Collabora Online CODE Docker Container To get started, you need to run the Collabora Online CODE Docker container. 
Use the following command to do so: ```shell docker run -t -d -p 9980:9980 -e \u0026#34;extra_params=--o:ssl.enable=false\u0026#34; collabora/code ``` This command will pull the Collabora Online CODE container from Docker Hub and start it in detached mode. It also disables SSL for simplicity in this example. Make sure the container is up and running before moving on to the next step. ## Step 2: Configure Collabora Online Server in Nextcloud 1. Log in to your Nextcloud instance as an administrator. 2. Navigate to your user profile by clicking on your profile picture or name in the top-right corner and selecting \u0026#34;Settings.\u0026#34; 3. In the Settings menu, click on \u0026#34;Office\u0026#34; in the left sidebar. 4. Under the \u0026#34;Collabora Online\u0026#34; section, select \u0026#34;Use your own server.\u0026#34; 5. In the \u0026#34;URL (and Port) of Collabora Online-server\u0026#34; field, enter the URL of your Collabora Online server. If you must use the IP address, use the following URL format: http://\u0026lt;your-server-ip\u0026gt;:9980/ Replace `\u0026lt;your-server-ip\u0026gt;` with the actual IP address of the server where you are running the Collabora Online CODE Docker container. 6. Click \u0026#34;Save\u0026#34; to apply the changes. ## Step 3: Test Collabora Online Integration To test whether the Collabora Online integration with Nextcloud is working correctly, follow these steps: 1. Create or open a document (e.g., a text document or a spreadsheet) within Nextcloud. 2. Click on the \u0026#34;Edit\u0026#34; button, which should now be available in the toolbar when viewing a supported document type. 3. The document will open in the Collabora Online editor, allowing you to collaborate in real-time with others. Congratulations! You have successfully set up Collabora Online CODE on Nextcloud using Docker. You can now enjoy collaborative document editing within your Nextcloud instance. Please note that this setup uses HTTP without SSL for simplicity. 
In a production environment, it\u0026#39;s highly recommended to configure SSL/TLS for secure communication between Nextcloud and Collabora Online.","date":"09-02-2022","objectID":"/posts/development/setting-up-collabora-online-code-on-nextcloud/:1:0","tags":null,"title":"Setting up Collabora Online CODE on Nextcloud","uri":"/posts/development/setting-up-collabora-online-code-on-nextcloud/#step-1-run-collabora-online-code-docker-container"},{"categories":["Development"],"collections":null,"content":"If you are encountering issues when upgrading your Go application from using the dgrijalva/jwt-go module to golang-jwt/jwt, particularly when dealing with http.Request objects, you might need to make some adjustments in your code. This guide provides a solution to this issue. ","date":"31-01-2022","objectID":"/posts/development/unable-to-use-httprequest-when-upgrading-from-dgrijalvajwt-go-to-golang-jwtjwt-go-module/:0:0","tags":null,"title":"Unable To Use Httprequest When Upgrading From Dgrijalvajwt Go To Golang Jwtjwt Go Module","uri":"/posts/development/unable-to-use-httprequest-when-upgrading-from-dgrijalvajwt-go-to-golang-jwtjwt-go-module/#"},{"categories":["Development"],"collections":null,"content":"Error in the Code You mentioned that you are facing an error in the following code: token, err := request.ParseFromRequest(c.Request, request.OAuth2Extractor, func(token *jwtgo.Token) (interface{}, error) { b := ([]byte(secret)) return b, nil }) The error you are encountering might be due to differences in how the golang-jwt/jwt module handles http.Request objects compared to the older dgrijalva/jwt-go module. 
","date":"31-01-2022","objectID":"/posts/development/unable-to-use-httprequest-when-upgrading-from-dgrijalvajwt-go-to-golang-jwtjwt-go-module/:1:0","tags":null,"title":"Unable To Use Httprequest When Upgrading From Dgrijalvajwt Go To Golang Jwtjwt Go Module","uri":"/posts/development/unable-to-use-httprequest-when-upgrading-from-dgrijalvajwt-go-to-golang-jwtjwt-go-module/#error-in-the-code"},{"categories":["Development"],"collections":null,"content":"Solution To resolve the issue and make your code compatible with the golang-jwt/jwt module, you should consider upgrading to the v4 version of the module. Here\u0026rsquo;s how you can do it: Update your go.mod file to specify the v4 version of the golang-jwt/jwt module. You can do this by adding the following line: require github.com/golang-jwt/jwt/v4 v4.0.0 Make sure to run go get to update your dependencies after modifying the go.mod file. Adjust your code to use the golang-jwt/jwt/v4 package instead of the older version. Update your import statement like this: import ( \u0026#34;github.com/golang-jwt/jwt/v4\u0026#34; \u0026#34;github.com/golang-jwt/jwt/v4/request\u0026#34; ) Modify your code to use the http.Request object in a way that\u0026rsquo;s compatible with the golang-jwt/jwt/v4 module. Here\u0026rsquo;s an example of how your code might look: token, err := request.ParseFromRequest(c.Request, request.OAuth2Extractor, func(token *jwt.Token) (interface{}, error) { // Your code to extract the secret key goes here. // For example, you can use a constant secret key. secret := []byte(\u0026#34;your-secret-key\u0026#34;) return secret, nil }) Ensure that you correctly retrieve and provide the secret key as needed by your application. By following these steps and upgrading to the golang-jwt/jwt/v4 module, you should be able to resolve the http.Request compatibility issue and use the module seamlessly in your Go application. 
Please note that the specific implementation details may vary depending on your application\u0026rsquo;s requirements and how you handle secret keys. Be sure to adapt the code accordingly to fit your use case. ","date":"31-01-2022","objectID":"/posts/development/unable-to-use-httprequest-when-upgrading-from-dgrijalvajwt-go-to-golang-jwtjwt-go-module/:2:0","tags":null,"title":"Unable To Use Httprequest When Upgrading From Dgrijalvajwt Go To Golang Jwtjwt Go Module","uri":"/posts/development/unable-to-use-httprequest-when-upgrading-from-dgrijalvajwt-go-to-golang-jwtjwt-go-module/#solution"},{"categories":["Development"],"collections":null,"content":"To mirror a GitLab repository to GitHub or to mirror one Git repository to another Git repository, you can follow the steps outlined in the provided code snippet. Here\u0026rsquo;s a breakdown of the process: Clone the GitLab Repository as a Mirror: git clone --mirror git@your-gitlab-site.com:username/repo.git This command clones the GitLab repository with the --mirror option, which is similar to --bare but also copies all refs as-is. It\u0026rsquo;s useful for creating a full backup or moving the repository. Change into the Newly Created Repository Directory: cd repo Navigate to the directory created by the previous git clone command. Push to GitHub as a Mirror: git push --no-verify --mirror git@github.com:username/repo.git Push the mirrored repository to GitHub using the --no-verify option to skip any pre-push hooks. This command effectively mirrors your GitLab repository on GitHub. Set Push URL to the Mirror Location: git remote set-url --push origin git@github.com:username/repo.git This step sets the push URL for the origin remote to the GitHub repository. It ensures that future pushes will go to the correct GitHub location. 
Periodically Update the Repository on GitHub from GitLab: git fetch -p origin git push --no-verify --mirror Use these commands to periodically update the GitHub repository with changes from GitLab. The git fetch -p command prunes deleted references, and the subsequent git push --no-verify --mirror command pushes all the updated references to GitHub. Make sure to replace git@your-gitlab-site.com:username/repo.git with the actual GitLab repository URL and git@github.com:username/repo.git with the GitHub repository URL you want to mirror to. By following these steps, you can maintain an up-to-date mirror of your GitLab repository on GitHub or mirror one Git repository to another Git repository. ","date":"30-01-2022","objectID":"/posts/development/how-mirror-gitlab-to-github-or-git-to-git/:0:0","tags":null,"title":"How Mirror Gitlab to Github or GIT to GIT","uri":"/posts/development/how-mirror-gitlab-to-github-or-git-to-git/#"},{"categories":["Development"],"collections":null,"content":"Opening Ports from Localhost to WSL in Windows using WSLHostPatcher When working with Windows Subsystem for Linux (WSL), you may encounter situations where you need to open a port from your localhost to a port in your WSL instance. This can be useful for various tasks, such as running web servers or accessing services within your WSL environment. One way to achieve this is by using a tool called WSLHostPatcher. WSLHostPatcher is a helpful utility that simplifies the process of port forwarding between Windows and WSL, making it easier to access services running in your Linux environment from your Windows host. In this article, we\u0026rsquo;ll guide you through the steps to use WSLHostPatcher to open a port from localhost to your WSL instance. 
","date":"25-01-2022","objectID":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/:1:0","tags":null,"title":"Cant Open Port From Localhost To WSL Port On Windows","uri":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/#opening-ports-from-localhost-to-wsl-in-windows-using-wslhostpatcher"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure that you have the following prerequisites in place: WSL Installed: Make sure you have Windows Subsystem for Linux installed on your Windows machine. You can install it by following the official Microsoft documentation. WSL 2: It is recommended to use WSL 2 for better performance and compatibility. WSL Distribution: Have a Linux distribution installed within your WSL environment. You can choose from various distributions available on the Microsoft Store or from other sources. ","date":"25-01-2022","objectID":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/:1:1","tags":null,"title":"Cant Open Port From Localhost To WSL Port On Windows","uri":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Installing WSLHostPatcher To get started, follow these steps to install WSLHostPatcher: Download WSLHostPatcher: You can download the latest release of WSLHostPatcher from its GitHub repository: WSLHostPatcher. Extract the ZIP File: After downloading, extract the contents of the ZIP file to a directory of your choice. Run as Administrator: Right-click on the WSLHostPatcher.exe file and select \u0026ldquo;Run as administrator\u0026rdquo; to ensure it has the necessary permissions. 
","date":"25-01-2022","objectID":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/:1:2","tags":null,"title":"Cant Open Port From Localhost To WSL Port On Windows","uri":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/#installing-wslhostpatcher"},{"categories":["Development"],"collections":null,"content":"Using WSLHostPatcher to Open Ports Now that you have WSLHostPatcher installed, follow these steps to open a port from localhost to your WSL instance: Launch WSL: Open your WSL distribution by running wsl from the command prompt or PowerShell. Find the IP Address: Inside your WSL instance, you need to find the IP address it\u0026rsquo;s using. You can do this by running the following command within your WSL terminal: ip addr show eth0 | grep inet | awk \u0026#39;{ print $2; }\u0026#39; | sed \u0026#39;s/\\/.*$//\u0026#39; This command will display the IP address of your WSL instance. Make note of it. Run WSLHostPatcher: In your Windows environment, run WSLHostPatcher.exe as an administrator. It will open a graphical user interface (GUI). Add a New Port Mapping: Click the \u0026ldquo;Add New Mapping\u0026rdquo; button in the WSLHostPatcher GUI. Enter the following information: Name: A descriptive name for your port mapping. Windows Port: The port on your Windows localhost that you want to open. WSL IP Address: The IP address you obtained from step 2. WSL Port: The port you want to forward from Windows to WSL. Apply the Mapping: Click the \u0026ldquo;Apply\u0026rdquo; button to save the port mapping. Start the Service: You can start or stop the port forwarding service as needed using the buttons in the WSLHostPatcher GUI. 
","date":"25-01-2022","objectID":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/:1:3","tags":null,"title":"Cant Open Port From Localhost To WSL Port On Windows","uri":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/#using-wslhostpatcher-to-open-ports"},{"categories":["Development"],"collections":null,"content":"Testing the Port Forwarding To test whether the port forwarding is working correctly, you can open a web browser or use a tool like curl from your Windows environment to access the service running within your WSL instance using the localhost address and the Windows port you specified. For example, if you forwarded port 80, you can access a web server running in your WSL instance by opening a browser and entering http://localhost:80. With WSLHostPatcher, you can easily manage port forwarding between your Windows host and your WSL instance, making it convenient to access services and applications running in your Linux environment from your Windows machine. Remember to keep the tool up-to-date by checking the official GitHub repository for any updates or improvements. Please note that port forwarding can pose security risks, so use it responsibly and only for trusted purposes. ","date":"25-01-2022","objectID":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/:1:4","tags":null,"title":"Cant Open Port From Localhost To WSL Port On Windows","uri":"/posts/development/cant-open-port-from-localhost-to-wsl-port-on-windows/#testing-the-port-forwarding"},{"categories":["Development"],"collections":null,"content":"In Bootstrap 5, when you try to add data-bs-toggle and data-bs-target attributes to a link (\u0026lt;a\u0026gt; element) within a navigation bar (\u0026lt;nav\u0026gt; element), you might encounter an issue where the link doesn\u0026rsquo;t work as expected. This can be frustrating, but it can be resolved with a few adjustments to your code. 
The data-bs-toggle and data-bs-target attributes are typically used for toggling Bootstrap components like dropdown menus and collapsible elements. When you add these attributes to a link, they can interfere with the link\u0026rsquo;s default behavior, preventing it from functioning correctly. To maintain the link\u0026rsquo;s functionality while also using data-bs-toggle and data-bs-target, you should consider the following approach: \u0026lt;nav class=\u0026#34;navbar navbar-expand-lg\u0026#34;\u0026gt; \u0026lt;a class=\u0026#34;nav-link\u0026#34; href=\u0026#34;#home\u0026#34;\u0026gt; Home \u0026lt;/a\u0026gt; \u0026lt;button class=\u0026#34;navbar-toggler\u0026#34; type=\u0026#34;button\u0026#34; data-bs-toggle=\u0026#34;collapse\u0026#34; data-bs-target=\u0026#34;.navbar-collapse.show\u0026#34;\u0026gt; Toggle Navbar \u0026lt;/button\u0026gt; \u0026lt;!-- Additional navbar contents go here --\u0026gt; \u0026lt;/nav\u0026gt; In this modified code: The link (\u0026lt;a\u0026gt;) is kept simple without any additional attributes that might interfere with its default behavior. The toggle button for the navigation bar is provided separately as a \u0026lt;button\u0026gt; element with the data-bs-toggle and data-bs-target attributes. This button is responsible for showing and hiding the navigation bar when clicked. By separating the link and the toggle button, you ensure that the link works as expected, and the toggle button controls the visibility of the navigation bar. This approach allows you to maintain both the link functionality and the collapsible navigation bar feature provided by Bootstrap 5. Remember to adjust the code according to your specific layout and styling requirements within your navigation bar. 
","date":"23-01-2022","objectID":"/posts/development/adding-data-bs-toggle-and-data-bs-target-breaks-link-in-bootstrap-5/:0:0","tags":null,"title":"Adding `data-bs-toggle` and `data-bs-target` Breaks Link in Bootstrap 5","uri":"/posts/development/adding-data-bs-toggle-and-data-bs-target-breaks-link-in-bootstrap-5/#"},{"categories":["Development"],"collections":null,"content":"Introduction Postman is a popular API testing tool that allows developers to automate and validate API responses efficiently. This guide provides examples of how to use variables, write tests, handle error responses, and perform asynchronous requests in Postman. ","date":"21-01-2022","objectID":"/posts/development/postman-example/:1:0","tags":["postman","json"],"title":"Postman Example","uri":"/posts/development/postman-example/#introduction"},{"categories":["Development"],"collections":null,"content":"Using Variables in Postman Variables in Postman help create dynamic values for API testing. The following example sets two variables: name and description using random data. pm.variables.set(\u0026#39;name\u0026#39;, pm.variables.replaceIn(\u0026#39;{{$randomLoremSlug}}\u0026#39;)) pm.variables.set(\u0026#39;description\u0026#39;, pm.variables.replaceIn(\u0026#39;{{$randomLoremSentence}}\u0026#39;)) ","date":"21-01-2022","objectID":"/posts/development/postman-example/:2:0","tags":["postman","json"],"title":"Postman Example","uri":"/posts/development/postman-example/#using-variables-in-postman"},{"categories":["Development"],"collections":null,"content":"Writing Tests in Postman Tests in Postman are JavaScript snippets that validate API responses. The following sections outline different test cases. 
","date":"21-01-2022","objectID":"/posts/development/postman-example/:3:0","tags":["postman","json"],"title":"Postman Example","uri":"/posts/development/postman-example/#writing-tests-in-postman"},{"categories":["Development"],"collections":null,"content":"OK Response Tests pm.test(\u0026#34;Status code is 200\u0026#34;, () =\u0026gt; { pm.response.to.have.status(200) }) pm.test(\u0026#34;Status code is not 200\u0026#34;, () =\u0026gt; { pm.expect(pm.response.code).to.not.eql(200) }) pm.test(\u0026#39;Response body is empty\u0026#39;, () =\u0026gt; { pm.response.to.have.body(\u0026#39;\u0026#39;) }) pm.test(\u0026#39;Status code is 404\u0026#39;, () =\u0026gt; { pm.expect(pm.response.code).to.eql(404) }) let jsonData = pm.response.json() pm.test(\u0026#34;Has name property\u0026#34;, function () { pm.expect(jsonData).to.have.property(\u0026#39;name\u0026#39;, pm.variables.get(\u0026#39;update_name\u0026#39;)) }) pm.expect(jsonData).to.be.an(\u0026#39;array\u0026#39;) jsonData.forEach(link =\u0026gt; { pm.expect(link).to.have.keys(\u0026#39;id\u0026#39;, \u0026#39;info_id\u0026#39;, \u0026#39;name\u0026#39;, \u0026#39;url\u0026#39;).and.be.an(\u0026#39;object\u0026#39;) }) pm.expect(jsonData[0]).to.have.property(\u0026#39;name\u0026#39;, pm.variables.get(\u0026#39;new_link_name\u0026#39;)) ","date":"21-01-2022","objectID":"/posts/development/postman-example/:3:1","tags":["postman","json"],"title":"Postman Example","uri":"/posts/development/postman-example/#ok-response-tests"},{"categories":["Development"],"collections":null,"content":"Error Response Tests pm.test(\u0026#34;Status code is 400\u0026#34;, () =\u0026gt; { pm.response.to.have.status(400) }) let jsonData = pm.response.json() pm.test(\u0026#34;Has error property\u0026#34;, function () { pm.expect(jsonData).to.have.property(\u0026#39;error\u0026#39;) }) let jsonErrorData = jsonData.error pm.test(\u0026#34;Has status property\u0026#34;, function () { pm.expect(jsonErrorData).to.have.property(\u0026#39;status\u0026#39;, 400) }) pm.test(\u0026#34;Has name
property\u0026#34;, function () { pm.expect(jsonErrorData).to.have.property(\u0026#39;name\u0026#39;, \u0026#34;ApplicationError\u0026#34;) }) pm.test(\u0026#34;Has message property\u0026#34;, function () { pm.expect(jsonErrorData).to.have.property(\u0026#39;message\u0026#39;, \u0026#34;Name exist\u0026#34;) }) ","date":"21-01-2022","objectID":"/posts/development/postman-example/:3:2","tags":["postman","json"],"title":"Postman Example","uri":"/posts/development/postman-example/#error-response-tests"},{"categories":["Development"],"collections":null,"content":"Performing Asynchronous Requests Postman supports asynchronous requests using callback functions. The following example demonstrates an async request for logging in and finding a user by email. const sendLoginRequest = (cb) =\u0026gt; { const loginOptions = { url: baseUrl + \u0026#39;/auth/login\u0026#39;, method: \u0026#39;POST\u0026#39;, header: { \u0026#39;content-type\u0026#39;: \u0026#39;application/json\u0026#39; }, body: { mode: \u0026#39;raw\u0026#39;, raw: JSON.stringify({ \u0026#39;email\u0026#39;: pm.variables.get(\u0026#39;email\u0026#39;), \u0026#39;password\u0026#39;: pm.variables.get(\u0026#39;password\u0026#39;).toString(), }) } } pm.sendRequest(loginOptions, (err, res) =\u0026gt; { data = res.json() cb(err, res) }) } asyncSeries([ (cb, res) =\u0026gt; sendLoginRequest(cb, res), (cb, res) =\u0026gt; sendFindUserByEmailRequest(cb, res), ]) ","date":"21-01-2022","objectID":"/posts/development/postman-example/:4:0","tags":["postman","json"],"title":"Postman Example","uri":"/posts/development/postman-example/#performing-asynchronous-requests"},{"categories":["Development"],"collections":null,"content":"Conclusion Postman provides a powerful framework for API testing, allowing developers to automate requests, validate responses, and handle errors effectively. By leveraging variables, test scripts, and asynchronous functions, developers can ensure API reliability and performance. 
","date":"21-01-2022","objectID":"/posts/development/postman-example/:5:0","tags":["postman","json"],"title":"Postman Example","uri":"/posts/development/postman-example/#conclusion"},{"categories":["Development"],"collections":null,"content":"You want to implement a collapsible navbar in a React application using Bootstrap 5, and you\u0026rsquo;re encountering an issue where the navbar doesn\u0026rsquo;t collapse when a navigation link is clicked. This issue can occur due to various reasons, and I\u0026rsquo;ll provide some steps to help you troubleshoot and potentially fix the problem. ","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:0:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#"},{"categories":["Development"],"collections":null,"content":"1. Check Bootstrap Installation Ensure that you have Bootstrap 5 properly installed and included in your project. You should have Bootstrap CSS and JavaScript files included in your HTML or imported in your JavaScript file. 
\u0026lt;!-- Example: Include Bootstrap CSS in your HTML --\u0026gt; \u0026lt;link href=\u0026#34;https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css\u0026#34; rel=\u0026#34;stylesheet\u0026#34;\u0026gt; \u0026lt;!-- Include Bootstrap JavaScript in your HTML --\u0026gt; \u0026lt;script src=\u0026#34;https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/js/bootstrap.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; ","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:1:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#1-check-bootstrap-installation"},{"categories":["Development"],"collections":null,"content":"2. Verify Navbar Structure Check if your Navbar structure is correctly set up with the appropriate classes. In Bootstrap 5, the navbar should have specific classes like navbar, navbar-expand-lg, and navbar-light to make it collapsible. \u0026lt;nav className=\u0026#34;navbar navbar-expand-lg navbar-light bg-light\u0026#34;\u0026gt; {/* Navbar content here */} \u0026lt;/nav\u0026gt; ","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:2:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#2-verify-navbar-structure"},{"categories":["Development"],"collections":null,"content":"3. Ensure React Router Setup Make sure that React Router is correctly set up in your application. Ensure that the useLocation hook and routing components are used correctly. 
","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:3:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#3-ensure-react-router-setup"},{"categories":["Development"],"collections":null,"content":"4. Check Event Handlers In your provided code, you have defined handleNavCollapse and handleToggleNavCollapse functions. Ensure that these functions are working as expected. You can add console.log statements inside them to see if they are being called when the button and links are clicked. const handleNavCollapse = () =\u0026gt; { console.log(\u0026#39;Nav link clicked\u0026#39;); setNavCollapsed(true); } const handleToggleNavCollapse = () =\u0026gt; { console.log(\u0026#39;Toggle button clicked\u0026#39;); setNavCollapsed(!navCollapsed); } ","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:4:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#4-check-event-handlers"},{"categories":["Development"],"collections":null,"content":"5. Confirm navCollapsed State Check the state variable navCollapsed to see if it is properly changing its value when the toggle button is clicked. Ensure that this state variable is being used to conditionally render the collapsed or expanded navbar. 
","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:5:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#5-confirm-navcollapsed-state"},{"categories":["Development"],"collections":null,"content":"6. Debug in Browser Developer Tools Use your browser\u0026rsquo;s developer tools (e.g., Chrome DevTools) to inspect the elements and console for any errors or unexpected behavior. This can help you identify any issues with CSS classes, event handlers, or JavaScript errors. ","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:6:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#6-debug-in-browser-developer-tools"},{"categories":["Development"],"collections":null,"content":"7. Isolate the Issue If the problem persists, try to isolate the issue by creating a minimal example. Create a new React component with a simplified Navbar and routing setup to see if the problem still occurs. This can help identify if the issue is specific to this component or a more general problem in your application. By following these steps and debugging your code, you should be able to identify and resolve the issue with your Bootstrap 5 collapsible navbar in your React application. 
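Putting the checks together: the collapse behavior reduces to a small piece of state logic that can be reasoned about without React at all. A framework-free sketch (the names navbarClass and makeNavState are illustrative, not from the original component):

```javascript
// Derive the Bootstrap collapse classes from a single boolean flag:
// when navCollapsed is true the menu is hidden, otherwise 'show' is appended.
function navbarClass(navCollapsed) {
  return navCollapsed ? 'collapse navbar-collapse' : 'collapse navbar-collapse show';
}

// Minimal stand-in for the component state: the toggler button flips the
// flag (handleToggleNavCollapse) and every nav-link click resets it to true
// (handleNavCollapse), which is what makes the menu close after navigation.
function makeNavState() {
  let navCollapsed = true;
  return {
    toggle() { navCollapsed = !navCollapsed; return navbarClass(navCollapsed); },
    closeOnNavClick() { navCollapsed = true; return navbarClass(navCollapsed); },
  };
}
```

If the menu stays open after a link click, the usual culprit is that the nav links never call the close handler, so the flag, and therefore the className, never changes.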
","date":"19-01-2022","objectID":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/:7:0","tags":null,"title":"Bootstrap 5 Wont Collapse The Navbar When Nav Link Clicked On React","uri":"/posts/development/bootstrap-5-wont-collapse-the-navbar-when-nav-link-clicked-on-react/#7-isolate-the-issue"},{"categories":["Development"],"collections":null,"content":"It appears that you are trying to use React Router to navigate to another component and pass state information to that component. However, you\u0026rsquo;ve noticed that the target component is not refreshing when you navigate to it. This behavior is actually by design in React Router. When you navigate to a route using \u0026lt;Link\u0026gt; or navigate, React Router typically does not unmount and remount the component by default. Instead, it updates the URL and re-renders the current component with the new route\u0026rsquo;s props and state. If you want the target component to refresh when navigating, you can achieve this by using a combination of the key prop and a state management library like Redux or React\u0026rsquo;s built-in context and state management. Here\u0026rsquo;s how you can do it: Create a Redux Store (Optional): If you\u0026rsquo;re not already using Redux or another state management library, you can set up Redux to manage your application\u0026rsquo;s state. If you prefer not to use Redux, you can use React\u0026rsquo;s built-in context and state management, but Redux provides a more structured way to manage global state. Dispatch an Action When Navigating: When you navigate, dispatch an action that updates the Redux store or the context state. 
For example: // Redux example import { useDispatch } from \u0026#39;react-redux\u0026#39;; import { updateState } from \u0026#39;./actions\u0026#39;; // Inside your component const dispatch = useDispatch(); const handleNavigation = () =\u0026gt; { dispatch(updateState({ from: \u0026#39;Login\u0026#39; })); navigate(\u0026#39;/\u0026#39;); }; Use the key Prop: In your target component, use the key prop to force it to remount when the state changes. You can set the key prop to a value that changes when the state updates. For example: // Inside your target component const location = useLocation(); const key = location.state.from; // Use a value that changes when the state changes return ( \u0026lt;div key={key}\u0026gt; {/* Your component content */} \u0026lt;/div\u0026gt; ); By setting a different key value when the state changes, React will treat the component as a new instance and remount it, causing it to refresh with the updated state. This approach allows you to refresh the target component when navigating and passing state information to it. Depending on your project setup and preferences, you can choose between Redux or React\u0026rsquo;s built-in context and state management for managing the state that triggers the component refresh. ","date":"19-01-2022","objectID":"/posts/development/react-link-to-wont-refresh-target-component/:0:0","tags":null,"title":"React Link To Wont Refresh Target Component","uri":"/posts/development/react-link-to-wont-refresh-target-component/#"},{"categories":["Development"],"collections":null,"content":"If you want to beautify or format a script in Postman, you can follow these simple steps to make your code more organized and readable. Beautifying or formatting your script can help improve code maintainability and collaboration with team members. Open Postman: First, make sure you have Postman installed on your computer. If you don\u0026rsquo;t have it, you can download it from the official Postman website. 
Create or Open a Request: Open Postman and create a new request or open an existing one that contains the script you want to beautify/format. Navigate to the Script Section: In the Postman request, click on the \u0026ldquo;Tests\u0026rdquo; tab to access the script section where your code is located. This is where you will beautify your script. Select the Script: Click inside the script editor to place your cursor within the script code. Beautify/Format the Script: Option 1: Using Keyboard Shortcut (Windows/Linux): Press CTRL + B on your keyboard. This keyboard shortcut will automatically format your script to make it more readable. Postman will apply consistent indentation and line breaks to organize your code. Option 2: Using Menu Options (Windows/Linux/Mac): If the keyboard shortcut doesn\u0026rsquo;t work or you prefer using menus: Click on the \u0026ldquo;Code\u0026rdquo; menu in the script editor. Select the \u0026ldquo;Reformat Code\u0026rdquo; option. This will achieve the same result as the keyboard shortcut, formatting your script for better readability. Review the Beautified Script: After applying the formatting, take a moment to review the script. Ensure that it\u0026rsquo;s now well-organized with proper indentation and line breaks. This makes it easier to read and understand. Save Your Changes: If you\u0026rsquo;re satisfied with the beautified script, remember to save your changes by clicking the \u0026ldquo;Save\u0026rdquo; button in the Postman request editor. By following these steps, you can easily beautify or format your scripts within Postman, making your code more visually appealing and easier to work with. This is particularly useful when dealing with complex scripts or collaborating with team members on API testing and automation projects. 
","date":"17-01-2022","objectID":"/posts/development/how-to-beautify-format-script-on-postman/:0:0","tags":null,"title":"How to Beautify / Format Script on Postman","uri":"/posts/development/how-to-beautify-format-script-on-postman/#"},{"categories":["Development"],"collections":null,"content":"If you are experiencing issues where Chrome bookmarks are creating duplicates when syncing with multiple Windows machines, especially when both machines have iCloud extensions enabled, you are not alone. This can be a common problem for users who rely on iCloud to keep their bookmarks in sync across devices. However, there is a solution to this issue. ","date":"15-01-2022","objectID":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/:0:0","tags":null,"title":"Chrome Bookmarks Create Duplicate Item When Sync With Two Windows…","uri":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/#"},{"categories":["Development"],"collections":null,"content":"Problem Description The problem occurs when you have two or more Windows machines running Google Chrome, and both machines have the iCloud extension enabled. When you create a new bookmark on one machine, it may get duplicated on the other machines, resulting in cluttered and redundant bookmarks. 
","date":"15-01-2022","objectID":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/:1:0","tags":null,"title":"Chrome Bookmarks Create Duplicate Item When Sync With Two Windows…","uri":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/#problem-description"},{"categories":["Development"],"collections":null,"content":"Solution To resolve this issue and prevent the creation of duplicate bookmarks, follow these steps: ","date":"15-01-2022","objectID":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/:2:0","tags":null,"title":"Chrome Bookmarks Create Duplicate Item When Sync With Two Windows…","uri":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/#solution"},{"categories":["Development"],"collections":null,"content":"1. Disable Extensions Sync Google Chrome allows you to sync your extensions across devices. However, in some cases, this can lead to conflicts and duplication of bookmarks. To disable extensions sync, follow these steps: Open Google Chrome on one of your Windows machines. Click on the three dots (ellipsis) in the upper-right corner of the browser window to open the menu. Go to \u0026ldquo;Settings.\u0026rdquo; In the left sidebar, click on \u0026ldquo;Sync and Google services.\u0026rdquo; Under the \u0026ldquo;Sync\u0026rdquo; section, find \u0026ldquo;Sync extensions\u0026rdquo; and toggle it off. 
","date":"15-01-2022","objectID":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/:2:1","tags":null,"title":"Chrome Bookmarks Create Duplicate Item When Sync With Two Windows…","uri":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/#1-disable-extensions-sync"},{"categories":["Development"],"collections":null,"content":"2. Disable iCloud Extension on One of the Machines Since the issue seems to arise when both Windows machines have the iCloud extension enabled, you can prevent duplicate bookmarks by disabling the iCloud extension on one of the machines. Follow these steps: Open Google Chrome on one of your Windows machines. Click on the three dots (ellipsis) in the upper-right corner of the browser window to open the menu. Go to \u0026ldquo;Extensions.\u0026rdquo; Locate the iCloud extension and click on the toggle switch to disable it. By disabling the iCloud extension on one of the machines, you should prevent the duplication of bookmarks when syncing with iCloud. ","date":"15-01-2022","objectID":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/:2:2","tags":null,"title":"Chrome Bookmarks Create Duplicate Item When Sync With Two Windows…","uri":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/#2-disable-icloud-extension-on-one-of-the-machines"},{"categories":["Development"],"collections":null,"content":"Conclusion Syncing bookmarks across multiple Windows machines using Google Chrome and iCloud extensions can sometimes lead to the creation of duplicate items. To resolve this issue, you can disable extensions sync in Chrome settings and disable the iCloud extension on one of the machines. 
This will help ensure that your bookmarks stay organized and free from duplicates. ","date":"15-01-2022","objectID":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/:3:0","tags":null,"title":"Chrome Bookmarks Create Duplicate Item When Sync With Two Windows…","uri":"/posts/development/chrome-bookmarks-create-duplicate-items-when-syncing-with-multiple-windows-machines-with-icloud-extensions-enabled/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"In today\u0026rsquo;s digital landscape, having a strong online presence is crucial for businesses of all sizes. With the increasing competition on social media platforms and search engines, it\u0026rsquo;s essential to have a solid understanding of how to optimize your online presence to reach your target audience effectively. In this article, we\u0026rsquo;ll delve into the world of Search Engine Optimization (SEO), Online Ads/Search Engine Marketing (SEM), Social Media Engagement (SME), and Social Media Management. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:0:0","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#"},{"categories":["Productivity"],"collections":null,"content":"Search Engine Optimization (SEO) To start, let\u0026rsquo;s talk about SEO. This process involves optimizing your website to rank higher in search engine results pages (SERPs) for specific keywords and phrases. By doing so, you\u0026rsquo;ll increase the chances of your website being found by potential customers who are searching for products or services like yours. 
","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:1:0","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#search-engine-optimization-seo"},{"categories":["Productivity"],"collections":null,"content":"The Three Main Components of SEO Keyword Research: Identifying relevant keywords and phrases that your target audience is using to search for products or services like yours. Page Optimization: Optimizing individual web pages to include targeted keywords, meta tags, and header tags. Blog Optimization: Creating high-quality blog content that includes targeted keywords and phrases to attract organic traffic. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:1:1","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#the-three-main-components-of-seo"},{"categories":["Productivity"],"collections":null,"content":"Online Ads/Search Engine Marketing (SEM) Next, let\u0026rsquo;s explore SEM, which involves using paid advertising platforms like Google AdWords, Facebook Ads, Instagram Ads, YouTube Ads, Twitter Ads, LinkedIn Ads, and TikTok Ads to reach your target audience. By targeting specific keywords and demographics, you can increase brand awareness, drive website traffic, and generate leads. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:2:0","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#online-adssearch-engine-marketing-sem"},{"categories":["Productivity"],"collections":null,"content":"Popular SEM Platforms Google Adwords: A pay-per-click (PPC) advertising platform that allows you to bid on specific keywords and target specific audiences. Facebook Ads: A social media advertising platform that allows you to target specific demographics, interests, and behaviors. 
Instagram Ads: A visual-based advertising platform that allows you to target specific audiences with high-quality images and videos. YouTube Ads: A video-based advertising platform that allows you to target specific audiences with video ads. Twitter Ads: A social media advertising platform that allows you to target specific audiences with short-form text-based ads. LinkedIn Ads: A professional networking site that allows you to target specific professionals and businesses with sponsored content and display ads. TikTok Ads: A social media advertising platform that allows you to target specific audiences with short-form video ads. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:2:1","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#popular-sem-platforms"},{"categories":["Productivity"],"collections":null,"content":"Social Media Engagement (SME) In addition to SEO and SEM, it\u0026rsquo;s also essential to focus on Social Media Engagement (SME). This involves creating engaging content and interacting with your audience across various social media platforms like Instagram, Facebook, YouTube, Twitter, and LinkedIn. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:3:0","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#social-media-engagement-sme"},{"categories":["Productivity"],"collections":null,"content":"Important SME Strategies Post Frequency: Posting regular updates to keep your audience engaged. Content Creation: Creating high-quality, relevant, and valuable content that resonates with your audience. Instagram Ads Campaign: Running targeted ads campaigns on Instagram to reach new audiences and drive conversions. 
","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:3:1","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#important-sme-strategies"},{"categories":["Productivity"],"collections":null,"content":"Social Media Management Finally, let\u0026rsquo;s talk about Social Media Management. This involves creating a comprehensive social media strategy that includes content creation, posting schedules, engagement tactics, and analytics reporting. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:4:0","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#social-media-management"},{"categories":["Productivity"],"collections":null,"content":"Important Social Media Management Strategies Post Scheduling: Scheduling posts in advance to ensure consistent posting. Content Creation: Creating high-quality content that resonates with your audience. Instagram Ads Campaign: Running targeted ads campaigns on Instagram to reach new audiences and drive conversions. Likes and Post Engagement: Encouraging likes, comments, and shares by creating engaging content and interacting with your audience. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:4:1","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#important-social-media-management-strategies"},{"categories":["Productivity"],"collections":null,"content":"Reporting and Analytics To measure the success of your online marketing efforts, it\u0026rsquo;s essential to track key performance indicators (KPIs) like website traffic, conversion rates, and return on investment (ROI). 
Some of the most popular reporting tools include: ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:5:0","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#reporting-and-analytics"},{"categories":["Productivity"],"collections":null,"content":"Popular Reporting Tools Google Analytics: A web analytics platform that provides insights into website traffic and behavior. Google PageSpeed Insight: A tool that analyzes page speed and provides recommendations for improvement. Google Business: A platform that allows you to manage your online presence across various Google platforms. Google Search: A search engine that allows users to find information online. In conclusion, having a strong online presence requires a comprehensive approach that includes SEO, SEM, SME, and social media management. By optimizing your website for search engines, running targeted ads campaigns, engaging with your audience on social media, and tracking key performance indicators, you\u0026rsquo;ll be well on your way to unlocking online success. ","date":"14-01-2022","objectID":"/posts/productivity/guide-to-digital-marketing/:5:1","tags":null,"title":"Guide to Digital Marketing","uri":"/posts/productivity/guide-to-digital-marketing/#popular-reporting-tools"},{"categories":["Development"],"collections":null,"content":"You want to fix issues with ESLint configurations when working with Jest test files in a Create React App project within Visual Studio Code. This problem can occur due to the way ESLint and Jest interact in some setups. To resolve this, you can follow these steps: Install ESLint Jest Plugin: First, make sure you have the ESLint Jest plugin installed in your project. You can install it using npm or yarn: npm install eslint-plugin-jest --save-dev # or yarn add eslint-plugin-jest --dev Update ESLint Configuration: Update your ESLint configuration to include the \u0026ldquo;jest\u0026rdquo; environment. 
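In a Create React App project, that environment is typically declared under the eslintConfig key of package.json; a minimal sketch of that form (a config fragment, mirroring the .eslintrc.js variant shown later):

```json
{
  "eslintConfig": {
    "env": {
      "jest": true
    },
    "plugins": ["jest"]
  }
}
```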
You can do this under the eslintConfig key in your package.json or in your .eslintrc file if you have one. For example, in a .eslintrc.js file: module.exports = { env: { jest: true, }, plugins: [\u0026#34;jest\u0026#34;], // ... other ESLint configurations }; Configure Jest Environment in VS Code: Sometimes, Visual Studio Code doesn\u0026rsquo;t automatically pick up the environment set in your ESLint configuration. To ensure that VS Code recognizes the Jest environment, you can add a configuration in your project\u0026rsquo;s .vscode/settings.json file: Make sure to restart VS Code after adding or modifying this file to ensure the changes take effect. Install VS Code Extensions: Ensure you have the ESLint and Jest extensions installed in Visual Studio Code. You can search for these extensions in the VS Code marketplace and install them. Reload Window: After making these changes, reload the VS Code window to ensure that the new ESLint and Jest configurations are applied. With these steps, you should be able to work with Jest test files in your Create React App project without ESLint errors in Visual Studio Code. Remember to adjust the configuration files according to your project\u0026rsquo;s needs if you have custom ESLint or Jest configurations. ","date":"14-01-2022","objectID":"/posts/development/standardjs-jest-test-files-show-errors-on-vscode-with-create-create-app/:0:0","tags":null,"title":"Standardjs Jest Test Files Show Errors On Vscode With Create React App","uri":"/posts/development/standardjs-jest-test-files-show-errors-on-vscode-with-create-create-app/#"},{"categories":["Development"],"collections":null,"content":"In React, you can conditionally add an \u0026ldquo;active\u0026rdquo; class to an element based on certain variables using conditional rendering. 
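That conditional class logic boils down to a string computed with a ternary. A minimal standalone sketch (the function and parameter names are illustrative, not from the original component):

```javascript
// Sketch: compute the className for a list item.
// itemClass and its parameters are illustrative names.
function itemClass(invoiceId, invoiceNumber) {
  // Add "active" only when the two values match; trim the trailing
  // space the template literal leaves when the class is omitted.
  return `list-group-item list-group-item-action ${
    invoiceId === invoiceNumber ? 'active' : ''
  }`.trim();
}

console.log(itemClass(42, 42)); // "list-group-item list-group-item-action active"
```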
In your code snippet, it appears that you want to add the \u0026ldquo;active\u0026rdquo; class to a list-group-item element when the invoiceId is equal to invoice.number. Here\u0026rsquo;s how you can achieve this in React: import React from \u0026#39;react\u0026#39;; function YourComponent({ invoiceId, invoice }) { return ( \u0026lt;div\u0026gt; {/* ... Other JSX code ... */} \u0026lt;div className={`list-group-item list-group-item-action ${invoiceId === invoice.number ? \u0026#39;active\u0026#39; : \u0026#39;\u0026#39;}`}\u0026gt; {/* Your content for the list item */} \u0026lt;/div\u0026gt; {/* ... Other JSX code ... */} \u0026lt;/div\u0026gt; ); } export default YourComponent; In the code above: We assume you have a component called YourComponent that takes invoiceId and invoice as props. Inside the return statement, we use a template literal to create the className dynamically for the list-group-item element. We use a conditional (ternary) operator to check if invoiceId is equal to invoice.number. If they are equal, the \u0026lsquo;active\u0026rsquo; class is added; otherwise, an empty string is added. This approach ensures that the \u0026ldquo;active\u0026rdquo; class is only added to the list-group-item when the condition invoiceId === invoice.number is met. If the condition is not met, no additional classes will be added. Make sure to replace YourComponent with the actual name of your component and ensure that invoiceId and invoice.number are correctly passed as props to your component. ","date":"09-01-2022","objectID":"/posts/development/add-active-class-when-specific-variables-equal-on-react/:0:0","tags":null,"title":"Add Active Class When Specific Variables Equal On React","uri":"/posts/development/add-active-class-when-specific-variables-equal-on-react/#"},{"categories":["Development"],"collections":null,"content":"In this tutorial, we will walk you through the process of bundling Bootstrap 5 with Webpack, a popular JavaScript module bundler. 
By bundling Bootstrap with Webpack, you can efficiently manage and optimize your project\u0026rsquo;s JavaScript and CSS assets. Before we begin, make sure you have Node.js and npm (Node Package Manager) installed on your system. You will also need a basic understanding of using npm and Webpack in your project. ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:0:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Create a New Project If you haven\u0026rsquo;t already, create a new project directory and navigate to it in your terminal. You can do this with the following commands: mkdir my-bootstrap-webpack-project cd my-bootstrap-webpack-project ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:1:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-1-create-a-new-project"},{"categories":["Development"],"collections":null,"content":"Step 2: Initialize npm To manage project dependencies, we\u0026rsquo;ll use npm. Initialize your project by running: npm init -y This command will create a package.json file with default settings. ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:2:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-2-initialize-npm"},{"categories":["Development"],"collections":null,"content":"Step 3: Install Bootstrap Next, you\u0026rsquo;ll need to install Bootstrap as a project dependency. We\u0026rsquo;ll also install bootstrap-icons for additional icons. 
Run the following commands: npm install bootstrap bootstrap-icons ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:3:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-3-install-bootstrap"},{"categories":["Development"],"collections":null,"content":"Step 4: Create Your Webpack Configuration Create a webpack.config.js file in your project root directory if you haven\u0026rsquo;t already. This file will define your Webpack configuration. Here\u0026rsquo;s a basic example: const path = require(\u0026#39;path\u0026#39;); module.exports = { entry: \u0026#39;./src/js/main.js\u0026#39;, // Your main JavaScript file output: { filename: \u0026#39;bundle.js\u0026#39;, // The output bundle file name path: path.resolve(__dirname, \u0026#39;dist\u0026#39;, \u0026#39;assets\u0026#39;, \u0026#39;js\u0026#39;), // Output directory }, module: { rules: [ // Add rules for JavaScript files here, if needed ], }, }; This is a minimal Webpack configuration. You can add more loaders and plugins based on your project\u0026rsquo;s requirements, such as Babel for transpilation. ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:4:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-4-create-your-webpack-configuration"},{"categories":["Development"],"collections":null,"content":"Step 5: Import Bootstrap in Your JavaScript In your main.js or the entry point of your JavaScript application, import Bootstrap and any Bootstrap components you need. 
For example: import \u0026#39;bootstrap/dist/css/bootstrap.min.css\u0026#39;; // Import Bootstrap CSS import \u0026#39;bootstrap\u0026#39;; // Import Bootstrap JavaScript import \u0026#39;bootstrap-icons/font/bootstrap-icons.css\u0026#39;; // Import Bootstrap Icons CSS Make sure to adjust the paths according to the location of your node_modules folder. ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:5:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-5-import-bootstrap-in-your-javascript"},{"categories":["Development"],"collections":null,"content":"Step 6: Build Your Project Now, you can build your project using Webpack. Run the following command: npx webpack This command will bundle your JavaScript and assets according to the configuration in your webpack.config.js. ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:6:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-6-build-your-project"},{"categories":["Development"],"collections":null,"content":"Step 7: Include the Bundle in Your HTML In your HTML file (e.g., index.html), include the generated bundle file: \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;!-- Your other head elements --\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;!-- Your HTML content --\u0026gt; \u0026lt;script src=\u0026#34;dist/assets/js/bundle.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Make sure to adjust the src attribute to match the path to your generated bundle file. 
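One caveat about the steps above: the minimal Step 4 configuration defines no loaders for CSS, so the CSS imports from Step 5 will fail the build. A sketch of the extra module rule, assuming you install the loaders first (npm install style-loader css-loader --save-dev):

```javascript
// Sketch: module rule that lets Webpack handle the CSS imports from Step 5.
// Assumes style-loader and css-loader are installed as devDependencies.
const cssRule = {
  test: /\.css$/i,                     // apply to .css files
  use: ['style-loader', 'css-loader'], // css-loader resolves imports, style-loader injects <style> tags
};

// Merge into webpack.config.js:
//   module: { rules: [cssRule] }
module.exports = cssRule;
```

Loader order matters: Webpack applies the `use` array right to left, so css-loader runs first and style-loader last.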
","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:7:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-7-include-the-bundle-in-your-html"},{"categories":["Development"],"collections":null,"content":"Step 8: Run Your Application You\u0026rsquo;re all set! You can now run your application using your preferred development server or by opening the HTML file in your browser. That\u0026rsquo;s it! You\u0026rsquo;ve successfully bundled Bootstrap 5 with Webpack in your project. You can expand on this foundation by adding more webpack loaders and plugins as needed for your project\u0026rsquo;s specific requirements. ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/:8:0","tags":null,"title":"How to Bundle Bootstrap 5 with Webpack","uri":"/posts/development/how-to-bundle-bootstrap-5-with-webpack/#step-8-run-your-application"},{"categories":["Development"],"collections":null,"content":"To bundle Bootstrap Icons that have Sass and font resources with Webpack in production mode, you\u0026rsquo;ll need to make a few modifications to your Webpack configuration. Here\u0026rsquo;s a step-by-step guide on how to achieve this: Install Dependencies: First, make sure you have the necessary dependencies installed. You\u0026rsquo;ll need sass-loader, css-loader, style-loader, postcss-loader, and resolve-url-loader to handle Sass and CSS files. You may also need the file-loader or url-loader for fonts and other assets. You can install these dependencies using npm or yarn: npm install sass-loader css-loader style-loader postcss-loader resolve-url-loader file-loader --save-dev Configure Webpack: Update your Webpack configuration to include the necessary loaders and plugins for processing Sass and font resources. 
Here\u0026rsquo;s an updated webpack.config.js: const path = require(\u0026#39;path\u0026#39;); const HtmlWebpackPlugin = require(\u0026#39;html-webpack-plugin\u0026#39;); const MiniCssExtractPlugin = require(\u0026#39;mini-css-extract-plugin\u0026#39;); module.exports = { entry: \u0026#39;./web/src/js/main.js\u0026#39;, output: { filename: \u0026#39;bundle.js\u0026#39;, path: path.resolve(__dirname, \u0026#39;web\u0026#39;, \u0026#39;static\u0026#39;, \u0026#39;assets\u0026#39;, \u0026#39;js\u0026#39;), }, mode: \u0026#39;production\u0026#39;, module: { rules: [ { test: /\\.(scss)$/, use: [ MiniCssExtractPlugin.loader, // Extract CSS to a separate file \u0026#39;css-loader\u0026#39;, \u0026#39;postcss-loader\u0026#39;, { loader: \u0026#39;sass-loader\u0026#39;, options: { sourceMap: true, // Enable source maps for debugging }, }, ], }, { test: /\\.(woff2?|ttf|eot|svg)(\\?v=\\d+\\.\\d+\\.\\d+)?$/, use: [ { loader: \u0026#39;file-loader\u0026#39;, options: { name: \u0026#39;[name].[ext]\u0026#39;, outputPath: \u0026#39;../fonts/\u0026#39;, // Adjust the output path as needed }, }, ], }, ], }, plugins: [ new HtmlWebpackPlugin({ template: \u0026#39;./web/src/index.html\u0026#39;, // Provide your HTML template file }), new MiniCssExtractPlugin({ filename: \u0026#39;../css/styles.css\u0026#39;, // Output path for CSS }), ], }; In this configuration: We\u0026rsquo;ve added the MiniCssExtractPlugin to extract the CSS into a separate file, which is a best practice for production builds. The file-loader is configured to handle font files and place them in the desired output path. The postcss-loader is used with autoprefixer to add necessary vendor prefixes to your CSS. Make sure to adjust the paths and filenames as needed to match your project structure. 
Install Additional Plugins: You\u0026rsquo;ll need to install the html-webpack-plugin and mini-css-extract-plugin if you haven\u0026rsquo;t already: npm install html-webpack-plugin mini-css-extract-plugin --save-dev Update HTML File: Ensure that your HTML file (specified in the HtmlWebpackPlugin) includes the CSS and JavaScript files correctly. For example: \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html lang=\u0026#34;en\u0026#34;\u0026gt; \u0026lt;head\u0026gt; \u0026lt;meta charset=\u0026#34;UTF-8\u0026#34;\u0026gt; \u0026lt;meta name=\u0026#34;viewport\u0026#34; content=\u0026#34;width=device-width, initial-scale=1.0\u0026#34;\u0026gt; \u0026lt;title\u0026gt;Your App\u0026lt;/title\u0026gt; \u0026lt;link rel=\u0026#34;stylesheet\u0026#34; href=\u0026#34;../css/styles.css\u0026#34;\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;!-- Your HTML content here --\u0026gt; \u0026lt;script src=\u0026#34;bundle.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; Build Your Project: Now, you can build your project in production mode: webpack --mode production This will generate a production-ready bundle with your Sass styles, fonts, and Bootstrap Icons properly bundled and optimized. Ensure that you\u0026rsquo;ve properly configured the paths and filenames according to your project structure, and your Bootstrap Icons should be bundled successfully with Webpack in production mode. ","date":"09-01-2022","objectID":"/posts/development/how-to-bundle-bootstrap-icons-that-have-sass-and-font-resources-with/:0:0","tags":null,"title":"How To Bundle Bootstrap-Icons That Have Sass And Font Resources With…","uri":"/posts/development/how-to-bundle-bootstrap-icons-that-have-sass-and-font-resources-with/#"},{"categories":["Development"],"collections":null,"content":"It appears that you want to remove the generation of license comments in your webpack bundle. 
To achieve this, you can set the extractComments option of the TerserPlugin to false. This will prevent the plugin from extracting and adding license comments to your bundle. Here\u0026rsquo;s how you can modify your webpack configuration: const path = require(\u0026#39;path\u0026#39;); const TerserPlugin = require(\u0026#39;terser-webpack-plugin\u0026#39;); module.exports = { optimization: { minimize: true, minimizer: [ new TerserPlugin({ extractComments: false, // Set this option to false }), ], }, entry: \u0026#39;./web/src/js/main.js\u0026#39;, output: { filename: \u0026#39;bundle.js\u0026#39;, path: path.resolve(__dirname, \u0026#39;web\u0026#39;, \u0026#39;static\u0026#39;, \u0026#39;assets\u0026#39;, \u0026#39;js\u0026#39;), } }; By setting extractComments to false, the TerserPlugin will not include license comments in your minified bundle. This should help you achieve your goal of removing the generated license file from your webpack output. ","date":"09-01-2022","objectID":"/posts/development/how-to-remove-generated-license-file-webpack/:0:0","tags":null,"title":"How To Remove generated license file webpack","uri":"/posts/development/how-to-remove-generated-license-file-webpack/#"},{"categories":["Development"],"collections":null,"content":"In the provided HTML code snippet, it appears to be a part of a Bootstrap navigation menu that is using Go\u0026rsquo;s HTML template syntax to conditionally add the \u0026ldquo;active\u0026rdquo; class to a navigation link based on a condition. In this case, it is checking whether the current page\u0026rsquo;s title is equal to \u0026ldquo;Home\u0026rdquo; and adding the \u0026ldquo;active\u0026rdquo; class if the condition is true. Let me break down the code for you: \u0026lt;li class=\u0026quot;nav-item\u0026quot;\u0026gt;: This is an HTML list item element with the class \u0026ldquo;nav-item.\u0026rdquo; It is typically used for a navigation menu item. 
\u0026lt;a class=\u0026quot;nav-link {{if (eq .title \u0026quot;Home\u0026quot;)}}active{{end}}\u0026quot; href=\u0026quot;/groups\u0026quot;\u0026gt;Groups\u0026lt;/a\u0026gt;: This is a hyperlink (\u0026lt;a\u0026gt;) element with the class \u0026ldquo;nav-link\u0026rdquo; for styling purposes. The {{if (eq .title \u0026quot;Home\u0026quot;)}}active{{end}} part is using Go\u0026rsquo;s template syntax to conditionally add the \u0026ldquo;active\u0026rdquo; class to the anchor element based on a condition. {{if (eq .title \u0026quot;Home\u0026quot;)}}: This part starts an if statement in Go\u0026rsquo;s template syntax. (eq .title \u0026quot;Home\u0026quot;): This is the condition being checked. It compares the value of .title (a variable or data available in the template) to the string \u0026ldquo;Home\u0026rdquo; using the eq function. If they are equal, the condition is true. active: If the condition is true, \u0026ldquo;active\u0026rdquo; is added to the class attribute of the anchor element. {{end}}: This marks the end of the if statement. So, if the value of .title is \u0026ldquo;Home,\u0026rdquo; the resulting HTML will look like this: \u0026lt;li class=\u0026#34;nav-item\u0026#34;\u0026gt; \u0026lt;a class=\u0026#34;nav-link active\u0026#34; href=\u0026#34;/groups\u0026#34;\u0026gt;Groups\u0026lt;/a\u0026gt; \u0026lt;/li\u0026gt; Otherwise, if the value of .title is not \u0026ldquo;Home,\u0026rdquo; the \u0026ldquo;active\u0026rdquo; class will not be added to the anchor element, and it will look like this: \u0026lt;li class=\u0026#34;nav-item\u0026#34;\u0026gt; \u0026lt;a class=\u0026#34;nav-link\u0026#34; href=\u0026#34;/groups\u0026#34;\u0026gt;Groups\u0026lt;/a\u0026gt; \u0026lt;/li\u0026gt; This technique is commonly used to highlight the currently active page in a navigation menu by applying specific styling (in this case, the \u0026ldquo;active\u0026rdquo; class) to the corresponding menu item. 
","date":"07-01-2022","objectID":"/posts/development/go-html-template-active-nav-link-bootstrap/:0:0","tags":null,"title":"Go HTML Template Active nav-link Bootstrap","uri":"/posts/development/go-html-template-active-nav-link-bootstrap/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re looking to determine whether a Windows machine is locked or not via SSH, you can use PowerShell commands to achieve this. The Get-Process command with the logonui argument can be used to check the status of the \u0026ldquo;LogonUI\u0026rdquo; process, which is responsible for the Windows lock and sign-in screen. If the \u0026ldquo;LogonUI\u0026rdquo; process is running, it typically means that the lock screen or sign-in screen is being displayed, i.e. the machine is locked. If the process is not running, a user session is typically active and the machine is not locked. Here\u0026rsquo;s how you can use PowerShell to achieve this: $logonUIProcess = Get-Process -Name \u0026#34;logonui\u0026#34; -ErrorAction SilentlyContinue if ($logonUIProcess) { Write-Host \u0026#34;Machine is locked or at the sign-in screen.\u0026#34; } else { Write-Host \u0026#34;Machine is not locked. A user session is active.\u0026#34; } You can execute this PowerShell script over SSH on a Windows machine to check the status of the \u0026ldquo;LogonUI\u0026rdquo; process and determine whether the machine is locked or not. Please note that this approach assumes that you have PowerShell available and you\u0026rsquo;re executing the script on the Windows machine you want to check. Additionally, this method only checks the status of the \u0026ldquo;LogonUI\u0026rdquo; process and might not account for all possible scenarios. It\u0026rsquo;s always a good idea to test the script in your specific environment to ensure it provides accurate results. 
","date":"24-12-2021","objectID":"/posts/development/check-whenever-locked-or-not-via-ssh-on-windows/:0:0","tags":null,"title":"Check Whenever locked or not via SSH on Windows","uri":"/posts/development/check-whenever-locked-or-not-via-ssh-on-windows/#"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the process of remotely locking a Windows 11 computer from your Mac using sleepwatcher. This will require SSH access to your Windows machine configured with a public key instead of a password. Follow these steps to set up remote locking: ","date":"24-12-2021","objectID":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/:0:0","tags":null,"title":"Lock Windows 11 Remotely Using sleepwatcher Mac","uri":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites SSH Configuration: Ensure that you have SSH access to your Windows 11 computer from your Mac, and it is set up to use a public key for authentication instead of a password. If you haven\u0026rsquo;t set up SSH with public key authentication, you can find guides online for this process. Homebrew: Make sure you have Homebrew installed on your Mac. If you don\u0026rsquo;t have it, you can install it by following the instructions on the Homebrew website. 
","date":"24-12-2021","objectID":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/:1:0","tags":null,"title":"Lock Windows 11 Remotely Using sleepwatcher Mac","uri":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Setup Lock Task Create a Task in Task Scheduler: On your Windows 11 machine, open Task Scheduler and create a new task with the following settings: Action: Start a program Program/script: rundll32.exe Add arguments (optional): user32.dll,LockWorkStation Save the task with a name, e.g., \u0026ldquo;Lock.\u0026rdquo; ","date":"24-12-2021","objectID":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/:2:0","tags":null,"title":"Lock Windows 11 Remotely Using sleepwatcher Mac","uri":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/#setup-lock-task"},{"categories":["Development"],"collections":null,"content":"Install sleepwatcher Install sleepwatcher with Homebrew: Open your Terminal on your Mac and run the following command to install sleepwatcher: brew install sleepwatcher ","date":"24-12-2021","objectID":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/:3:0","tags":null,"title":"Lock Windows 11 Remotely Using sleepwatcher Mac","uri":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/#install-sleepwatcher"},{"categories":["Development"],"collections":null,"content":"Configure sleepwatcher Create a Configuration File for sleepwatcher: Create a new file named .sleep in your home directory (usually /Users/yourusername/) using a text editor like Vim. Replace yourusername with your actual username. 
Use the following command: vim ~/.sleep Add the following content to the .sleep file: #!/bin/bash ssh yourusername@your-windows-pc schtasks /RUN /TN \u0026#34;Lock\u0026#34; Make sure to replace yourusername with your Mac\u0026rsquo;s username and your-windows-pc with the hostname or IP address of your Windows 11 PC. Modify sleepwatcher Configuration: Edit the sleepwatcher configuration file using Vim. Run the following command: vim /usr/local/Cellar/sleepwatcher/2.2.1/homebrew.mxcl.sleepwatcher.plist Find the line that says \u0026lt;string\u0026gt;-s\u0026lt;/string\u0026gt; and replace it with \u0026lt;string\u0026gt;-D\u0026lt;/string\u0026gt;. This change ensures that sleepwatcher runs in the background. ","date":"24-12-2021","objectID":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/:4:0","tags":null,"title":"Lock Windows 11 Remotely Using sleepwatcher Mac","uri":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/#configure-sleepwatcher"},{"categories":["Development"],"collections":null,"content":"Start sleepwatcher Service Start the sleepwatcher Service: Start the sleepwatcher service using the following command: brew services start sleepwatcher This will initiate the sleepwatcher service and make it run automatically in the background. ","date":"24-12-2021","objectID":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/:5:0","tags":null,"title":"Lock Windows 11 Remotely Using sleepwatcher Mac","uri":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/#start-sleepwatcher-service"},{"categories":["Development"],"collections":null,"content":"Lock Windows 11 Remotely Now, whenever your Mac goes to sleep, sleepwatcher will execute the .sleep script, which in turn will SSH into your Windows 11 machine and run the Task Scheduler task, locking your Windows 11 session remotely. 
With these steps, you can remotely lock your Windows 11 computer from your Mac using sleepwatcher and SSH with public key authentication. ","date":"24-12-2021","objectID":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/:6:0","tags":null,"title":"Lock Windows 11 Remotely Using sleepwatcher Mac","uri":"/posts/development/lock-windows-11-remotely-using-sleepwatcher-mac/#lock-windows-11-remotely"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the steps to set up an SSH server on a Windows system and configure it to use public key authentication instead of a password. This enhances security by eliminating the need for password-based access and relying on cryptographic keys for authentication. ","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:0:0","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you start, ensure you have the following: A Windows machine with the SSH server (sshd) installed. An SSH client from which you will connect to the Windows server. 
","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:1:0","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Steps to Set Up SSH Server With Public Key Authentication ","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:2:0","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#steps-to-set-up-ssh-server-with-public-key-authentication"},{"categories":["Development"],"collections":null,"content":"1. Generate SSH Key Pair on the Client If you haven\u0026rsquo;t already, generate an SSH key pair on your client machine. You can use the ssh-keygen command to do this. Replace [your_email@example.com] with your email address. ssh-keygen -t rsa -b 4096 -C \u0026#34;your_email@example.com\u0026#34; This will generate a public key (id_rsa.pub) and a private key (id_rsa) in your ~/.ssh directory. ","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:2:1","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#1-generate-ssh-key-pair-on-the-client"},{"categories":["Development"],"collections":null,"content":"2. Copy the Public Key to the Server Now, you need to copy the public key from your client machine to the Windows SSH server. You can use ssh-copy-id if available, or manually add the public key to the authorized_keys file on the server. 
# Using ssh-copy-id (if available) ssh-copy-id username@your_server_ip Alternatively, manually add the public key to ~/.ssh/authorized_keys on the Windows server. You can use any text editor to do this, such as Notepad. # On the server C:\\Users\\\u0026lt;username\u0026gt;\\.ssh\\authorized_keys Paste your public key (id_rsa.pub) content into the authorized_keys file. ","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:2:2","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#2-copy-the-public-key-to-the-server"},{"categories":["Development"],"collections":null,"content":"3. Configure SSH Server Edit the SSH server configuration file located at C:\\ProgramData\\ssh\\sshd_config. You can use a text editor like Notepad to make the changes. # Open C:\\ProgramData\\ssh\\sshd_config # Comment out the following lines if they exist #Match Group administrators # AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys # Ensure the following lines are configured as follows PubkeyAuthentication yes PasswordAuthentication no Make sure you uncomment (remove the # at the beginning of) the PubkeyAuthentication and PasswordAuthentication lines as shown above. This enforces public key authentication and disables password-based authentication. ","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:2:3","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#3-configure-ssh-server"},{"categories":["Development"],"collections":null,"content":"4. Restart the SSH Service To apply the changes, restart the SSH service. 
You can do this via PowerShell: Restart-Service sshd ","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:2:4","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#4-restart-the-ssh-service"},{"categories":["Development"],"collections":null,"content":"Conclusion You have successfully set up your Windows SSH server to use public key authentication instead of passwords, which improves the security of your SSH access. Now, you can log in to your Windows server using your private key from the SSH client. Remember to keep your private key secure, and never share it with anyone. ","date":"24-12-2021","objectID":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/:3:0","tags":null,"title":"Setting Up Windows SSH Server With Public Key Authentication","uri":"/posts/development/setting-up-windows-ssh-server-with-public-key-authentication/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re experiencing issues with the backtick key (`) not working when using Barrier to connect a Mac Server to a Windows 11 Client, you\u0026rsquo;re not alone. This can be a frustrating problem, but fortunately, there is a solution. Problem Description: When using Barrier, a software KVM (keyboard, video, mouse) switch, to control your Windows 11 Client from a Mac Server, the backtick key (`) might not function as expected. This issue can disrupt your workflow, especially if you rely on this key for tasks like entering command-line commands or writing code. 
Solution: Using PowerToys to Remap the Backtick Key To resolve this issue, you can use PowerToys, a powerful utility for Windows that includes a feature called \u0026ldquo;Keyboard Manager.\u0026rdquo; Here\u0026rsquo;s how to remap the backtick key to make it work correctly: Install PowerToys: If you haven\u0026rsquo;t already installed PowerToys, you can download it from the official GitHub repository (https://github.com/microsoft/PowerToys/releases) and follow the installation instructions. Open PowerToys Settings: Once PowerToys is installed, open the application. Navigate to Keyboard Manager: In the PowerToys Settings window, find and click on \u0026ldquo;Keyboard Manager\u0026rdquo; in the left sidebar. Enable Keyboard Manager: Toggle the switch to enable Keyboard Manager. Remap the Backtick Key: Under the \u0026ldquo;Key Remapping\u0026rdquo; section, click on the \u0026ldquo;Enable\u0026rdquo; button. Click on the \u0026ldquo;+ Add a new key remap\u0026rdquo; button. In the \u0026ldquo;Physical Key\u0026rdquo; column, select the key that represents the backtick key on your keyboard (usually the one below the escape key). In the \u0026ldquo;Mapped To\u0026rdquo; column, choose the key you want to remap it to. You can select a different key or even another key combination, depending on your preference. Click \u0026ldquo;OK\u0026rdquo; to save the remapping. Apply Changes: After setting up the remap, make sure to click the \u0026ldquo;OK\u0026rdquo; or \u0026ldquo;Apply\u0026rdquo; button in the PowerToys Settings window to save your changes. Test the Key: Test whether the backtick key now functions correctly in your Windows 11 Client while using Barrier to control it from your Mac Server. With PowerToys and the Keyboard Manager feature, you can easily remap keys and customize your keyboard layout to suit your needs, fixing issues like the backtick key not working as expected in your specific setup. 
This solution should resolve the backtick key issue and allow you to use Barrier seamlessly between your Mac Server and Windows 11 Client. ","date":"23-12-2021","objectID":"/posts/development/fixing-the-backtick-key-issue-with-barrier-from-mac-server-to-windows-11-client/:0:0","tags":null,"title":"Fixing The Backtick Key Issue With Barrier From Mac Server To Windows 11 Client","uri":"/posts/development/fixing-the-backtick-key-issue-with-barrier-from-mac-server-to-windows-11-client/#"},{"categories":["Development"],"collections":null,"content":"To add \u0026ldquo;- \u0026quot; after tabs on Visual Studio Code (VSCode) using regex, you can follow these steps: Open VSCode. Press Ctrl + H (or Cmd + H on macOS) to open the Find and Replace panel. In the Find input box, input the following regular expression to match tabs at the beginning of each line: (^\\t*) Make sure to enable the \u0026ldquo;Use Regular Expression\u0026rdquo; option by clicking the .* icon in the Find panel. In the Replace input box, input the following: $0- This will insert \u0026ldquo;- \u0026quot; after each tab matched by the regular expression. Click \u0026ldquo;Replace All\u0026rdquo; in the Find and Replace panel. The result will be as follows: - Security - Cashout - Knowledge (Pengetahuan) - Education - Post 5 Articles from Education - Experience - Post 5 Knowledge Articles. This will add \u0026ldquo;- \u0026quot; after tabs at the beginning of each line, as shown in the example above. ","date":"19-12-2021","objectID":"/posts/development/add-after-tabs-on-vscode-using-regex-/:0:0","tags":null,"title":"Add -- - after tabs on VSCODE using Regex-","uri":"/posts/development/add-after-tabs-on-vscode-using-regex-/#"},{"categories":["Development"],"collections":null,"content":"To check if unattended upgrades will run today, you can use the systemctl list-timers command with the apt-daily.timer unit. This will provide information about the timer and when it is scheduled to run. 
Here\u0026rsquo;s how you can do it: systemctl list-timers apt-daily.timer Running this command will display output similar to the following: NEXT LEFT LAST PASSED UNIT ACTIVATES Wed 2023-09-01 06:25:00 UTC 12h left Tue 2023-08-31 06:25:00 UTC 11h ago apt-daily.timer apt-daily.service In this example output, you can see information about the next scheduled run of the apt-daily.timer unit, including the time remaining until it runs again. You can check the \u0026ldquo;NEXT\u0026rdquo; column to see when the timer is set to trigger next. Please note that the output may vary based on your system\u0026rsquo;s configuration and the current date and time. ","date":"10-12-2021","objectID":"/posts/development/check-apt-unattended-upgrades-will-run-today/:0:0","tags":null,"title":"Check apt unattended upgrades will run today","uri":"/posts/development/check-apt-unattended-upgrades-will-run-today/#"},{"categories":["Development"],"collections":null,"content":"To enable password login on a local PostgreSQL database, you need to modify the pg_hba.conf file and then restart the PostgreSQL service. Here are the steps: ## Enable Password Login on Local PostgreSQL 1. Open the `pg_hba.conf` file for editing using a text editor. You can use `vim` or any other text editor of your choice. Replace `/usr/local/var/postgres/pg_hba.conf` with the correct path to your `pg_hba.conf` file if it\u0026#39;s located elsewhere. ```shell vim /usr/local/var/postgres/pg_hba.conf In the pg_hba.conf file, locate the line that corresponds to the local connection method. 
It typically looks like this: # \u0026#34;local\u0026#34; is for Unix domain socket connections only local all all trust Change the authentication method from \u0026ldquo;trust\u0026rdquo; to \u0026ldquo;md5.\u0026rdquo; Your modified line should look like this: # \u0026#34;local\u0026#34; is for Unix domain socket connections only local all all md5 This change ensures that local connections require a password. Save the pg_hba.conf file and exit the text editor. After modifying the configuration file, you need to restart the PostgreSQL service for the changes to take effect. You can use brew services to do this: brew services restart postgres Now, local connections to your PostgreSQL database will require a password for authentication. Make sure to replace `/usr/local/var/postgres/pg_hba.conf` with the actual path to your `pg_hba.conf` file, which may vary depending on your PostgreSQL installation and operating system.","date":"26-11-2021","objectID":"/posts/development/postgres-enable-password-login-on-local/:0:0","tags":null,"title":"Postgres Enable Password Login on Local","uri":"/posts/development/postgres-enable-password-login-on-local/#"},{"categories":["Development"],"collections":null,"content":"Basic Feature: Create Employee Scenario: WITH ALL REQUIRED FIELDS IS SUCCESSFUL Given user wants to create an employee with the following attributes | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | 100 | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | And with the following phone numbers | id | type | isdCode | phoneNumber | extension | | 102 | Mobile | +1 | 2141112222 | | | 103 | Office | +1 | 8362223000 | 333 | When user saves the new employee \u0026#39;WITH ALL REQUIRED FIELDS\u0026#39; Then the save \u0026#39;IS SUCCESSFUL\u0026#39; ","date":"24-11-2021","objectID":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/:1:0","tags":["project management"],"title":"User Story BDD Cucumber 
CRUD Operations Example","uri":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/#basic"},{"categories":["Development"],"collections":null,"content":"Create Employee Feature: Create Employee Scenario Outline: \u0026lt;testCase\u0026gt; \u0026lt;expectedResult\u0026gt; Given user wants to create an employee with the following attributes | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | 110 | \u0026lt;firstName\u0026gt; | \u0026lt;lastName\u0026gt; | \u0026lt;dateOfBirth\u0026gt; | \u0026lt;startDate\u0026gt; | \u0026lt;employmentType\u0026gt; | \u0026lt;email\u0026gt; | And with the following phone numbers | id | type | isdCode | phoneNumber | extension | | 111 | Mobile | +1 | 2141112222 | | | 112 | Office | +1 | 8362223000 | 333 | When user saves the new employee \u0026#39;\u0026lt;testCase\u0026gt;\u0026#39; Then the save \u0026#39;\u0026lt;expectedResult\u0026gt;\u0026#39; Examples: | testCase | expectedResult | firstName | lastName | dateOfBirth | startDate | employmentType | email | | WITHOUT FIRST NAME | FAILS | | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | | WITHOUT LAST NAME | FAILS | Rachel | | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | | WITHOUT DATE OF BIRTH | FAILS | Rachel | Green | | 2018-01-01 | Permanent | rachel.green@fs.com | | WITHOUT START DATE | FAILS | Rachel | Green | 1990-01-01 | | Permanent | rachel.green@fs.com | | WITHOUT EMPLOYMENT TYPE | FAILS | Rachel | Green | 1990-01-01 | 2018-01-01 | | rachel.green@fs.com | | WITHOUT EMAIL | FAILS | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | | | WITH ALL REQUIRED FIELDS | IS SUCCESSFUL | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | ","date":"24-11-2021","objectID":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/:2:0","tags":["project management"],"title":"User Story BDD Cucumber CRUD Operations 
Example","uri":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/#create-employee"},{"categories":["Development"],"collections":null,"content":"Get Employee Feature: Get Employee Background: Given an employee with the following attributes | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | 200 | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | And with the following phone numbers | id | type | isdCode | phoneNumber | extension | | 201 | Mobile | +1 | 2141112222 | | | 202 | Office | +1 | 8362223000 | 333 | When employee already exists Scenario: GET BY ID When user wants to get employee by id 200 Then the get \u0026#39;IS SUCCESSFUL\u0026#39; And following employee is returned | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | 200 | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | And following employee phone numbers are returned | id | type | isdCode | phoneNumber | extension | | 201 | Mobile | +1 | 2141112222 | | | 202 | Office | +1 | 8362223000 | 333 | ","date":"24-11-2021","objectID":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/:3:0","tags":["project management"],"title":"User Story BDD Cucumber CRUD Operations Example","uri":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/#get-employee"},{"categories":["Development"],"collections":null,"content":"Update Employee Feature: Update Employee Background: Given an employee with the following attributes | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | 300 | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | And with the following phone numbers | id | type | isdCode | phoneNumber | extension | | 301 | Mobile | +1 | 2141112222 | | | 302 | Office | +1 | 8362223000 | 333 | When employee already exists Scenario Outline: \u0026lt;testCase\u0026gt; \u0026lt;expectedResult\u0026gt; Given 
user wants to update an employee with the following attributes | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | \u0026lt;id\u0026gt; | \u0026lt;firstName\u0026gt; | \u0026lt;lastName\u0026gt; | \u0026lt;dateOfBirth\u0026gt; | \u0026lt;startDate\u0026gt; | \u0026lt;employmentType\u0026gt; | \u0026lt;email\u0026gt; | And with the following phone numbers | id | type | isdCode | phoneNumber | extension | | 301 | Mobile | +1 | 2141112222 | | | 302 | Office | +1 | 8362223000 | 333 | When user saves the employee \u0026#39;\u0026lt;testCase\u0026gt;\u0026#39; Then the save \u0026#39;\u0026lt;expectedResult\u0026gt;\u0026#39; Examples: | testCase | expectedResult | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | WITHOUT ID | FAILS | | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | | WITHOUT FIRST NAME | FAILS | 300 | | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | | WITHOUT LAST NAME | FAILS | 300 | Rachel | | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | | WITHOUT DATE OF BIRTH | FAILS | 300 | Rachel | Green | | 2018-01-01 | Permanent | rachel.green@fs.com | | WITHOUT START DATE | FAILS | 300 | Rachel | Green | 1990-01-01 | | Permanent | rachel.green@fs.com | | WITHOUT EMPLOYMENT TYPE | FAILS | 300 | Rachel | Green | 1990-01-01 | 2018-01-01 | | rachel.green@fs.com | | WITHOUT EMAIL | FAILS | 300 | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | | | WITH ALL REQUIRED FIELDS | IS SUCCESSFUL | 300 | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | ","date":"24-11-2021","objectID":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/:4:0","tags":["project management"],"title":"User Story BDD Cucumber CRUD Operations Example","uri":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/#update-employee"},{"categories":["Development"],"collections":null,"content":"Delete Employee 
Feature: Delete Employee Background: Given an employee with the following attributes | id | firstName | lastName | dateOfBirth | startDate | employmentType | email | | 400 | Rachel | Green | 1990-01-01 | 2018-01-01 | Permanent | rachel.green@fs.com | And with the following phone numbers | id | type | isdCode | phoneNumber | extension | | 401 | Mobile | +1 | 2141112222 | | | 402 | Office | +1 | 8362223000 | 333 | When employee already exists Scenario: DELETE BY ID When user wants to delete employee by id 400 Then the delete \u0026#39;IS SUCCESSFUL\u0026#39; ","date":"24-11-2021","objectID":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/:5:0","tags":["project management"],"title":"User Story BDD Cucumber CRUD Operations Example","uri":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/#delete-employee"},{"categories":["Development"],"collections":null,"content":"References https://medium.com/@bcarunmail/using-cucumber-datatable-for-crud-operations-7b00f7cac23f ","date":"24-11-2021","objectID":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/:6:0","tags":["project management"],"title":"User Story BDD Cucumber CRUD Operations Example","uri":"/posts/development/user-story-bdd-cucumber-crud-operations-example-/#references"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re experiencing issues with mouse scrolling in your Tmux sessions on iTerm2, you can follow these steps to troubleshoot and resolve the problem: Check iTerm2 Setting: Make sure that the \u0026ldquo;Scroll wheels send arrow keys when in alternate screen mode\u0026rdquo; option is enabled in iTerm2. You can find this setting in iTerm2\u0026rsquo;s preferences. 
Go to iTerm2 \u0026gt; Preferences \u0026gt; Advanced, and ensure that the \u0026ldquo;Scroll wheels send arrow keys\u0026hellip;\u0026rdquo; option is set to \u0026ldquo;Yes.\u0026rdquo; Configure Tmux Plugins: To enhance mouse support in Tmux, you can use the tmux-plugins/tpm and nhdaly/tmux-better-mouse-mode plugins. These plugins enable better mouse interaction within Tmux sessions. Add the following lines to your ~/.tmux.conf file: set -g @plugin \u0026#39;tmux-plugins/tpm\u0026#39; set -g @plugin \u0026#39;nhdaly/tmux-better-mouse-mode\u0026#39; set -g @emulate-scroll-for-no-mouse-alternate-buffer \u0026#34;on\u0026#34; This configuration sets up the necessary plugins and ensures that mouse scrolling works even when the alternate buffer is active. Install Tmux Plugin Manager (TPM): If you haven\u0026rsquo;t already, install the Tmux Plugin Manager (TPM) by running the following command in your terminal: git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm Reload Tmux Configuration: After adding the plugin configuration to your ~/.tmux.conf, you need to reload your Tmux configuration. You can do this from within a Tmux session by pressing Prefix (usually Ctrl-b by default) and then typing: :source-file ~/.tmux.conf Install Plugins: Inside your Tmux session, activate TPM by pressing your Tmux prefix key (e.g., Ctrl-b) followed by I (capital \u0026ldquo;i\u0026rdquo;) to install the configured plugins. \u0026lt;Prefix\u0026gt; + I TPM will download and install the plugins you specified in your ~/.tmux.conf file. Restart Tmux: To ensure the changes take effect, you can either restart your Tmux session or create a new one. After following these steps, mouse scrolling should work as expected in your Tmux sessions within iTerm2. If you encounter any issues, double-check your configuration and make sure you\u0026rsquo;ve correctly installed and activated the required plugins. 
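One detail worth double-checking: the TPM README also requires its bootstrap line at the very bottom of ~/.tmux.conf, which the steps above do not show. Assembled from those steps, the relevant portion of ~/.tmux.conf would look roughly like this sketch:

```
# ~/.tmux.conf — sketch assembled from the steps above
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'nhdaly/tmux-better-mouse-mode'
set -g @emulate-scroll-for-no-mouse-alternate-buffer "on"

# TPM bootstrap — per the TPM README, keep this as the last line of the file
run '~/.tmux/plugins/tpm/tpm'
```

Without the final run line, pressing Prefix + I will have no effect because TPM itself is never loaded.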
","date":"21-11-2021","objectID":"/posts/development/troubleshooting-mouse-scrolling-in-tmux-sessions-on-iterm2/:0:0","tags":null,"title":"Troubleshooting Mouse Scrolling In Tmux Sessions On Iterm2","uri":"/posts/development/troubleshooting-mouse-scrolling-in-tmux-sessions-on-iterm2/#"},{"categories":["Development"],"collections":null,"content":"This guide covers setting up debugging for a Golang application in Visual Studio Code, specifically enabling input reading during debugging. The setup uses the launch.json and task.json files to configure the debugger and tasks for your project. Here\u0026rsquo;s a breakdown of the configuration: ","date":"19-11-2021","objectID":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/:0:0","tags":null,"title":"Enable Read Input When Debugging Golang on Visual Studio Code","uri":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/#"},{"categories":["Development"],"collections":null,"content":"launch.json Configuration This launch configuration attaches the debugger to a remote server. Here are the key attributes: \u0026quot;name\u0026quot;: \u0026quot;Connect to server\u0026quot;: Specifies the name of this configuration. \u0026quot;type\u0026quot;: \u0026quot;go\u0026quot;: Specifies that this is a Go debugging configuration. \u0026quot;request\u0026quot;: \u0026quot;attach\u0026quot;: Indicates that you want to attach the debugger to a running process. \u0026quot;preLaunchTask\u0026quot;: \u0026quot;delve\u0026quot;: Specifies the task that should be executed before launching the debugger. \u0026quot;mode\u0026quot;: \u0026quot;remote\u0026quot;: Specifies that you\u0026rsquo;re debugging a remote process. 
\u0026quot;remotePath\u0026quot;: \u0026quot;${workspaceFolder}\u0026quot;: Specifies the remote path to your workspace folder. \u0026quot;port\u0026quot;: 23456: Specifies the port on which the debugger will listen. \u0026quot;host\u0026quot;: \u0026quot;127.0.0.1\u0026quot;: Specifies the host (localhost) for the remote debugging. \u0026quot;cwd\u0026quot;: \u0026quot;${workspaceFolder}\u0026quot;: Specifies the current working directory for the debugger. ","date":"19-11-2021","objectID":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/:1:0","tags":null,"title":"Enable Read Input When Debugging Golang on Visual Studio Code","uri":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/#launchjson-configuration"},{"categories":["Development"],"collections":null,"content":"task.json Configuration In this configuration, you\u0026rsquo;re defining a task that uses the Delve debugger to launch your Go application in headless mode. Here are the key attributes: \u0026quot;label\u0026quot;: \u0026quot;delve\u0026quot;: Specifies the label for this task. \u0026quot;type\u0026quot;: \u0026quot;shell\u0026quot;: Specifies that this task runs a shell command. \u0026quot;command\u0026quot;: \u0026quot;dlv debug --headless --listen=:23456 --api-version=2 \\\u0026quot;${workspaceFolder}\\\u0026quot;\u0026quot;: Specifies the Delve command to run your application in headless mode and listen on the specified port. \u0026quot;isBackground\u0026quot;: true: Indicates that this task runs in the background. \u0026quot;presentation\u0026quot;: Specifies the presentation options for the task. \u0026quot;problemMatcher\u0026quot;: Specifies how Visual Studio Code should match problems in the output. 
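Assembled into complete files, the attributes described above might look like the sketch below. Two caveats: VS Code's actual task file is named tasks.json (the post writes task.json), and the presentation and problemMatcher options mentioned above are omitted here for brevity; the port 23456 and the delve label come from the description:

```json
// .vscode/launch.json — sketch assembled from the attributes above
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Connect to server",
      "type": "go",
      "request": "attach",
      "preLaunchTask": "delve",
      "mode": "remote",
      "remotePath": "${workspaceFolder}",
      "port": 23456,
      "host": "127.0.0.1",
      "cwd": "${workspaceFolder}"
    }
  ]
}
```

```json
// .vscode/tasks.json — the "delve" task referenced by preLaunchTask
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "delve",
      "type": "shell",
      "command": "dlv debug --headless --listen=:23456 --api-version=2 \"${workspaceFolder}\"",
      "isBackground": true
    }
  ]
}
```

The port in the task's --listen flag must match the port in launch.json, or the attach request will fail.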
","date":"19-11-2021","objectID":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/:2:0","tags":null,"title":"Enable Read Input When Debugging Golang on Visual Studio Code","uri":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/#taskjson-configuration"},{"categories":["Development"],"collections":null,"content":"Enabling Input Reading With the configuration above, input reading during debugging is not explicitly enabled. However, since Delve is a powerful debugger for Go that supports interactive debugging, you might be able to achieve input reading during debugging sessions by interacting directly with your running program in the integrated terminal of Visual Studio Code. For instance, you could open the integrated terminal and use it to send input to your running Go program during a debugging session, interacting with the terminal as you would when running your program without debugging. The specific way of interacting with your program during debugging depends on the nature of your application and how it reads input. Remember that these configurations are for remote debugging, so make sure that your remote application is reachable and properly set up for debugging. 
","date":"19-11-2021","objectID":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/:3:0","tags":null,"title":"Enable Read Input When Debugging Golang on Visual Studio Code","uri":"/posts/development/enable-read-input-when-debugging-golang-on-visual-studio-code/#enabling-input-reading"},{"categories":["Development"],"collections":null,"content":"If you are encountering issues with Barrier when SSL is enabled and you are getting an error related to the SSL certificate not being found, you can follow these steps to generate and configure the SSL certificate. ","date":"16-11-2021","objectID":"/posts/development/troubleshooting-barrier-ssl-certificate-not-found/:0:0","tags":null,"title":"Troubleshooting Barrier SSL Certificate Not Found","uri":"/posts/development/troubleshooting-barrier-ssl-certificate-not-found/#"},{"categories":["Development"],"collections":null,"content":"Generating the SSL Certificate You can generate a self-signed SSL certificate for Barrier using OpenSSL. Here are the steps: Open your terminal. Navigate to the Barrier configuration directory: cd ~/Library/Application\\ Support/barrier/SSL Generate the SSL certificate and key using OpenSSL: openssl req -x509 -nodes -days 365 -subj /CN=Barrier -newkey rsa:4096 -keyout Barrier.pem -out Barrier.pem This command will create a self-signed SSL certificate named Barrier.pem in the specified directory. ","date":"16-11-2021","objectID":"/posts/development/troubleshooting-barrier-ssl-certificate-not-found/:1:0","tags":null,"title":"Troubleshooting Barrier SSL Certificate Not Found","uri":"/posts/development/troubleshooting-barrier-ssl-certificate-not-found/#generating-the-ssl-certificate"},{"categories":["Development"],"collections":null,"content":"Configuring Barrier to Use the SSL Certificate Once you have generated the SSL certificate, you need to configure Barrier to use it. Follow these steps: Open Barrier. 
In the Barrier configuration window, go to the \u0026ldquo;Security\u0026rdquo; tab. Check the box that says \u0026ldquo;Use SSL encryption.\u0026rdquo; In the \u0026ldquo;SSL Certificate File\u0026rdquo; field, browse to the location where you generated the Barrier.pem file. In this case, it\u0026rsquo;s ~/Library/Application\\ Support/barrier/SSL/Barrier.pem. Save your Barrier configuration. Restart Barrier for the changes to take effect. By following these steps, you should no longer encounter the SSL certificate not found error when using Barrier with SSL enabled. Your self-signed certificate will be used for encryption purposes. Keep in mind that self-signed certificates are not as secure as certificates issued by trusted Certificate Authorities, but they can still provide encryption for your Barrier connection. ","date":"16-11-2021","objectID":"/posts/development/troubleshooting-barrier-ssl-certificate-not-found/:2:0","tags":null,"title":"Troubleshooting Barrier SSL Certificate Not Found","uri":"/posts/development/troubleshooting-barrier-ssl-certificate-not-found/#configuring-barrier-to-use-the-ssl-certificate"},{"categories":["Development"],"collections":null,"content":"To center align a block widget in the footer of a WordPress website, you can use custom CSS code. Here\u0026rsquo;s how you can do it: Access Your WordPress Dashboard: Log in to your WordPress admin dashboard. Navigate to the Customizer: In the dashboard, go to \u0026ldquo;Appearance\u0026rdquo; and then click on \u0026ldquo;Customize.\u0026rdquo; Open Additional CSS: Look for the \u0026ldquo;Additional CSS\u0026rdquo; option in the Customizer menu. This is where you can add your custom CSS code. Add the Custom CSS: Paste the following CSS code into the \u0026ldquo;Additional CSS\u0026rdquo; box: div#footer-widgets { text-align: center; } .wp-block-social-links { display: inline-flex; } Preview and Publish: You should see the changes in the live preview on the right side of the Customizer. 
Make sure the alignment looks as expected. If everything looks good, click the \u0026ldquo;Publish\u0026rdquo; button to save your changes. This CSS code will center-align the block widget with the class .wp-block-social-links within the div#footer-widgets. Make sure to adjust the class and IDs in the code to match your specific widget and container elements if they are different. ","date":"02-11-2021","objectID":"/posts/development/wordpress-center-align-block-widget-on-footer-widget/:0:0","tags":null,"title":"Wordpress Center Align Block Widget on Footer Widget","uri":"/posts/development/wordpress-center-align-block-widget-on-footer-widget/#"},{"categories":["DevOps"],"collections":null,"content":"This guide will walk you through setting up a system service to automatically launch your desired VirtualBox VMs, saving you precious time and effort. ","date":"08-10-2021","objectID":"/posts/devops/automate-your-virtualbox-vms-with-autostart/:0:0","tags":["virtualbox","windows"],"title":"Automate Your VirtualBox VMs with Autostart","uri":"/posts/devops/automate-your-virtualbox-vms-with-autostart/#"},{"categories":["DevOps"],"collections":null,"content":"Configure Autostart Properties Create a file named autostart.properties in the folder C:\\Users\\Admin\\.VirtualBox. Inside the file, define the following: default_policy = deny Admin = { allow = true startup_delay = 10 } default_policy = deny: This sets the default policy to deny all autostarts. Admin = { allow = true; startup_delay = 10 }: This allows the user \u0026ldquo;Admin\u0026rdquo; to start VMs with a 10-second delay. 
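Written out one setting per line, the autostart.properties file described above looks like this (Admin is the example user from this guide; substitute your own Windows account name):

```
default_policy = deny
Admin = {
    allow = true
    startup_delay = 10
}
```

The per-user block overrides the deny default, so only the listed account can autostart VMs, with a 10-second delay before each start.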
","date":"08-10-2021","objectID":"/posts/devops/automate-your-virtualbox-vms-with-autostart/:0:1","tags":["virtualbox","windows"],"title":"Automate Your VirtualBox VMs with Autostart","uri":"/posts/devops/automate-your-virtualbox-vms-with-autostart/#configure-autostart-properties"},{"categories":["DevOps"],"collections":null,"content":"Set System Variable Create a new system variable named VBOXAUTOSTART_CONFIG and set its value to C:\\Users\\Admin\\.VirtualBox\\autostart.properties. ","date":"08-10-2021","objectID":"/posts/devops/automate-your-virtualbox-vms-with-autostart/:0:2","tags":["virtualbox","windows"],"title":"Automate Your VirtualBox VMs with Autostart","uri":"/posts/devops/automate-your-virtualbox-vms-with-autostart/#set-system-variable"},{"categories":["DevOps"],"collections":null,"content":"Install the VirtualBox Autostart Service Navigate to the VirtualBox installation directory: cd \u0026quot;C:\\Program Files\\Oracle\\VirtualBox\u0026quot; Run the following commands: .\\VBoxAutostartSvc.exe install --user=Admin .\\VBoxManage.exe modifyvm \u0026#34;virtual-pc\u0026#34; --autostart-enabled on Replace \u0026quot;virtual-pc\u0026quot; with the actual name of the VM you want to autostart. ","date":"08-10-2021","objectID":"/posts/devops/automate-your-virtualbox-vms-with-autostart/:0:3","tags":["virtualbox","windows"],"title":"Automate Your VirtualBox VMs with Autostart","uri":"/posts/devops/automate-your-virtualbox-vms-with-autostart/#install-the-virtualbox-autostart-service"},{"categories":["DevOps"],"collections":null,"content":"Check and Start the Service Use the sc command to check the service status: sc query VBoxAutostartSvc Then start the service: sc start VBoxAutostartSvc If sc reports that the service does not exist, check the exact service name in the Windows Services list. Congratulations! Your VirtualBox VM is now set to autostart when your system boots. 
","date":"08-10-2021","objectID":"/posts/devops/automate-your-virtualbox-vms-with-autostart/:0:4","tags":["virtualbox","windows"],"title":"Automate Your VirtualBox VMs with Autostart","uri":"/posts/devops/automate-your-virtualbox-vms-with-autostart/#check-and-start-the-service"},{"categories":["Development"],"collections":null,"content":"This guide covers fixing two common Nextcloud issues: an SSL issue and an SVG error related to ImageMagick. Let\u0026rsquo;s break the fixes down into steps: ","date":"17-09-2021","objectID":"/posts/development/fixing-ssl-issue-in-nextcloud/:0:0","tags":null,"title":"Fixing SSL Issue in Nextcloud","uri":"/posts/development/fixing-ssl-issue-in-nextcloud/#"},{"categories":["Development"],"collections":null,"content":"Fixing SSL Issue in Nextcloud If you\u0026rsquo;re encountering SSL-related issues in Nextcloud, you can try the following steps: Disable and Re-enable the Files External App: Log in to your Nextcloud server as an administrator. Go to the Nextcloud admin panel or settings. Find the \u0026ldquo;Apps\u0026rdquo; section and search for the \u0026ldquo;Files External\u0026rdquo; app. Disable this app, wait for a moment, and then re-enable it. Check if the SSL issue is resolved. 
","date":"17-09-2021","objectID":"/posts/development/fixing-ssl-issue-in-nextcloud/:1:0","tags":null,"title":"Fixing SSL Issue in Nextcloud","uri":"/posts/development/fixing-ssl-issue-in-nextcloud/#fixing-ssl-issue-in-nextcloud"},{"categories":["Development"],"collections":null,"content":"Fixing ImageMagick SVG Error in Nextcloud (using Docker) If you\u0026rsquo;re experiencing SVG-related errors in Nextcloud within a Docker environment, you can try the following steps: Update and Install libmagickcore: Access your Nextcloud Docker container by running the following command: docker-compose exec app /bin/bash Update the package list and install libmagickcore-6.q16-6-extra: apt -y update apt -y install libmagickcore-6.q16-6-extra Exit the container\u0026rsquo;s shell: exit Restart Your Docker Compose Stack: After installing the required package, you should restart your Docker Compose stack to apply the changes. You can do this by running: docker-compose restart These steps should help you address the SSL and SVG error issues you are experiencing in Nextcloud. Make sure to replace docker-compose with the appropriate command if you\u0026rsquo;re using a different method to manage your Nextcloud instance. Please note that these instructions assume you have administrative access to your Nextcloud server and are comfortable working with Docker containers and the command line. Always back up your data and configurations before making any significant changes to your Nextcloud setup. ","date":"17-09-2021","objectID":"/posts/development/fixing-ssl-issue-in-nextcloud/:2:0","tags":null,"title":"Fixing SSL Issue in Nextcloud","uri":"/posts/development/fixing-ssl-issue-in-nextcloud/#fixing-imagicx-svg-error-in-nextcloud-using-docker"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the process of creating scripts to back up and restore a WordPress website running in Docker Compose. 
These scripts will help you safeguard your WordPress data and quickly restore it if needed. ","date":"16-09-2021","objectID":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/:0:0","tags":null,"title":"How to Backup and Restore WordPress with Docker Compose","uri":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/#"},{"categories":["Development"],"collections":null,"content":"Backup WordPress Volumes ","date":"16-09-2021","objectID":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/:1:0","tags":null,"title":"How to Backup and Restore WordPress with Docker Compose","uri":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/#backup-wordpress-volumes"},{"categories":["Development"],"collections":null,"content":"Step 1: Create a Backup Script First, you need to create a backup script. This script will use the futurice/docker-volume-backup Docker image to back up the volumes associated with your WordPress and MariaDB containers. Add the following code to your docker-compose.yml file under the services section: backup: image: futurice/docker-volume-backup environment: BACKUP_CRON_EXPRESSION: \u0026#34;#\u0026#34; # Set your desired backup schedule volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - wordpress:/backup/wordpress:ro - mariadb:/backup/mariadb:ro - ./backups:/archive Replace # in BACKUP_CRON_EXPRESSION with your desired backup schedule in cron format (e.g., \u0026quot;0 2 * * *\u0026quot; for daily backups at 2:00 AM). 
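Before relying on the schedule, it can help to rehearse the archive layout the backup container produces. The sketch below builds a throwaway archive with the same backup/wordpress prefix and lists it; the demo directory and file names are made up for illustration only.

```shell
# Build a tiny stand-in for a backup archive and verify it is readable.
# The demo/ directory and file names are illustrative only.
mkdir -p demo/backup/wordpress
touch demo/backup/wordpress/index.php
tar -czf backup-test.tar.gz -C demo backup   # archive the backup/ tree
tar -tzf backup-test.tar.gz                  # should list backup/wordpress/index.php
```

The -C demo flag archives paths relative to demo/, which is why the restore commands later strip the leading path components with tar --strip 2.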
","date":"16-09-2021","objectID":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/:1:1","tags":null,"title":"How to Backup and Restore WordPress with Docker Compose","uri":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/#step-1-create-a-backup-script"},{"categories":["Development"],"collections":null,"content":"Step 2: Run the Backup Script After adding the backup service to your docker-compose.yml file, save the file and run the following command in your terminal: docker-compose exec backup ./backup.sh This command will execute the backup script and create backup archives in the ./backups directory with timestamped filenames. ","date":"16-09-2021","objectID":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/:1:2","tags":null,"title":"How to Backup and Restore WordPress with Docker Compose","uri":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/#step-2-run-the-backup-script"},{"categories":["Development"],"collections":null,"content":"Restore WordPress Volumes ","date":"16-09-2021","objectID":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/:2:0","tags":null,"title":"How to Backup and Restore WordPress with Docker Compose","uri":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/#restore-wordpress-volumes"},{"categories":["Development"],"collections":null,"content":"Step 1: Restore the Volumes To restore your WordPress volumes from a backup, follow these steps: Stop and remove your existing containers along with their volumes by running: docker-compose down --volumes Restore the WordPress and MariaDB volumes from your backup archive. 
Assuming you have a backup file named backup.tar.gz in the ./backups directory: docker-compose run --no-deps -v ./backups:/backups web bash -c \u0026#34;cd /var/www/html \u0026amp;\u0026amp; tar --strip 2 -zvxf /backups/backup.tar.gz backup/wordpress\u0026#34; docker-compose run --no-deps -v ./backups:/backups db bash -c \u0026#34;cd /var/lib/mysql \u0026amp;\u0026amp; tar --strip 2 -zvxf /backups/backup.tar.gz backup/mariadb\u0026#34; Bring up your Docker Compose stack again: docker-compose up -d Your WordPress site should now be restored from the backup. ","date":"16-09-2021","objectID":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/:2:1","tags":null,"title":"How to Backup and Restore WordPress with Docker Compose","uri":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/#step-1-restore-the-volumes"},{"categories":["Development"],"collections":null,"content":"Conclusion By following these steps and using the provided backup and restore scripts, you can easily back up and restore your WordPress website running in Docker Compose. This ensures that your website data is safe and can be quickly recovered in case of any issues. ","date":"16-09-2021","objectID":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/:2:2","tags":null,"title":"How to Backup and Restore WordPress with Docker Compose","uri":"/posts/development/how-to-backup-and-restore-wordpress-with-docker-compose/#conclusion"},{"categories":["Development"],"collections":null,"content":"In Bash, you can trim whitespace characters from a variable using various methods, as shown below. 
Here\u0026rsquo;s a breakdown of the different approaches: Remove Leading and Trailing White Spaces: NEW_VARIABLE=\u0026#34;$(echo -e \u0026#34;${VARIABLE_NAME}\u0026#34; | sed -e \u0026#39;s/^[[:space:]]*//\u0026#39; -e \u0026#39;s/[[:space:]]*$//\u0026#39;)\u0026#34; # NEW_VARIABLE=\u0026#39;aaa bbb\u0026#39; This method uses two sed expressions: the first removes leading whitespace and the second removes trailing whitespace, leaving any whitespace inside VARIABLE_NAME untouched. Remove Only Leading White Spaces: NEW_VARIABLE=\u0026#34;$(echo -e \u0026#34;${VARIABLE_NAME}\u0026#34; | sed -e \u0026#39;s/^[[:space:]]*//\u0026#39;)\u0026#34; # NEW_VARIABLE=\u0026#39;aaa bbb \u0026#39; Here, sed is used to remove only the leading whitespace characters from the variable. Remove Only Trailing White Spaces: NEW_VARIABLE=\u0026#34;$(echo -e \u0026#34;${VARIABLE_NAME}\u0026#34; | sed -e \u0026#39;s/[[:space:]]*$//\u0026#39;)\u0026#34; # NEW_VARIABLE=\u0026#39; aaa bbb\u0026#39; This approach utilizes sed to eliminate trailing whitespace characters from the variable. Remove All White Spaces (Leading, Trailing, and Inside): NEW_VARIABLE=\u0026#34;$(echo -e \u0026#34;${VARIABLE_NAME}\u0026#34; | tr -d \u0026#39;[:space:]\u0026#39;)\u0026#34; # NEW_VARIABLE=\u0026#39;aaabbb\u0026#39; Here, tr -d deletes every whitespace character, including those inside the value. These methods provide flexibility depending on your specific requirements for whitespace removal in Bash. 
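The variants can also be exercised end to end. The sketch below is bash-specific: it feeds the value through here-strings instead of echo so no extra quoting is needed, and prints the length of each result to make the effect of each variant visible.

```shell
#!/bin/bash
# Demonstrate each trimming variant on a sample value and print the lengths.
# Here-strings are not word-split, so the inner spacing survives unquoted.
VAR='  aaa bbb  '

LEAD=$(sed -e 's/^[[:space:]]*//' <<< ${VAR})    # strip leading spaces only
TRAIL=$(sed -e 's/[[:space:]]*$//' <<< ${VAR})   # strip trailing spaces only
BOTH=$(sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' <<< ${VAR})
ALL=$(tr -d '[:space:]' <<< ${VAR})              # strip every space, inner too

echo ${#VAR} ${#LEAD} ${#TRAIL} ${#BOTH} ${#ALL}  # 11 9 9 7 6
```

The lengths make the difference obvious: trimming both ends keeps the inner space (7 characters), while tr removes it as well (6 characters).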
","date":"16-09-2021","objectID":"/posts/development/trim-the-whitespace-characters-from-a-bash-variable/:0:0","tags":null,"title":"Trim the whitespace characters from a Bash variable","uri":"/posts/development/trim-the-whitespace-characters-from-a-bash-variable/#"},{"categories":["Development"],"collections":null,"content":"If you need to change the domain URL of your WordPress site, you can use WP-CLI, a powerful command-line tool for managing WordPress. Changing the site URL is a common task, especially when migrating your site to a new domain. Here\u0026rsquo;s a step-by-step guide on how to do it with WP-CLI. ","date":"15-09-2021","objectID":"/posts/development/how-to-change-wordpress-site-url-with-wp-cli/:0:0","tags":null,"title":"How to Change WordPress Site URL With WP-CLI","uri":"/posts/development/how-to-change-wordpress-site-url-with-wp-cli/#"},{"categories":["Development"],"collections":null,"content":"1. Backup Your WordPress Database Before making any changes, it\u0026rsquo;s crucial to create a backup of your WordPress database. This ensures that you can restore your site if anything goes wrong during the URL change process. To export your database, use the following WP-CLI command: wp db export This command will create a SQL file with your database content that you can later import if needed. To import the database backup, use the following command, replacing mydomain_dbname.sql with the actual name of your exported database file: wp db import mydomain_dbname.sql ","date":"15-09-2021","objectID":"/posts/development/how-to-change-wordpress-site-url-with-wp-cli/:0:1","tags":null,"title":"How to Change WordPress Site URL With WP-CLI","uri":"/posts/development/how-to-change-wordpress-site-url-with-wp-cli/#1-backup-your-wordpress-database"},{"categories":["Development"],"collections":null,"content":"2. 
Rename All Instances of the Old URL to the New URL Now that you\u0026rsquo;ve backed up your database, it\u0026rsquo;s time to change all instances of your old domain URL to the new one. To do this, you\u0026rsquo;ll use the wp search-replace command. Run the following commands to perform a dry run and then the actual URL replacement: Dry Run A dry run allows you to preview the changes without actually modifying the database. It\u0026rsquo;s a good practice to do this first to see what will be replaced: wp search-replace \u0026#39;olddomain.com\u0026#39; \u0026#39;newdomain.com\u0026#39; --dry-run Actual URL Replacement Once you\u0026rsquo;re satisfied with the dry run results, you can proceed with the actual URL replacement: wp search-replace \u0026#39;olddomain.com\u0026#39; \u0026#39;newdomain.com\u0026#39; Additionally, if you\u0026rsquo;re switching from HTTP to HTTPS, you should also replace any instances of the old HTTP URL with the new HTTPS URL. Run this command: wp search-replace \u0026#39;http://domain.com\u0026#39; \u0026#39;https://domain.com\u0026#39; By following these steps, you can safely change your WordPress site\u0026rsquo;s domain URL using WP-CLI. Remember to make backups and exercise caution when running the wp search-replace command to avoid unintentional changes. ","date":"15-09-2021","objectID":"/posts/development/how-to-change-wordpress-site-url-with-wp-cli/:0:2","tags":null,"title":"How to Change WordPress Site URL With WP-CLI","uri":"/posts/development/how-to-change-wordpress-site-url-with-wp-cli/#2-rename-all-instances-of-the-old-url-to-the-new-url"},{"categories":["Development"],"collections":null,"content":"When working with Docker containers on Windows or Mac, you might need to access the IP address of the host machine from within the container. Docker provides two convenient DNS names that you can use to achieve this: host.docker.internal and gateway.docker.internal. ","date":"14-09-2021","objectID":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/:0:0","tags":null,"title":"Accessing Windows or Mac Host IP in Docker","uri":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/#"},{"categories":["Development"],"collections":null,"content":"Using host.docker.internal The DNS name host.docker.internal allows you to access the IP address of the host machine from within a Docker container. 
This is particularly useful when you need to communicate with services running on the host, such as a development server. Here\u0026rsquo;s how you can use it in your Docker container: $ docker run -it --rm alpine ping host.docker.internal In this example, the alpine container is used to run the ping command against host.docker.internal. This will resolve to the IP address of the host machine. ","date":"14-09-2021","objectID":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/:0:1","tags":null,"title":"Accessing Windows or Mac Host IP in Docker","uri":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/#using-hostdockerinternal"},{"categories":["Development"],"collections":null,"content":"Using gateway.docker.internal The DNS name gateway.docker.internal is used to access the default gateway IP address of the host machine from within a Docker container. This can be helpful when you need to communicate with external services or the internet from within the container. Here\u0026rsquo;s how you can use it in your Docker container: $ docker run -it --rm alpine ping gateway.docker.internal In this example, the alpine container is used to run the ping command against gateway.docker.internal. This will resolve to the IP address of the default gateway of the host machine. ","date":"14-09-2021","objectID":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/:0:2","tags":null,"title":"Accessing Windows or Mac Host IP in Docker","uri":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/#using-gatewaydockerinternal"},{"categories":["Development"],"collections":null,"content":"Limitations and Considerations Keep in mind the following limitations and considerations when using these DNS names: Supported Operating Systems: These DNS names (host.docker.internal and gateway.docker.internal) are specific to Docker Desktop on Windows and Docker Desktop on Mac. They may not work in other Docker environments. 
Container Network Mode: These DNS names work best when the Docker container is running in the default bridge network mode. If you\u0026rsquo;re using a custom network mode or network configuration, the behavior might vary. Firewall and Security Software: Make sure that any firewall or security software on the host machine doesn\u0026rsquo;t block the communication between the container and the host. DNS Resolution: Docker relies on DNS resolution to map these names to IP addresses. Ensure that DNS resolution is functioning properly on your system. ","date":"14-09-2021","objectID":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/:0:3","tags":null,"title":"Accessing Windows or Mac Host IP in Docker","uri":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/#limitations-and-considerations"},{"categories":["Development"],"collections":null,"content":"Summary When working with Docker on Windows or Mac, you can use the DNS names host.docker.internal and gateway.docker.internal to access the IP address of the host machine or its default gateway from within a Docker container. These DNS names provide a convenient way to enable communication between your containers and the host system. Just remember the limitations and considerations mentioned above to ensure smooth usage. Remember to adjust your Docker run commands or configurations to suit your specific use case and requirements. ","date":"14-09-2021","objectID":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/:0:4","tags":null,"title":"Accessing Windows or Mac Host IP in Docker","uri":"/posts/development/accessing-windows-or-mac-host-ip-in-docker/#summary"},{"categories":["Development"],"collections":null,"content":"The whatsapp-web.js library can fail to reconnect after your MacBook goes to sleep. 
The solution involves using a remote browser via Docker to run Chrome and then connecting whatsapp-web.js to it using the browserWSEndpoint option. This is a workaround to ensure that the connection is maintained even after your MacBook wakes up from sleep mode. Here\u0026rsquo;s a breakdown of the steps and code: ","date":"26-08-2021","objectID":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/:0:0","tags":null,"title":"Whatsapp Webjs Wont Reconnect After Macbook Sleep","uri":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/#"},{"categories":["Development"],"collections":null,"content":"Issue Description When you run whatsapp-web.js with Chromium and then put your MacBook to sleep, the application is unable to reconnect to Chromium when your MacBook wakes up. This results in the application staying idle even when new messages arrive. ","date":"26-08-2021","objectID":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/:0:1","tags":null,"title":"Whatsapp Webjs Wont Reconnect After Macbook Sleep","uri":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/#issue-description"},{"categories":["Development"],"collections":null,"content":"Proposed Solution To solve this issue, you can use a remote browser by running Chrome in Docker and opening a port for Puppeteer to connect to. Here\u0026rsquo;s how you can set it up: Docker Compose Configuration (docker-compose.yml) version: \u0026#34;3\u0026#34; services: chrome: restart: always image: browserless/chrome:latest environment: - MAX_CONCURRENT_SESSIONS=1 app: restart: always build: . depends_on: - \u0026#34;chrome\u0026#34; In this configuration: You define two services, chrome and app. The chrome service uses the browserless/chrome:latest Docker image, ensuring that Chrome is available for your application to connect to. 
You set the environment variable MAX_CONCURRENT_SESSIONS to 1 to limit the number of concurrent browser sessions to one. app.js Code In your app.js file, you configure the whatsapp-web.js client to use the remote browser opened by Docker: const client = new Client({ puppeteer: { browserWSEndpoint: \u0026#39;ws://chrome:3000\u0026#39;, // Connect to the remote browser }, session: sessionCfg, // Your session configuration }); Here, you set the browserWSEndpoint option to the WebSocket address of the remote Chrome instance running in Docker, which allows whatsapp-web.js to connect to it. ","date":"26-08-2021","objectID":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/:0:2","tags":null,"title":"Whatsapp Webjs Wont Reconnect After Macbook Sleep","uri":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/#proposed-solution"},{"categories":["Development"],"collections":null,"content":"Summary By using this Docker-based approach, you can ensure that whatsapp-web.js maintains a stable connection to Chromium even after your MacBook wakes up from sleep mode. This should prevent the issue of the application becoming idle when new messages arrive. ","date":"26-08-2021","objectID":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/:0:3","tags":null,"title":"Whatsapp Webjs Wont Reconnect After Macbook Sleep","uri":"/posts/development/whatsapp-webjs-wont-reconnect-after-macbook-sleep/#summary"},{"categories":["Development"],"collections":null,"content":"The following configuration modifies the Tomcat server information displayed when the server starts up. 
This can be achieved using Docker Compose by modifying the docker-compose.yml file as follows: version: \u0026#39;3\u0026#39; services: tomcat: image: tomcat:latest ports: - \u0026#34;8080:8080\u0026#34; command: \u0026gt; bash -c \u0026#34;mkdir -p /usr/local/tomcat/lib/org/apache/catalina/util/ \u0026amp;\u0026amp; echo server.info=PlantUML \u0026gt; /usr/local/tomcat/lib/org/apache/catalina/util/ServerInfo.properties \u0026amp;\u0026amp; catalina.sh run\u0026#34; In this Docker Compose configuration: We define a service named tomcat based on the tomcat:latest image. We map port 8080 from the host to port 8080 in the container to access the Tomcat server. The command section is where the custom startup command is defined. The \u0026gt; character is used to allow multiline commands. The custom command consists of the following steps: It creates the necessary directory structure using mkdir -p to ensure the directory exists. It uses echo to write the server.info property with the value PlantUML to the ServerInfo.properties file in the specified directory. Finally, it runs the catalina.sh run command to start the Tomcat server. By specifying this custom command, you are modifying the Tomcat server information to display \u0026ldquo;PlantUML\u0026rdquo; as the server version when it starts up. Make sure to place this modified docker-compose.yml file in the same directory where you run the docker-compose up command to start the Tomcat container with the specified custom command. ","date":"25-08-2021","objectID":"/posts/development/docker-compose-docker-compose-yml-command-multiline/:0:0","tags":null,"title":"Docker Compose docker-compose-yml Command Multiline","uri":"/posts/development/docker-compose-docker-compose-yml-command-multiline/#"},{"categories":["Development"],"collections":null,"content":"To run a PlantUML server on login for your Mac, you can follow these steps using Automator. 
This will ensure that the PlantUML server starts automatically every time you log in. Step 1: Install PlantUML First, make sure you have PlantUML installed on your Mac using Homebrew. Install it with the command: brew install plantuml Step 2: Create an Automator App Open \u0026ldquo;Automator\u0026rdquo; on your Mac. You can find it by searching for it in Spotlight or in the Applications folder. When Automator opens, it will ask you to choose a type for your document. Select \u0026ldquo;Application\u0026rdquo; and click \u0026ldquo;Choose.\u0026rdquo; In the left-hand Actions pane, search for \u0026ldquo;Run Shell Script\u0026rdquo; and drag it to the right-hand workflow pane. In the \u0026ldquo;Run Shell Script\u0026rdquo; action, set \u0026ldquo;Shell\u0026rdquo; to \u0026ldquo;/bin/bash\u0026rdquo; (or \u0026ldquo;/bin/zsh\u0026rdquo; if you use Zsh), and \u0026ldquo;Pass input\u0026rdquo; to \u0026ldquo;as arguments.\u0026rdquo; In the script box, add the following command to start the PlantUML server with nohup and send its output to /dev/null: nohup /usr/local/bin/plantuml -picoweb:9191 \u0026gt; /dev/null 2\u0026gt;\u0026amp;1 \u0026amp; Save your Automator workflow as an application by clicking \u0026ldquo;File\u0026rdquo; \u0026gt; \u0026ldquo;Save\u0026rdquo; and giving it a name like \u0026ldquo;PlantUMLServer.\u0026rdquo; Step 3: Add Automator App to Startup Apps Open \u0026ldquo;System Preferences\u0026rdquo; on your Mac. Click on \u0026ldquo;Users \u0026amp; Groups.\u0026rdquo; In the left-hand sidebar, select your user account. Click the \u0026ldquo;Login Items\u0026rdquo; tab on the right-hand side. Click the \u0026ldquo;+\u0026rdquo; button below the list of login items. Navigate to where you saved the \u0026ldquo;PlantUMLServer\u0026rdquo; Automator application and select it. Click the \u0026ldquo;Add\u0026rdquo; button to add it to the list of login items. 
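After your next login you can confirm the server actually came up. This is a hedged check, assuming the default port 9191 used in the command above; adjust the port if you changed it.

```shell
# Check that something is listening on the assumed port 9191.
# lsof and curl ship with macOS; adjust the port to match your setup.
lsof -nP -iTCP:9191 -sTCP:LISTEN
curl -sI http://localhost:9191/ | head -n 1   # expect an HTTP status line
```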
Now, every time you log in to your Mac, the PlantUML server should automatically start and run in the background. Please note that the paths and file names mentioned here assume a typical Homebrew installation. Make sure they match your system configuration. Additionally, ensure that you have the necessary permissions to run the PlantUML server on port 9191 if it requires elevated privileges. ","date":"25-08-2021","objectID":"/posts/development/mac-run-plantuml-server-on-login/:0:0","tags":null,"title":"Mac Run PlantUML Server on Login","uri":"/posts/development/mac-run-plantuml-server-on-login/#"},{"categories":["Development"],"collections":null,"content":"In this article, we\u0026rsquo;ll explore how to run background processes using Automator and shell scripts on a Mac. We\u0026rsquo;ll compare two methods, one using a basic shell command and the other employing the nohup command for more robust background process management. ","date":"25-08-2021","objectID":"/posts/development/running-background-processes-with-automator-and-shell-scripts-on-mac/:0:0","tags":null,"title":"Running Background Processes with Automator and Shell Scripts on Mac","uri":"/posts/development/running-background-processes-with-automator-and-shell-scripts-on-mac/#"},{"categories":["Development"],"collections":null,"content":"Method 1: Using a Basic Shell Command To run a background process using a basic shell command in Automator, follow these steps: Create a New Automator Workflow: Open Automator (you can find it in the Applications folder). Choose \u0026ldquo;File\u0026rdquo; \u0026gt; \u0026ldquo;New\u0026rdquo; to create a new document. Select \u0026ldquo;Workflow\u0026rdquo; as the document type. Add a \u0026ldquo;Run Shell Script\u0026rdquo; Action: In the library on the left-hand side, search for \u0026ldquo;Run Shell Script.\u0026rdquo; Drag and drop the \u0026ldquo;Run Shell Script\u0026rdquo; action into the workflow pane on the right. 
Configure the Shell Script Action: In the \u0026ldquo;Run Shell Script\u0026rdquo; action, set \u0026ldquo;Pass input\u0026rdquo; to \u0026ldquo;as arguments\u0026rdquo; to allow passing any potential input. In the script area, enter the command you want to run in the background. For example: /usr/local/bin/plantuml -picoweb:9191 \u0026amp; The \u0026amp; at the end of the command instructs the shell to run the process in the background. Save and Run the Workflow: Save the workflow with an appropriate name, such as \u0026ldquo;Run PlantUML Background.\u0026rdquo; To run the workflow, simply double-click it or select \u0026ldquo;Run\u0026rdquo; from the Automator toolbar. This method will execute the provided command in the background. ","date":"25-08-2021","objectID":"/posts/development/running-background-processes-with-automator-and-shell-scripts-on-mac/:1:0","tags":null,"title":"Running Background Processes with Automator and Shell Scripts on Mac","uri":"/posts/development/running-background-processes-with-automator-and-shell-scripts-on-mac/#method-1-using-a-basic-shell-command"},{"categories":["Development"],"collections":null,"content":"Method 2: Using nohup for Robust Background Execution If you want more control and robustness when running background processes, you can use the nohup command. This ensures that the process continues running even if you close the terminal or log out. Here\u0026rsquo;s how to do it: Create a New Automator Workflow (Same as Step 1 in Method 1). Add a \u0026ldquo;Run Shell Script\u0026rdquo; Action (Same as Step 2 in Method 1). 
Configure the Shell Script Action: In the \u0026ldquo;Run Shell Script\u0026rdquo; action, set \u0026ldquo;Pass input\u0026rdquo; to \u0026ldquo;as arguments.\u0026rdquo; In the script area, use the following command: nohup /usr/local/bin/plantuml -picoweb:9191 \u0026gt; /dev/null 2\u0026gt;\u0026amp;1 \u0026amp; The nohup command prevents the process from being terminated when you log out or close the terminal. The \u0026gt; /dev/null 2\u0026gt;\u0026amp;1 part redirects output and error messages to /dev/null, effectively silencing them. Save and Run the Workflow (Same as Step 4 in Method 1). This method is recommended if you want the background process to persist even after you log out or close the terminal. In conclusion, you can choose between the basic shell command for simple background tasks or the nohup command for more robust background process management when using Automator on your Mac. ","date":"25-08-2021","objectID":"/posts/development/running-background-processes-with-automator-and-shell-scripts-on-mac/:2:0","tags":null,"title":"Running Background Processes with Automator and Shell Scripts on Mac","uri":"/posts/development/running-background-processes-with-automator-and-shell-scripts-on-mac/#method-2-using-nohup-for-robust-background-execution"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re using a Java Jar Swing application on a Mac and find that you can\u0026rsquo;t copy text from a textbox using the standard keyboard shortcuts, such as CTRL+C for copy and Command+V for paste, there might be several reasons for this issue. Below, we\u0026rsquo;ll explore some potential solutions to help you resolve this problem. 
","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:0:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#"},{"categories":["Development"],"collections":null,"content":"1. Check the Key Bindings Java Swing applications often rely on custom key bindings for certain actions. It\u0026rsquo;s possible that the copy and paste actions in your application have been customized or overridden. To check this: Look for any custom key bindings in the application\u0026rsquo;s source code or configuration files. Make sure that CTRL+C and Command+C are properly mapped to the copy action, and CTRL+V and Command+V are mapped to the paste action. ","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:1:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#1-check-the-key-bindings"},{"categories":["Development"],"collections":null,"content":"2. Verify the Focus Textboxes and other components in Swing applications need to have focus for keyboard shortcuts to work correctly. If the textbox doesn\u0026rsquo;t have focus, the copy and paste shortcuts won\u0026rsquo;t work. To ensure focus is on the textbox: Click inside the textbox to make sure it\u0026rsquo;s selected. If you\u0026rsquo;re navigating through multiple textboxes, use the Tab key to cycle through them until you reach the desired one. ","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:2:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#2-verify-the-focus"},{"categories":["Development"],"collections":null,"content":"3. Check for macOS Compatibility Sometimes, Java Swing applications may not be fully compatible with macOS. 
To address this: Ensure that you\u0026rsquo;re using an up-to-date version of Java that\u0026rsquo;s compatible with your macOS version. Consider running the application in a compatibility mode or using a Java runtime environment (JRE) that\u0026rsquo;s optimized for macOS. ","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:3:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#3-check-for-macos-compatibility"},{"categories":["Development"],"collections":null,"content":"4. Try Different Key Combinations In some cases, the keyboard shortcuts for copy and paste might be different in macOS. Instead of CTRL+C and CTRL+V, try using Command+C and Command+V exclusively for copying and pasting within the Java Swing application. ","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:4:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#4-try-different-key-combinations"},{"categories":["Development"],"collections":null,"content":"5. Check for System-Level Key Conflicts There might be conflicts with system-level keyboard shortcuts that prevent the Java application from receiving the copy and paste commands. To check for this: Review your macOS keyboard shortcut settings and ensure there are no conflicts with the Java application\u0026rsquo;s keybindings. ","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:5:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#5-check-for-system-level-key-conflicts"},{"categories":["Development"],"collections":null,"content":"6. 
Test with Other Applications To narrow down the issue, test the copy and paste functionality in other applications on your Mac. This will help determine if the problem is specific to the Java Swing application or if it\u0026rsquo;s a system-wide issue. ","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:6:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#6-test-with-other-applications"},{"categories":["Development"],"collections":null,"content":"7. Update or Modify the Application If you have access to the source code of the Java Swing application, you can consider updating or modifying it to ensure compatibility with macOS and the desired keyboard shortcuts. ","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:7:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#7-update-or-modify-the-application"},{"categories":["Development"],"collections":null,"content":"8. Contact the Developer or Community If all else fails, consider reaching out to the developer of the Java Swing application or the community that supports it. They may have insights or solutions specific to that application. In summary, the inability to copy text from a Java Jar Swing application on a Mac using the standard keyboard shortcuts can be due to various factors, including custom key bindings, focus issues, compatibility problems, or conflicts with system-level shortcuts. By following the steps outlined above, you should be able to diagnose and resolve the issue. 
","date":"18-08-2021","objectID":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/:8:0","tags":null,"title":"Cant Copy Textbox from Java Jar Swing on Mac","uri":"/posts/development/cant-copy-textbox-from-java-jar-swing-on-mac/#8-contact-the-developer-or-community"},{"categories":["Software"],"collections":null,"content":"If you\u0026rsquo;re experiencing unresponsiveness in open applications due to Lulu blocking certain processes, particularly on macOS Big Sur (version 11.5), this article provides a step-by-step solution to address the issue. Lulu is a firewall application that helps you monitor and control outgoing network connections. It may occasionally cause applications like Sketch, Adobe XD, and Adobe Photoshop to become unresponsive when attempting to connect to specific DNS IP addresses. ","date":"15-08-2021","objectID":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/:0:0","tags":["mac"],"title":"Resolving Unresponsiveness of Open Apps Blocked by Lulu on macOS Big Sur","uri":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/#"},{"categories":["Software"],"collections":null,"content":"Solution 1: Avoid Blocking Specific DNS IP Addresses Launch Lulu: Open the Lulu application on your Mac. Access Preferences: Click on \u0026ldquo;Lulu\u0026rdquo; in the menu bar and choose \u0026ldquo;Preferences.\u0026rdquo; Examine Blocked DNS IP Addresses: In the Preferences window, navigate to the \u0026ldquo;Blocked Hosts\u0026rdquo; tab. Look for any DNS IP addresses that you have manually added, such as 192.168.0.1, 8.8.8.8, or 8.8.4.4. Remove DNS IP Addresses: To prevent potential conflicts and unresponsiveness issues, select the DNS IP addresses you added and click the \u0026ldquo;Remove\u0026rdquo; (minus) button. Apply Changes: After removing the DNS IP addresses, click the \u0026ldquo;Save Changes\u0026rdquo; button in the Preferences window. 
Restart Applications: Quit and relaunch the applications that were previously unresponsive. Check if they now function as expected. ","date":"15-08-2021","objectID":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/:1:0","tags":["mac"],"title":"Resolving Unresponsiveness of Open Apps Blocked by Lulu on macOS Big Sur","uri":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/#solution-1-avoid-blocking-specific-dns-ip-addresses"},{"categories":["Software"],"collections":null,"content":"Solution 2: Allowing Any Process to Use Specific DNS IP Open Lulu Preferences: Launch the Lulu application and access its Preferences as described in steps 1 and 2 of Solution 1. Configure Custom Rules: In the Preferences window, switch to the \u0026ldquo;Rules\u0026rdquo; tab. Add New Rule: Click the \u0026ldquo;+\u0026rdquo; (plus) button to create a new rule. Define Rule Parameters: In the \u0026ldquo;Process\u0026rdquo; dropdown, select \u0026ldquo;Any.\u0026rdquo; In the \u0026ldquo;Protocol\u0026rdquo; dropdown, choose \u0026ldquo;DNS (UDP \u0026amp; TCP).\u0026rdquo; In the \u0026ldquo;Direction\u0026rdquo; dropdown, select \u0026ldquo;Outgoing.\u0026rdquo; In the \u0026ldquo;Action\u0026rdquo; dropdown, choose \u0026ldquo;Allow.\u0026rdquo; In the \u0026ldquo;Remote IP\u0026rdquo; field, enter the DNS IP address you want to allow (e.g., 192.168.0.1). Save and Apply: Click the \u0026ldquo;Save Changes\u0026rdquo; button to save the new rule. Restart Applications: Quit and relaunch the previously unresponsive applications to see if they are now functioning correctly. 
","date":"15-08-2021","objectID":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/:2:0","tags":["mac"],"title":"Resolving Unresponsiveness of Open Apps Blocked by Lulu on macOS Big Sur","uri":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/#solution-2-allowing-any-process-to-use-specific-dns-ip"},{"categories":["Software"],"collections":null,"content":"Conclusion By following the provided solutions, you should be able to address the issue of unresponsiveness in open applications caused by Lulu\u0026rsquo;s process blocking on macOS Big Sur. If you encounter further problems or have specific DNS IP addresses that need to be allowed, you can adjust Lulu\u0026rsquo;s settings accordingly. Remember that Lulu\u0026rsquo;s firewall settings impact outgoing network connections, and careful configuration can ensure both security and optimal application performance. ","date":"15-08-2021","objectID":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/:3:0","tags":["mac"],"title":"Resolving Unresponsiveness of Open Apps Blocked by Lulu on macOS Big Sur","uri":"/posts/software/resolving-unresponsiveness-of-open-apps-blocked-by-lulu-on-macos-big-sur/#conclusion"},{"categories":["Development"],"collections":null,"content":"Smooth scrolling is a popular web design feature that enhances user experience by creating a fluid transition when navigating through a web page. However, there are instances where smooth scrolling might not work as expected, especially on specific platforms or browsers. This article addresses the issue of smooth scroll behavior not working in Elementor WordPress on iOS Safari and provides a solution using the smoothscroll-polyfill library. 
","date":"03-08-2021","objectID":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/:0:0","tags":["wordpress"],"title":"Fixing Smooth Scroll Behavior Issue in Elementor WordPress on iOS Safari","uri":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/#"},{"categories":["Development"],"collections":null,"content":"Identifying the Issue Smooth scrolling can sometimes behave inconsistently across different browsers and devices. One common issue is when smooth scroll doesn\u0026rsquo;t work on iOS Safari when using the Elementor page builder in WordPress. Users may notice a lack of smooth animation when scrolling, which can impact the overall user experience of the website. ","date":"03-08-2021","objectID":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/:1:0","tags":["wordpress"],"title":"Fixing Smooth Scroll Behavior Issue in Elementor WordPress on iOS Safari","uri":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/#identifying-the-issue"},{"categories":["Development"],"collections":null,"content":"The Solution: Using smoothscroll-polyfill To address the smooth scroll behavior issue on iOS Safari while using Elementor in WordPress, we can integrate the smoothscroll-polyfill library into the website. This library provides a consistent and smooth scrolling experience across various browsers, including those that do not natively support smooth scrolling. Follow these steps to implement the solution: Access Your WordPress Dashboard: Log in to your WordPress admin panel. Edit the Desired Page with Elementor: Locate the page where you want to enable smooth scrolling using Elementor and click on the \u0026ldquo;Edit with Elementor\u0026rdquo; button. Add a New Section for Custom Scripts: Within the Elementor editor, add a new section where you can insert the custom scripts. 
To do this, click on the \u0026ldquo;+\u0026rdquo; icon to add a new section. Add the smoothscroll-polyfill Script: Inside the newly added section, add an HTML widget. Then, paste the following code into the HTML widget to include the smoothscroll-polyfill script. \u0026lt;script defer src=\u0026#34;https://unpkg.com/smoothscroll-polyfill@0.4.4/dist/smoothscroll.min.js\u0026#34;\u0026gt;\u0026lt;/script\u0026gt; \u0026lt;script\u0026gt; // Run smoothscroll-polyfill when the page is fully loaded window.addEventListener(\u0026#39;load\u0026#39;, function() { smoothscroll.polyfill(); }); \u0026lt;/script\u0026gt; Update and Save: After adding the script, click the \u0026ldquo;Update\u0026rdquo; or \u0026ldquo;Save\u0026rdquo; button to save your changes. Preview and Test: To ensure that the smooth scroll behavior is working as expected, you can preview the page or view it live. Open the page on an iOS Safari browser and test the scrolling behavior. ","date":"03-08-2021","objectID":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/:2:0","tags":["wordpress"],"title":"Fixing Smooth Scroll Behavior Issue in Elementor WordPress on iOS Safari","uri":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/#the-solution-using-smoothscroll-polyfill"},{"categories":["Development"],"collections":null,"content":"Conclusion By following these steps and integrating the smoothscroll-polyfill library into your Elementor-powered WordPress website, you can address the issue of smooth scroll behavior not working on iOS Safari. This solution provides a consistent and smooth scrolling experience for all visitors, enhancing the overall usability and appeal of your website. Smooth scrolling is a valuable feature that contributes to a seamless and enjoyable browsing experience. 
With the help of the smoothscroll-polyfill library, you can ensure that your Elementor WordPress site delivers a smooth and engaging experience to users, regardless of the browser or device they are using. ","date":"03-08-2021","objectID":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/:3:0","tags":["wordpress"],"title":"Fixing Smooth Scroll Behavior Issue in Elementor WordPress on iOS Safari","uri":"/posts/development/fixing-smooth-scroll-behavior-issue-in-elementor-wordpress-on-ios-safari/#conclusion"},{"categories":["Development"],"collections":null,"content":"To speak text selections in different languages on iOS, you can create a shortcut using the Shortcuts app. Here\u0026rsquo;s a step-by-step guide on how to do it: ","date":"19-07-2021","objectID":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/:0:0","tags":null,"title":"How to Speak Text Selection With Difference Language on iOS","uri":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/#"},{"categories":["Development"],"collections":null,"content":"Create a Shortcut Open the Shortcuts App: If you don\u0026rsquo;t have it installed, you can download it from the App Store. Create a New Shortcut: Tap the \u0026ldquo;+\u0026rdquo; button to create a new shortcut. ","date":"19-07-2021","objectID":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/:1:0","tags":null,"title":"How to Speak Text Selection With Difference Language on iOS","uri":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/#create-a-shortcut"},{"categories":["Development"],"collections":null,"content":"Accept Text Add Action: Tap the \u0026ldquo;Add Action\u0026rdquo; button to add actions to your shortcut. Search for \u0026ldquo;Get Clipboard\u0026rdquo;: Use the search bar to find the \u0026ldquo;Get Clipboard\u0026rdquo; action and add it to your shortcut. 
This action will capture the text you\u0026rsquo;ve copied to the clipboard. ","date":"19-07-2021","objectID":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/:2:0","tags":null,"title":"How to Speak Text Selection With Difference Language on iOS","uri":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/#accept-text"},{"categories":["Development"],"collections":null,"content":"Speak Shortcut Input Add Action: Tap \u0026ldquo;Add Action\u0026rdquo; again to add another action. Search for \u0026ldquo;Speak Text\u0026rdquo;: Use the search bar to find the \u0026ldquo;Speak Text\u0026rdquo; action and add it to your shortcut. Configure \u0026ldquo;Speak Text\u0026rdquo; Action: In the \u0026ldquo;Text\u0026rdquo; field, tap on \u0026ldquo;Clipboard\u0026rdquo; to select the text you captured earlier. You can also configure the voice and language in this action by tapping on the respective options and selecting your preferred language and voice. ","date":"19-07-2021","objectID":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/:3:0","tags":null,"title":"How to Speak Text Selection With Difference Language on iOS","uri":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/#speak-shortcut-input"},{"categories":["Development"],"collections":null,"content":"Click Show More Add Action: Tap \u0026ldquo;Add Action\u0026rdquo; once more. Search for \u0026ldquo;Show More\u0026rdquo;: Use the search bar to find the \u0026ldquo;Show More\u0026rdquo; action and add it to your shortcut. This action will allow you to customize your shortcut further. 
","date":"19-07-2021","objectID":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/:4:0","tags":null,"title":"How to Speak Text Selection With Difference Language on iOS","uri":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/#click-show-more"},{"categories":["Development"],"collections":null,"content":"Change Language and Voice of your choice Add Action: Tap \u0026ldquo;Add Action\u0026rdquo; again. Search for \u0026ldquo;Set Language\u0026rdquo;: Use the search bar to find the \u0026ldquo;Set Language\u0026rdquo; action and add it to your shortcut. Configure \u0026ldquo;Set Language\u0026rdquo; Action: Tap on \u0026ldquo;Language\u0026rdquo; to select the language you want to switch to. You can also tap on \u0026ldquo;Voice\u0026rdquo; to choose a specific voice for the selected language. ","date":"19-07-2021","objectID":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/:5:0","tags":null,"title":"How to Speak Text Selection With Difference Language on iOS","uri":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/#change-language-and-voice-of-your-choice"},{"categories":["Development"],"collections":null,"content":"Try select text and click share Now, whenever you select text in any app and tap the \u0026ldquo;Share\u0026rdquo; option, you should see your shortcut in the list of actions. Select the Shortcut: Choose the shortcut you created from the list. That\u0026rsquo;s it! Your shortcut is ready to use. When you select text and share it, then choose your shortcut, it will speak the selected text in the language and voice you configured. You can also add more actions to your shortcut if you want to perform additional tasks or customize it further. 
","date":"19-07-2021","objectID":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/:6:0","tags":null,"title":"How to Speak Text Selection With Difference Language on iOS","uri":"/posts/development/how-to-speak-text-selection-with-difference-language-on-ios/#try-select-text-and-click-share"},{"categories":["Development"],"collections":null,"content":"When running the apachectl configtest command in Apache, you may encounter the following error message: AH00558: apache2: Could not reliably determine the server\u0026#39;s fully qualified domain name, using domain.com. Set the \u0026#39;ServerName\u0026#39; directive globally to suppress this messageThis error occurs when Apache is unable to determine the fully qualified domain name (FQDN) of your server. It\u0026rsquo;s a warning rather than a critical error, but it\u0026rsquo;s a good practice to address it to avoid potential issues. Here\u0026rsquo;s how you can resolve this issue: ","date":"13-07-2021","objectID":"/posts/development/troubleshooting-apache-error-ah00558/:0:0","tags":null,"title":"Troubleshooting Apache Error Ah00558","uri":"/posts/development/troubleshooting-apache-error-ah00558/#"},{"categories":["Development"],"collections":null,"content":"Solution Edit the Apache Configuration File: Open your Apache configuration file using a text editor. The location of this file may vary depending on your system, but it\u0026rsquo;s often located in the /etc/apache2 directory. Use the following command to open the file with sudo privileges: sudo nano /etc/apache2/apache2.conf Set the ServerName Directive: Inside the configuration file, locate the ServerName directive. If it doesn\u0026rsquo;t exist, you can add it at the end of the file. Set it to your server\u0026rsquo;s fully qualified domain name (FQDN) or a reasonable substitute. For example: ServerName your-server-fqdn.com Replace your-server-fqdn.com with the actual FQDN of your server. 
If you don\u0026rsquo;t have an FQDN, you can use the server\u0026rsquo;s IP address: ServerName your-server-ip-address Save and Exit: Save your changes by pressing Ctrl + O, then press Enter. Exit the text editor by pressing Ctrl + X. Test the Configuration: After making the changes, it\u0026rsquo;s a good practice to test the Apache configuration again using the apachectl configtest command: sudo apachectl configtest If there are no syntax errors, you should see a message like Syntax OK. Restart Apache: Finally, restart Apache to apply the changes: sudo systemctl restart apache2 The error should be resolved, and Apache will use the specified ServerName directive to determine the server\u0026rsquo;s FQDN, suppressing the warning message. By following these steps, you can address the AH00558 error and ensure that your Apache server is configured correctly. ","date":"13-07-2021","objectID":"/posts/development/troubleshooting-apache-error-ah00558/:1:0","tags":null,"title":"Troubleshooting Apache Error Ah00558","uri":"/posts/development/troubleshooting-apache-error-ah00558/#solution"},{"categories":["Software"],"collections":null,"content":"GitLab Web IDE is a powerful web-based integrated development environment that allows users to edit, commit, and manage their GitLab projects directly from the browser. However, some users may encounter an error while trying to edit files in the Web IDE when accessing GitLab through an Apache reverse proxy. This article provides a step-by-step solution to resolve the \u0026ldquo;Error while loading the project data. Please try again\u0026rdquo; message that appears in this scenario. 
","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:0:0","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#"},{"categories":["Software"],"collections":null,"content":"Problem Description When attempting to use the GitLab Web IDE on a private GitLab instance accessed through an Apache reverse proxy, users encounter the error message \u0026ldquo;Error while loading the project data. Please try again.\u0026rdquo; ","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:1:0","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#problem-description"},{"categories":["Software"],"collections":null,"content":"Solution To resolve this issue, you need to configure the Apache reverse proxy to properly handle WebSocket connections for the GitLab Web IDE. 
Follow the steps below to set up the Apache reverse proxy for WebSocket: ","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:2:0","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#solution"},{"categories":["Software"],"collections":null,"content":"Step 1: Edit Apache Configuration Open the Apache configuration file on your server. Depending on your distribution, the file may be located in different directories such as /etc/apache2/apache2.conf or /etc/httpd/httpd.conf. You may also have separate configuration files in the /etc/apache2/sites-available/ or /etc/httpd/conf.d/ directory. ","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:3:0","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#step-1-edit-apache-configuration"},{"categories":["Software"],"collections":null,"content":"Step 2: Enable \u0026lsquo;AllowEncodedSlashes\u0026rsquo; Directive Locate the \u0026lt;VirtualHost\u0026gt; block for your GitLab instance in the Apache configuration file. Inside the block, add or modify the AllowEncodedSlashes directive and set it to NoDecode. This step is essential for handling GitLab URLs correctly. Example \u0026lt;VirtualHost *:80\u0026gt; ServerName gitlab.example.com ... AllowEncodedSlashes NoDecode ... 
\u0026lt;/VirtualHost\u0026gt; ","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:3:1","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#step-2-enable-allowencodedslashes-directive"},{"categories":["Software"],"collections":null,"content":"Step 3: Enable WebSocket Proxy Within the same \u0026lt;VirtualHost\u0026gt; block, enable the WebSocket proxy by adding the following Rewrite rules. These rules will redirect WebSocket requests to the GitLab server: \u0026lt;VirtualHost *:80\u0026gt; ServerName gitlab.example.com ... 
AllowEncodedSlashes NoDecode RewriteEngine On # Handle WebSocket upgrade requests RewriteCond %{HTTP:Upgrade} =websocket [NC] RewriteRule /(.*) ws://localhost:30080/$1 [P,QSA,NE] # Handle other HTTP requests RewriteCond %{HTTP:Upgrade} !=websocket [NC] RewriteRule /(.*) http://localhost:30080/$1 [P,QSA,NE] ... \u0026lt;/VirtualHost\u0026gt; Make sure to replace gitlab.example.com with the domain or IP address used to access your GitLab instance. The WebSocket connections will be redirected to the GitLab server on port 30080. ","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:3:2","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#step-3-enable-websocket-proxy"},{"categories":["Software"],"collections":null,"content":"Step 4: Restart Apache After making the changes, save the Apache configuration file and restart Apache to apply the modifications: # For Ubuntu/Debian sudo service apache2 restart # For CentOS/RHEL sudo systemctl restart httpd ","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:3:3","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#step-4-restart-apache"},{"categories":["Software"],"collections":null,"content":"Conclusion By following the steps outlined in this article, you should be able to resolve the \u0026ldquo;Error while loading the project data. 
Please try again\u0026rdquo; message encountered when using the GitLab Web IDE on a private GitLab instance accessed through an Apache reverse proxy. The setup allows WebSocket connections to be correctly handled, ensuring a smooth and error-free experience with the Web IDE. Remember to always make backups of your configuration files before making changes and verify your setup\u0026rsquo;s correctness to avoid potential issues. ","date":"12-07-2021","objectID":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/:4:0","tags":["gitlab"],"title":"Troubleshooting GitLab Web IDE Error When Trying to Edit a File on a Private GitLab Using Apache Reverse Proxy","uri":"/posts/software/troubleshooting-gitlab-web-ide-error-when-trying-to-edit-a-file-on-a-private-gitlab-using-apache-reverse-proxy/#conclusion"},{"categories":["Software"],"collections":null,"content":"Snap Camera is a popular desktop application that allows users to add various fun and creative filters to their webcam feed. However, some users may encounter issues when using Snap Camera with Zoom on a Mac. One common problem is that the Snap Camera application fails to work correctly or does not show up as an option within Zoom\u0026rsquo;s video settings. In this troubleshooting guide, we\u0026rsquo;ll go through the steps to resolve the issue of Snap Camera not working on Zoom for Mac users. ","date":"19-06-2021","objectID":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/:0:0","tags":["mac"],"title":"Troubleshooting Snap Camera Not Working on Zoom on Mac","uri":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/#"},{"categories":["Software"],"collections":null,"content":"Step 1: Check Snap Camera Compatibility Before proceeding with any troubleshooting steps, make sure that your version of Snap Camera is compatible with your Mac\u0026rsquo;s operating system. 
Visit the Snap Camera website (https://snapcamera.snapchat.com/) and check for any updates or compatibility requirements. Download and install the latest version of Snap Camera if necessary. ","date":"19-06-2021","objectID":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/:1:0","tags":["mac"],"title":"Troubleshooting Snap Camera Not Working on Zoom on Mac","uri":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/#step-1-check-snap-camera-compatibility"},{"categories":["Software"],"collections":null,"content":"Step 2: Verify Zoom and Snap Camera Settings Open both Zoom and Snap Camera applications on your Mac. In the Zoom app, click on your profile picture and navigate to \u0026ldquo;Settings.\u0026rdquo; Within the settings, select \u0026ldquo;Video\u0026rdquo; from the left-hand menu. Under the \u0026ldquo;Camera\u0026rdquo; section, check if Snap Camera is listed as an option. If it\u0026rsquo;s not present, move to Step 3. ","date":"19-06-2021","objectID":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/:2:0","tags":["mac"],"title":"Troubleshooting Snap Camera Not Working on Zoom on Mac","uri":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/#step-2-verify-zoom-and-snap-camera-settings"},{"categories":["Software"],"collections":null,"content":"Step 3: Remove Zoom.app Signature Sometimes, Zoom\u0026rsquo;s app signature may interfere with Snap Camera, preventing it from being recognized. By removing the signature, you can potentially resolve this issue. Here\u0026rsquo;s how to do it: Open the Terminal application on your Mac. (You can find it in Applications \u0026gt; Utilities \u0026gt; Terminal). Copy and paste the following command into the Terminal window: sudo codesign --remove-signature /Applications/zoom.us.app Press Enter and enter your Mac user password when prompted. 
Note that when typing your password, you won\u0026rsquo;t see any visual feedback (no asterisks or characters). Restart your Mac to apply the changes. ","date":"19-06-2021","objectID":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/:3:0","tags":["mac"],"title":"Troubleshooting Snap Camera Not Working on Zoom on Mac","uri":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/#step-3-remove-zoomapp-signature"},{"categories":["Software"],"collections":null,"content":"Step 4: Reinstall Snap Camera and Zoom If the issue persists, try reinstalling both Snap Camera and Zoom to ensure that you have the latest versions installed. Here\u0026rsquo;s how: Uninstall Snap Camera: Open Finder and navigate to Applications. Locate Snap Camera, drag it to the Trash, and empty the Trash. Uninstall Zoom: Open Finder and navigate to Applications. Locate Zoom, drag it to the Trash, and empty the Trash. Download and install the latest versions of Snap Camera and Zoom from their respective official websites. ","date":"19-06-2021","objectID":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/:4:0","tags":["mac"],"title":"Troubleshooting Snap Camera Not Working on Zoom on Mac","uri":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/#step-4-reinstall-snap-camera-and-zoom"},{"categories":["Software"],"collections":null,"content":"Step 5: Contact Support If the problem still persists after following all the above steps, it\u0026rsquo;s possible that there may be a more specific issue with your system or software configuration. In this case, it is recommended to contact the support teams of both Zoom and Snap Camera to get further assistance. 
","date":"19-06-2021","objectID":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/:5:0","tags":["mac"],"title":"Troubleshooting Snap Camera Not Working on Zoom on Mac","uri":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/#step-5-contact-support"},{"categories":["Software"],"collections":null,"content":"Conclusion By following these troubleshooting steps, you can potentially resolve the issue of Snap Camera not working on Zoom on your Mac. Checking compatibility, verifying settings, removing Zoom\u0026rsquo;s app signature, and reinstalling both applications can help you enjoy using Snap Camera\u0026rsquo;s fun filters during your Zoom meetings. ","date":"19-06-2021","objectID":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/:6:0","tags":["mac"],"title":"Troubleshooting Snap Camera Not Working on Zoom on Mac","uri":"/posts/software/troubleshooting-snap-camera-not-working-on-zoom-on-mac/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you find that your Windows computer screen is turning off too quickly and you want to prevent it from doing so, you can use the powercfg command to override the default power settings. This can be particularly useful if you\u0026rsquo;re watching a movie, giving a presentation, or working on a task that requires your screen to remain active for an extended period of time. Here\u0026rsquo;s how you can create, delete, and list power request overrides using the powercfg command. ","date":"07-06-2021","objectID":"/posts/development/how-to-prevent-your-screen-from-turning-off-on-windows/:0:0","tags":null,"title":"How to Prevent Your Screen from Turning Off on Windows","uri":"/posts/development/how-to-prevent-your-screen-from-turning-off-on-windows/#"},{"categories":["Development"],"collections":null,"content":"Creating a Power Request Override To create a power request override and prevent your screen from turning off, follow these steps: 1. 
Open Command Prompt with administrative privileges. You can do this by searching for \u0026#34;Command Prompt\u0026#34; in the Windows Start menu, right-clicking it, and selecting \u0026#34;Run as administrator.\u0026#34; 2. In the Command Prompt window, type the following command: powercfg /REQUESTSOVERRIDE process barrierc.exe display system This command creates a power request override for a process named \u0026ldquo;barrierc.exe\u0026rdquo; and specifies that it needs to keep the display and system active. Press Enter to execute the command. You should see a confirmation message indicating that the power request override was successfully created. Deleting a Power Request Override If you want to remove a power request override and allow your screen to turn off based on the default power settings, follow these steps: 1. Open Command Prompt with administrative privileges as mentioned earlier. 2. In the Command Prompt window, type the following command to delete the power request override for \u0026#34;barrierc.exe\u0026#34;: powercfg /REQUESTSOVERRIDE process barrierc.exe This command removes the power request override associated with the specified process. Press Enter to execute the command. You should see a confirmation message indicating that the power request override was successfully deleted. Listing Power Request Overrides If you want to view a list of all power request overrides currently set on your system, use the following command: powercfg /REQUESTSOVERRIDE This command will display a list of all processes and their corresponding power request overrides. You can use this list to check which processes are preventing your screen from turning off. By using these commands, you can have more control over your Windows power settings and ensure that your screen stays active when you need it to, without interfering with your computer\u0026rsquo;s default behavior. 
","date":"07-06-2021","objectID":"/posts/development/how-to-prevent-your-screen-from-turning-off-on-windows/:0:1","tags":null,"title":"How to Prevent Your Screen from Turning Off on Windows","uri":"/posts/development/how-to-prevent-your-screen-from-turning-off-on-windows/#creating-a-power-request-override"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re encountering the \u0026ldquo;Postfix Error: bind 0.0.0.0 port 25: Address already in use\u0026rdquo; error message, it means that the default SMTP port 25 is already in use by another service on your server. To resolve this issue, you can change the port that Postfix listens on for incoming email connections. Here\u0026rsquo;s how to do it: Open your master.cf configuration file using a text editor, for example vim: vim /etc/postfix/master.cf In the master.cf file, look for the following line: smtp inet n - n - - smtpd Replace \u0026ldquo;smtp\u0026rdquo; with a new port number of your choice. In this example, we\u0026rsquo;ll use port 26, but you can choose a different port if you prefer: 26 inet n - n - - smtpd Save the changes and exit the text editor. The master.cf entry alone determines which port the smtpd service listens on; there is no smtpd_port parameter in main.cf, so no further configuration change is needed. Restart the Postfix service to apply the changes: systemctl restart postfix Your Postfix mail server should now be listening on the new port (in this example, port 26) for incoming email connections. Be sure to update your email client settings to use this new port when configuring email accounts. 
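For reference, a sketch of how the edited fragment of /etc/postfix/master.cf might look after the change described above; port 26 is just the example value from the text, and the column header follows the stock file shipped with Postfix:

```
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
# ==========================================================================
#smtp     inet  n       -       n       -       -       smtpd
26        inet  n       -       n       -       -       smtpd
```

The original smtp line is left commented out here so the default can be restored easily.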
Remember that when you change the SMTP port, you\u0026rsquo;ll need to inform your email clients (e.g., Outlook, Thunderbird) about the new port when setting up or modifying email account configurations. Additionally, ensure that any firewall or security group settings allow traffic on the new SMTP port if you\u0026rsquo;re running Postfix on a server with firewall rules. ","date":"06-06-2021","objectID":"/posts/development/how-to-fix-postfix-error-bind-0000-port-25-address-already-in-use/:0:0","tags":null,"title":"How To Fix Postfix Error Bind 0000 Port 25 Address Already In Use","uri":"/posts/development/how-to-fix-postfix-error-bind-0000-port-25-address-already-in-use/#"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the steps to configure Postfix on an Ubuntu system to send emails through your Gmail account. This can be useful for various purposes, such as sending automated emails from your server. Follow these steps: ","date":"06-06-2021","objectID":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/:0:0","tags":null,"title":"How to Send Gmail Email With Postfix on Ubuntu","uri":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Install Required Packages First, you need to install the necessary packages. Open a terminal and run the following command: sudo apt update sudo apt install postfix libsasl2-modules mailutils During the installation process, you will be prompted to configure Postfix. Select \u0026ldquo;Internet Site\u0026rdquo; and press Enter. Enter your domain name (or set your localhost name if you don\u0026rsquo;t have a domain). 
","date":"06-06-2021","objectID":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/:0:1","tags":null,"title":"How to Send Gmail Email With Postfix on Ubuntu","uri":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/#step-1-install-required-packages"},{"categories":["Development"],"collections":null,"content":"Step 2: Create the SASL Password File Now, you need to create a file to store your Gmail account credentials securely. Open the SASL password file for editing with your preferred text editor. In this example, we\u0026rsquo;ll use Vim: sudo vim /etc/postfix/sasl/sasl_passwd Add the following line to the file, replacing username@gmail.com with your Gmail email address and password with your Gmail password: [smtp.gmail.com]:587 username@gmail.com:password Save and close the file. Next, restrict the permissions of this file to ensure it\u0026rsquo;s not accessible to unauthorized users: sudo chmod 0600 /etc/postfix/sasl/sasl_passwd Then, create a hash database from this file: sudo postmap /etc/postfix/sasl/sasl_passwd ","date":"06-06-2021","objectID":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/:0:2","tags":null,"title":"How to Send Gmail Email With Postfix on Ubuntu","uri":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/#step-2-create-the-sasl-password-file"},{"categories":["Development"],"collections":null,"content":"Step 3: Configure Postfix Open the Postfix configuration file for editing: sudo vim /etc/postfix/main.cf Find the line that starts with relayhost = and modify it as follows: relayhost = [smtp.gmail.com]:587 Add the following lines to enable SASL authentication, disallow anonymous authentication, specify the location of the SASL password file, enable STARTTLS encryption, and specify the location of CA certificates: # Enable SASL authentication smtp_sasl_auth_enable = yes # Disallow methods that allow anonymous authentication smtp_sasl_security_options = noanonymous # Location 
of sasl_passwd smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd # Enable STARTTLS encryption smtp_tls_security_level = encrypt # Location of CA certificates smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt Save and close the file. ","date":"06-06-2021","objectID":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/:0:3","tags":null,"title":"How to Send Gmail Email With Postfix on Ubuntu","uri":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/#step-3-configure-postfix"},{"categories":["Development"],"collections":null,"content":"Step 4: Restart Postfix To apply the changes you made to the Postfix configuration, restart the Postfix service: sudo systemctl restart postfix ","date":"06-06-2021","objectID":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/:0:4","tags":null,"title":"How to Send Gmail Email With Postfix on Ubuntu","uri":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/#step-4-restart-postfix"},{"categories":["Development"],"collections":null,"content":"Sending Gmail Emails with Postfix You have now configured Postfix to send emails through your Gmail account. You can use the mail command or any other email-sending application on your Ubuntu server to send emails via your Gmail account through the Postfix configuration you set up. Remember to use this configuration responsibly, and do not share your Gmail credentials or leave them exposed on your server. ","date":"06-06-2021","objectID":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/:0:5","tags":null,"title":"How to Send Gmail Email With Postfix on Ubuntu","uri":"/posts/development/how-to-send-gmail-email-with-postfix-on-ubuntu/#sending-gmail-emails-with-postfix"},{"categories":["Development"],"collections":null,"content":"If you want to send an email via the console on your Mac using a custom \u0026ldquo;From\u0026rdquo; address with Gmail, you can use the sendmail command. 
Gmail\u0026rsquo;s SMTP servers can be configured to allow sending emails from custom addresses. Here\u0026rsquo;s how you can send an email from the console with a custom \u0026ldquo;From\u0026rdquo; address using sendmail: ","date":"05-06-2021","objectID":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/:0:0","tags":null,"title":"Sending Email via Console with Custom From Address on Mac Using Gmail","uri":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites You should have a Gmail account set up. Make sure you have the sendmail command installed on your Mac. You can install it using Homebrew if you don\u0026rsquo;t have it already: brew install sendmail ","date":"05-06-2021","objectID":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/:1:0","tags":null,"title":"Sending Email via Console with Custom From Address on Mac Using Gmail","uri":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Sending the Email ","date":"05-06-2021","objectID":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/:2:0","tags":null,"title":"Sending Email via Console with Custom From Address on Mac Using Gmail","uri":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/#sending-the-email"},{"categories":["Development"],"collections":null,"content":"Method 1: Using printf and Pipe This method allows you to set the \u0026ldquo;From\u0026rdquo; address and other email details using the printf command and pipe it to sendmail. Replace the placeholders with your own email addresses and content. 
printf \u0026#34;From: Your Name \u0026lt;your.email@gmail.com\u0026gt;\\nTo: Recipient Name \u0026lt;recipient.email@example.com\u0026gt;\\nSubject: Your Subject\\n\\nEmail body text.\u0026#34; | sendmail -t Replace the following placeholders: Your Name with your name. your.email@gmail.com with your Gmail email address. Recipient Name with the recipient\u0026rsquo;s name. recipient.email@example.com with the recipient\u0026rsquo;s email address. Your Subject with the email subject. Email body text with the content of your email. ","date":"05-06-2021","objectID":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/:2:1","tags":null,"title":"Sending Email via Console with Custom From Address on Mac Using Gmail","uri":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/#method-1-using-printf-and-pipe"},{"categories":["Development"],"collections":null,"content":"Method 2: Using Flags Alternatively, you can use flags to specify the \u0026ldquo;From\u0026rdquo; address and other email details when using sendmail. Replace the placeholders with your own email addresses and content. sendmail -F \u0026#34;Your Name\u0026#34; -f your.email@gmail.com -t \u0026lt;\u0026lt;EOF To: Recipient Name \u0026lt;recipient.email@example.com\u0026gt; Subject: Your Subject Email body text. EOF Replace the following placeholders: Your Name with your name. your.email@gmail.com with your Gmail email address. Recipient Name with the recipient\u0026rsquo;s name. recipient.email@example.com with the recipient\u0026rsquo;s email address. Your Subject with the email subject. Email body text with the content of your email. 
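Before pointing the output at sendmail, it can help to preview the raw message that the printf in Method 1 constructs; a minimal sketch, piping to cat so nothing is actually sent (the addresses are the same placeholders used above):

```shell
# Build the message exactly as in Method 1, but pipe it to `cat` so the
# headers and body can be inspected; swap `cat` for `sendmail -t` to send.
printf "From: Your Name <your.email@gmail.com>\nTo: Recipient Name <recipient.email@example.com>\nSubject: Your Subject\n\nEmail body text.\n" | cat
```

`sendmail -t` reads the recipient list from the To: header of the message itself, which is why no address needs to be given on the command line.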
","date":"05-06-2021","objectID":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/:2:2","tags":null,"title":"Sending Email via Console with Custom From Address on Mac Using Gmail","uri":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/#method-2-using-flags"},{"categories":["Development"],"collections":null,"content":"Notes Gmail might require you to allow less secure apps to access your account for this to work. You may need to configure your Gmail account to allow \u0026ldquo;less secure apps\u0026rdquo; to access it. Be cautious when using this setting, as it can make your account less secure. An alternative is to generate an \u0026ldquo;App Password\u0026rdquo; for this purpose. Make sure to replace all placeholders with your actual information. It\u0026rsquo;s important to test this with your Gmail account and review Gmail\u0026rsquo;s security settings to ensure that your account is protected. ","date":"05-06-2021","objectID":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/:3:0","tags":null,"title":"Sending Email via Console with Custom From Address on Mac Using Gmail","uri":"/posts/development/sending-email-via-console-with-custom-from-address-on-mac-using-gmail/#notes"},{"categories":["Development"],"collections":null,"content":"If you want to hear your own microphone voice on your Mac, you can easily do so using QuickTime Player. This can be useful for monitoring your audio input or for various recording and communication purposes. Follow these steps to set it up: Open QuickTime Player: You can find QuickTime Player in your Applications folder or by using Spotlight Search (press Command + Space, then start typing \u0026ldquo;QuickTime Player\u0026rdquo; and press Enter). Create a New Audio Recording: Go to the \u0026ldquo;File\u0026rdquo; menu at the top left corner of your screen. 
Select \u0026ldquo;New Audio Recording\u0026rdquo; from the dropdown menu. Adjust the Volume Feedback: After you\u0026rsquo;ve started the audio recording, you\u0026rsquo;ll see a small audio recording window. On this window, you\u0026rsquo;ll notice a volume slider. This slider controls the feedback volume, allowing you to hear your own microphone input. Slide the volume feedback slider to the right to increase the volume until you can hear your microphone input clearly. Test Your Microphone: Start speaking or making sounds into your microphone. You should now be able to hear your voice through your Mac\u0026rsquo;s speakers or headphones, depending on your audio output settings. Adjust Volume as Needed: If the volume is too loud or too quiet, you can fine-tune it using the volume feedback slider. Move the slider to the left to decrease the volume or to the right to increase it until it\u0026rsquo;s at a comfortable level for you. Recording (Optional): If you also want to record your microphone input while monitoring it, you can click the red record button in the QuickTime Player audio recording window. When you\u0026rsquo;re done recording, click the stop button to save the recording. Close QuickTime Player: Once you\u0026rsquo;re finished with monitoring or recording your microphone, you can close QuickTime Player. That\u0026rsquo;s it! You should now be able to hear your own microphone voice on your Mac using QuickTime Player. This can be especially helpful for checking your microphone\u0026rsquo;s quality, ensuring it\u0026rsquo;s working correctly, or for various audio-related tasks. 
","date":"04-06-2021","objectID":"/posts/development/how-to-hear-your-own-microphone-voice-on-mac/:0:0","tags":null,"title":"How to Hear Your Own Microphone Voice on Mac","uri":"/posts/development/how-to-hear-your-own-microphone-voice-on-mac/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re concerned about privacy or simply want more control over your contacts, hosting your own Apple Contacts account using Nextcloud is a great option. This allows you to store and sync your contacts securely on your own server. Here\u0026rsquo;s a step-by-step guide on how to set it up: ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/:0:0","tags":null,"title":"How to Host Your Own Apple Contacts Account on Mac and iOS using Nextcloud","uri":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Nextcloud Server: You\u0026rsquo;ll need a Nextcloud server set up. You can either host your own Nextcloud server or use a Nextcloud hosting provider. Nextcloud Contacts App: Ensure that you have the \u0026ldquo;Contacts\u0026rdquo; app installed and enabled on your Nextcloud server. Mac and iOS Devices: You\u0026rsquo;ll need Apple devices such as a Mac computer and an iPhone or iPad to complete the setup. 
","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/:1:0","tags":null,"title":"How to Host Your Own Apple Contacts Account on Mac and iOS using Nextcloud","uri":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Steps to Host Your Apple Contacts Account on Mac and iOS using Nextcloud ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/:2:0","tags":null,"title":"How to Host Your Own Apple Contacts Account on Mac and iOS using Nextcloud","uri":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/#steps-to-host-your-apple-contacts-account-on-mac-and-ios-using-nextcloud"},{"categories":["Development"],"collections":null,"content":"1. Configure Nextcloud Contacts App Open your Nextcloud server in a web browser. Log in to your Nextcloud account. Navigate to the \u0026ldquo;Contacts\u0026rdquo; app. You can usually find it on the bottom of the navigation panel. In the \u0026ldquo;Contacts\u0026rdquo; app, look for the WebDAV URL. This URL will be used to connect your Apple devices to your Nextcloud contacts. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/:2:1","tags":null,"title":"How to Host Your Own Apple Contacts Account on Mac and iOS using Nextcloud","uri":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/#1-configure-nextcloud-contacts-app"},{"categories":["Development"],"collections":null,"content":"2. Set Up on Mac On your Mac computer, go to \u0026ldquo;System Preferences.\u0026rdquo; Click on \u0026ldquo;Internet Accounts.\u0026rdquo; Click the \u0026ldquo;+\u0026rdquo; button to add a new account. 
Choose \u0026ldquo;CardDAV Account\u0026rdquo; from the list of account types. Fill in the following details: Server Address: Enter the WebDAV URL from your Nextcloud Contacts app. User Name: Your Nextcloud username. Password: Your Nextcloud password. Click \u0026ldquo;Sign In\u0026rdquo; to connect your Mac\u0026rsquo;s Contacts app to your Nextcloud account. Your Mac will now sync your Nextcloud contacts to the Contacts app. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/:2:2","tags":null,"title":"How to Host Your Own Apple Contacts Account on Mac and iOS using Nextcloud","uri":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/#2-set-up-on-mac"},{"categories":["Development"],"collections":null,"content":"3. Set Up on iOS On your iOS device (iPhone or iPad), go to \u0026ldquo;Settings.\u0026rdquo; Scroll down and tap on \u0026ldquo;Contacts.\u0026rdquo; Tap \u0026ldquo;Accounts\u0026rdquo; and then \u0026ldquo;Add Account.\u0026rdquo; Choose \u0026ldquo;CardDAV Account.\u0026rdquo; Fill in the following details: Server: Enter the WebDAV URL from your Nextcloud Contacts app. User Name: Your Nextcloud username. Password: Your Nextcloud password. Description: Give your account a name (e.g., \u0026ldquo;Nextcloud Contacts\u0026rdquo;). Tap \u0026ldquo;Next\u0026rdquo; or \u0026ldquo;Sign In\u0026rdquo; to complete the setup. Your iOS device will now sync your Nextcloud contacts with the built-in Contacts app. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/:2:3","tags":null,"title":"How to Host Your Own Apple Contacts Account on Mac and iOS using Nextcloud","uri":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/#3-set-up-on-ios"},{"categories":["Development"],"collections":null,"content":"4. 
Verify Sync After completing the setup on both your Mac and iOS device, wait for a few moments to allow the initial sync to take place. Open your Contacts app on both devices and check if your Nextcloud contacts are visible and up-to-date. Now, you have successfully set up your own Apple Contacts account hosted on your Nextcloud server. Your contacts will be stored securely on your server, and you can access and manage them on your Mac and iOS devices. Please note that you\u0026rsquo;ll need to ensure your Nextcloud server is always accessible and properly maintained for continuous synchronization between your devices. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/:2:4","tags":null,"title":"How to Host Your Own Apple Contacts Account on Mac and iOS using Nextcloud","uri":"/posts/development/how-to-host-your-own-apple-contacts-account-on-mac-and-ios-using-nextcloud/#4-verify-sync"},{"categories":["Development"],"collections":null,"content":"If you want to take control of your Apple Reminders and host them on your own server, you can do so by using Nextcloud, a popular open-source cloud platform. This guide will walk you through the process of setting up your own Apple Reminders service on your Mac and iOS devices. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:0:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you get started, make sure you have the following: A Mac running macOS. An iOS device (iPhone or iPad). Nextcloud installed and set up on your server. You can install Nextcloud on a compatible web server or use a Nextcloud hosting service. 
","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:1:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Install Nextcloud If you haven\u0026rsquo;t already, install Nextcloud on your server by following the official installation instructions on the Nextcloud website. This typically involves setting up a web server (e.g., Apache or Nginx), a database (e.g., MySQL or SQLite), and PHP on your server. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:2:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#step-1-install-nextcloud"},{"categories":["Development"],"collections":null,"content":"Step 2: Enable the Task App on Nextcloud Once Nextcloud is installed and configured, you\u0026rsquo;ll need to enable the Task app within Nextcloud. This will allow you to manage tasks and reminders. Log in to your Nextcloud instance using a web browser. Click on the \u0026ldquo;Apps\u0026rdquo; menu on the upper-right corner of the screen. Search for the \u0026ldquo;Tasks\u0026rdquo; app and click \u0026ldquo;Enable.\u0026rdquo; ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:3:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#step-2-enable-the-task-app-on-nextcloud"},{"categories":["Development"],"collections":null,"content":"Step 3: Enable the Calendar App on Nextcloud To synchronize reminders with your Apple devices, you\u0026rsquo;ll also need to enable the Calendar app in Nextcloud. 
From the same \u0026ldquo;Apps\u0026rdquo; menu in Nextcloud, search for the \u0026ldquo;Calendar\u0026rdquo; app and click \u0026ldquo;Enable.\u0026rdquo; ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:4:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#step-3-enable-the-calendar-app-on-nextcloud"},{"categories":["Development"],"collections":null,"content":"Step 4: Set Up WebDAV WebDAV (Web Distributed Authoring and Versioning) is a protocol that allows you to access and edit files on a web server. We will use WebDAV to sync your Nextcloud tasks and reminders with your Mac and iOS devices. In your Nextcloud instance, go to the \u0026ldquo;Calendar\u0026rdquo; app. In the navigation panel on the left, you should see a list of calendars. Click the three dots (⋮) next to your desired calendar and select \u0026ldquo;Settings.\u0026rdquo; In the \u0026ldquo;Calendar Settings\u0026rdquo; page, look for the \u0026ldquo;WebDAV\u0026rdquo; link. Copy the WebDAV URL to your clipboard. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:5:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#step-4-set-up-webdav"},{"categories":["Development"],"collections":null,"content":"Step 5: Set Up on Mac Now, let\u0026rsquo;s configure your Mac to sync with your Nextcloud calendar and tasks. Open the \u0026ldquo;Calendar\u0026rdquo; app on your Mac. Click \u0026ldquo;Calendar\u0026rdquo; in the menu bar and select \u0026ldquo;Accounts\u0026hellip;\u0026rdquo; Click the \u0026ldquo;+\u0026rdquo; button to add a new account. 
Choose \u0026ldquo;Other CalDAV Account\u0026rdquo; and click \u0026ldquo;Continue.\u0026rdquo; Enter your Nextcloud username and password, and paste the WebDAV URL you copied earlier in the \u0026ldquo;Server Address\u0026rdquo; field. Click \u0026ldquo;Create.\u0026rdquo; Your Nextcloud calendar and tasks should now be available in the Calendar app on your Mac. ","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:6:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#step-5-set-up-on-mac"},{"categories":["Development"],"collections":null,"content":"Step 6: Set Up on iOS Finally, let\u0026rsquo;s configure your iOS device to sync with your Nextcloud calendar and tasks. Open the \u0026ldquo;Settings\u0026rdquo; app on your iOS device. Scroll down and select \u0026ldquo;Calendar.\u0026rdquo; Under \u0026ldquo;Accounts,\u0026rdquo; tap \u0026ldquo;Add Account.\u0026rdquo; Choose \u0026ldquo;Other\u0026rdquo; at the bottom. Select \u0026ldquo;Add CalDAV Account.\u0026rdquo; Fill in your Nextcloud username, password, and the WebDAV URL you copied earlier. Tap \u0026ldquo;Next.\u0026rdquo; Your Nextcloud calendar and tasks should now be accessible through the default Calendar app on your iOS device. That\u0026rsquo;s it! You\u0026rsquo;ve successfully set up your own Apple Reminders service hosted on Nextcloud and synchronized it with your Mac and iOS devices. You can now create, edit, and manage your reminders securely on your own server. 
","date":"04-06-2021","objectID":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/:7:0","tags":null,"title":"How to Host Your Own Apple Reminders on Mac and iOS","uri":"/posts/development/how-to-host-your-own-apple-reminders-on-mac-and-ios/#step-6-set-up-on-ios"},{"categories":["Development"],"collections":null,"content":"Logging script output is a common practice in shell scripting to keep track of what a script is doing, especially during debugging or troubleshooting. In this guide, we\u0026rsquo;ll show you how to log all script output inside the script itself using the tee command and redirection. Here\u0026rsquo;s a simple example of a script that echoes some text and logs its output: #!/bin/bash # Define the path to the debug log file DEBUGLOG=\u0026#34;/path/to/debug.log\u0026#34; # Redirect all output (stdout and stderr) to the debug log using tee { echo \u0026#34;test\u0026#34; echo \u0026#34;test 2\u0026#34; echo \u0026#34;test 3\u0026#34; } 2\u0026gt;\u0026amp;1 | tee -a \u0026#34;$DEBUGLOG\u0026#34; # Additional script commands go here In this script: We specify the path to the debug log file using the DEBUGLOG variable. Make sure to replace /path/to/debug.log with the actual path and filename where you want to store the log. Inside the script block enclosed by {} braces, we have some sample commands (echo statements) that generate output. You can replace these with your own script logic. The 2\u0026gt;\u0026amp;1 redirects the standard error (file descriptor 2) to the same location as standard output (file descriptor 1). This ensures that both stdout and stderr are captured and logged. The tee command is used to duplicate the output stream. It reads from the previous command (the block of echo statements) and simultaneously writes to both the terminal (stdout) and the specified log file. The -a option is used with tee to append to the log file if it already exists. 
After logging the output, you can continue with the rest of your script logic. To use this script: Save the script to a file (e.g., my_script.sh). Make the script executable by running chmod +x my_script.sh. Run the script using ./my_script.sh. The script\u0026rsquo;s output will be displayed on the terminal, and a copy of the output will be appended to the specified log file (/path/to/debug.log). Now, whenever you execute the script, all of its output will be logged in the specified log file for later analysis or debugging. ","date":"03-06-2021","objectID":"/posts/development/how-to-log-all-script-output-inside-the-script-itself/:0:0","tags":null,"title":"How to Log All Script Output Inside the Script Itself","uri":"/posts/development/how-to-log-all-script-output-inside-the-script-itself/#"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the process of sending an email using Gmail as the SMTP server and customizing the \u0026ldquo;From\u0026rdquo; address using the GNU Mail command-line utility on an Ubuntu-based system. This can be useful if you want to send emails from a specific address other than your Gmail account. ","date":"03-06-2021","objectID":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/:0:0","tags":null,"title":"Sending Email with Gmail and Custom From Address Using GNU Mail","uri":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following prerequisites: An Ubuntu-based Linux system. A Gmail account from which you want to send emails. Internet connectivity. 
","date":"03-06-2021","objectID":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/:1:0","tags":null,"title":"Sending Email with Gmail and Custom From Address Using GNU Mail","uri":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Install Required Packages Open a terminal and install the necessary packages: sudo apt install ssmtp mailutils ","date":"03-06-2021","objectID":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/:2:0","tags":null,"title":"Sending Email with Gmail and Custom From Address Using GNU Mail","uri":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/#step-1-install-required-packages"},{"categories":["Development"],"collections":null,"content":"Step 2: Configure ssmtp Edit the ssmtp configuration file to use your Gmail account for sending emails. You can use the text editor of your choice. In this example, we\u0026rsquo;ll use vim: sudo vim /etc/ssmtp/ssmtp.conf Uncomment the following line to enable overriding the \u0026ldquo;From\u0026rdquo; address: FromLineOverride=YES Add the following lines to configure Gmail as the SMTP server: AuthUser=YOUREMAIL@gmail.com AuthPass=YOURPASSWORD FromLineOverride=YES mailhub=smtp.gmail.com:587 UseSTARTTLS=YES Make sure to replace YOUREMAIL@gmail.com and YOURPASSWORD with your actual Gmail email address and password. Save and exit the configuration file. 
","date":"03-06-2021","objectID":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/:3:0","tags":null,"title":"Sending Email with Gmail and Custom From Address Using GNU Mail","uri":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/#step-2-configure-ssmtp"},{"categories":["Development"],"collections":null,"content":"Step 3: Configure .mailrc Install the bsd-mailx package to enable GNU Mail to read the .mailrc configuration file: sudo apt install bsd-mailx Create or edit the .mailrc file in your home directory to set the default \u0026ldquo;From\u0026rdquo; address for outgoing emails. In this example, we\u0026rsquo;re setting it to \u0026ldquo;No Reply no.reply@example.com\u0026rdquo;: echo \u0026#39;set from=\u0026#34;No Reply \u0026lt;no.reply@example.com\u0026gt;\u0026#34;\u0026#39; \u0026gt;\u0026gt; ~/.mailrc ","date":"03-06-2021","objectID":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/:4:0","tags":null,"title":"Sending Email with Gmail and Custom From Address Using GNU Mail","uri":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/#step-3-configure-mailrc"},{"categories":["Development"],"collections":null,"content":"Step 4: Test Sending an Email You can now test sending an email using the mail command. Here are a couple of examples: echo \u0026#34;This is a test\u0026#34; | mail -s \u0026#34;Test\u0026#34; admin@example.com In this example, replace admin@example.com with the recipient\u0026rsquo;s email address. echo \u0026#34;Tester\u0026#34; | mail -r \u0026#34;No Reply \u0026lt;no.reply@example.com\u0026gt;\u0026#34; -s \u0026#34;Test\u0026#34; admin@example.com This command sends an email with a custom \u0026ldquo;From\u0026rdquo; address specified directly on the command line with the -r option. That\u0026rsquo;s it! 
You\u0026rsquo;ve successfully configured and tested sending emails using Gmail as the SMTP server and customizing the \u0026ldquo;From\u0026rdquo; address using GNU Mail on your Ubuntu-based system. You can now use these steps to send emails from your desired address using the mail command. ","date":"03-06-2021","objectID":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/:5:0","tags":null,"title":"Sending Email with Gmail and Custom From Address Using GNU Mail","uri":"/posts/development/sending-email-with-gmail-and-custom-from-address-using-gnu-mail/#step-4-test-sending-an-email"},{"categories":["Development"],"collections":null,"content":"Sometimes, when working in a Unix-like operating system (such as Linux or macOS), you may need to execute a command from a different directory than your current one. This can be done using the cd command along with the desired command you want to run. In this article, we\u0026rsquo;ll explore how to run a command in a different folder. ","date":"01-06-2021","objectID":"/posts/development/running-a-command-in-a-different-folder/:0:0","tags":null,"title":"Running a Command in a Different Folder","uri":"/posts/development/running-a-command-in-a-different-folder/#"},{"categories":["Development"],"collections":null,"content":"Using the cd Command The cd command is commonly used to change the current working directory in a Unix-like shell. To run a command in a different folder, you can use a combination of cd and the command you want to execute. Here\u0026rsquo;s the basic syntax: (cd /path/to/directory \u0026amp;\u0026amp; your-command) (cd /path/to/directory) changes the current directory to the specified path. \u0026amp;\u0026amp; is used to execute the command on the right side only if the cd command succeeds. 
your-command represents the command you want to run in the new directory. ","date":"01-06-2021","objectID":"/posts/development/running-a-command-in-a-different-folder/:1:0","tags":null,"title":"Running a Command in a Different Folder","uri":"/posts/development/running-a-command-in-a-different-folder/#using-the-cd-command"},{"categories":["Development"],"collections":null,"content":"Example 1: Running a Command in a Different Folder Let\u0026rsquo;s say you want to run a Python script named my_script.py located in the /home/user/scripts directory. You can do this using the following command: (cd /home/user/scripts \u0026amp;\u0026amp; python my_script.py) This command will change the current directory to /home/user/scripts and then execute python my_script.py. ","date":"01-06-2021","objectID":"/posts/development/running-a-command-in-a-different-folder/:1:1","tags":null,"title":"Running a Command in a Different Folder","uri":"/posts/development/running-a-command-in-a-different-folder/#example-1-running-a-command-in-a-different-folder"},{"categories":["Development"],"collections":null,"content":"Example 2: Running a Special Command You may also need to run a special command that is not in your system\u0026rsquo;s PATH. In such cases, you can specify the full path to the command. Here\u0026rsquo;s an example: (cd /path/to/your/special/place \u0026amp;\u0026amp; /bin/your-special-command ARGS) This command changes the directory to /path/to/your/special/place and then runs /bin/your-special-command with the specified arguments (ARGS). 
","date":"01-06-2021","objectID":"/posts/development/running-a-command-in-a-different-folder/:1:2","tags":null,"title":"Running a Command in a Different Folder","uri":"/posts/development/running-a-command-in-a-different-folder/#example-2-running-a-special-command"},{"categories":["Development"],"collections":null,"content":"Conclusion Running a command in a different folder can be accomplished using the cd command in combination with the command you want to execute. This technique is useful when you need to work with files or scripts in specific directories without having to navigate to those directories manually. Remember to replace /path/to/directory and your-command with the actual path and command you intend to use. ","date":"01-06-2021","objectID":"/posts/development/running-a-command-in-a-different-folder/:2:0","tags":null,"title":"Running a Command in a Different Folder","uri":"/posts/development/running-a-command-in-a-different-folder/#conclusion"},{"categories":["Development"],"collections":null,"content":"DNSCrypt is a protocol that encrypts DNS traffic between your computer and the DNS server, enhancing your online privacy and security. The bitbar-dnscrypt-proxy-switcher is a convenient way to control DNSCrypt using BitBar, a Mac menu bar application that allows you to add various plugins for quick access to information and functionality. In this guide, we\u0026rsquo;ll walk you through the steps to set up DNSCrypt using bitbar-dnscrypt-proxy-switcher. 
","date":"21-05-2021","objectID":"/posts/development/how-to-set-up-dnscrypt-using-bitbar-dnscrypt-proxy-switcher/:0:0","tags":null,"title":"How to Set Up DNSCrypt Using bitbar-dnscrypt-proxy-switcher","uri":"/posts/development/how-to-set-up-dnscrypt-using-bitbar-dnscrypt-proxy-switcher/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following prerequisites installed on your Mac: Homebrew: If you don\u0026rsquo;t have Homebrew installed, you can install it by following the instructions at brew.sh. dnscrypt-proxy: Install dnscrypt-proxy using Homebrew: brew install dnscrypt-proxy BitBar: Install BitBar using Homebrew: brew install bitbar ","date":"21-05-2021","objectID":"/posts/development/how-to-set-up-dnscrypt-using-bitbar-dnscrypt-proxy-switcher/:0:1","tags":null,"title":"How to Set Up DNSCrypt Using bitbar-dnscrypt-proxy-switcher","uri":"/posts/development/how-to-set-up-dnscrypt-using-bitbar-dnscrypt-proxy-switcher/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Setting Up DNSCrypt with bitbar-dnscrypt-proxy-switcher Now, let\u0026rsquo;s set up DNSCrypt with bitbar-dnscrypt-proxy-switcher: Open BitBar: You can launch BitBar from your Applications folder or by searching for it in Spotlight. Create a BitBar Plugin Folder: If you don\u0026rsquo;t already have one, create a folder named BitBar in your home directory: mkdir ~/BitBar Set BitBar Plugin Folder: In BitBar, go to Preferences (click the BitBar icon in the menu bar and select Preferences), and set the Plugin Folder to the folder you created in the previous step (~/BitBar). 
Download bitbar-dnscrypt-proxy-switcher: In your terminal, navigate to the BitBar Plugin folder: cd ~/BitBar Download the bitbar-dnscrypt-proxy-switcher script using curl: curl -O https://raw.githubusercontent.com/jedisct1/bitbar-dnscrypt-proxy-switcher/master/dnscrypt-proxy-switcher.10s.sh Set File Permissions: Make the script executable: chmod +x dnscrypt-proxy-switcher.10s.sh Refresh BitBar: In BitBar, click the BitBar icon in the menu bar, and select Refresh All to load the dnscrypt-proxy-switcher plugin. Configure DNSCrypt: Click the BitBar icon again, and you should now see the dnscrypt-proxy-switcher plugin in the menu bar. Click on it to configure and enable DNSCrypt. You can select your preferred DNSCrypt provider and configure other options as needed. That\u0026rsquo;s it! You\u0026rsquo;ve successfully set up DNSCrypt using bitbar-dnscrypt-proxy-switcher. Your DNS queries will now be encrypted and more secure. You can easily switch between DNSCrypt providers and monitor the status of DNSCrypt directly from your Mac\u0026rsquo;s menu bar. Note: Keep in mind that DNSCrypt providers may change over time, so you can revisit the configuration to update your preferred provider if needed. ","date":"21-05-2021","objectID":"/posts/development/how-to-set-up-dnscrypt-using-bitbar-dnscrypt-proxy-switcher/:0:2","tags":null,"title":"How to Set Up DNSCrypt Using bitbar-dnscrypt-proxy-switcher","uri":"/posts/development/how-to-set-up-dnscrypt-using-bitbar-dnscrypt-proxy-switcher/#setting-up-dnscrypt-with-bitbar-dnscrypt-proxy-switcher"},{"categories":["Development"],"collections":null,"content":"If you want to disable Turbo Boost on your Mac without having to enter the administrator password every time, you can use Turbo Boost Switcher and set it up to run as root or as a service. 
Here are the steps to achieve this: ","date":"12-05-2021","objectID":"/posts/development/how-to-disable-turbo-boost-without-asking-administrator-password-on-turbo-boost-switcher/:0:0","tags":null,"title":"How To Disable Turbo Boost Without Asking Administrator Password On Turbo Boost Switcher","uri":"/posts/development/how-to-disable-turbo-boost-without-asking-administrator-password-on-turbo-boost-switcher/#"},{"categories":["Development"],"collections":null,"content":"Run Turbo Boost Switcher As Root Open the Terminal application. You can find it in the Applications folder under Utilities, or you can use Spotlight by pressing Cmd + Space and typing \u0026ldquo;Terminal.\u0026rdquo; Run Turbo Boost Switcher as root by entering the following command: sudo -b /Applications/Turbo\\ Boost\\ Switcher.app/Contents/MacOS/Turbo\\ Boost\\ Switcher This command runs Turbo Boost Switcher with administrator privileges, which means you won\u0026rsquo;t have to enter the password each time you want to disable Turbo Boost. You should now see the Turbo Boost Switcher icon in the Menu Bar. You can click on it to toggle Turbo Boost on or off. ","date":"12-05-2021","objectID":"/posts/development/how-to-disable-turbo-boost-without-asking-administrator-password-on-turbo-boost-switcher/:1:0","tags":null,"title":"How To Disable Turbo Boost Without Asking Administrator Password On Turbo Boost Switcher","uri":"/posts/development/how-to-disable-turbo-boost-without-asking-administrator-password-on-turbo-boost-switcher/#run-turbo-boost-switcher-as-root"},{"categories":["Development"],"collections":null,"content":"Run Turbo Boost Switcher As a Service To run Turbo Boost Switcher as a service, you\u0026rsquo;ll need to create a service file. Open Terminal and create the file by entering the following command: sudo vim /Library/LaunchDaemons/com.turbo-boost-switcher.plist This will open a text editor in the Terminal. Add the following XML code to the file and save it. 
You can paste it into the text editor: \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE plist PUBLIC \u0026#34;-//Apple//DTD PLIST 1.0//EN\u0026#34; \u0026#34;http://www.apple.com/DTDs/PropertyList-1.0.dtd\u0026#34;\u0026gt; \u0026lt;plist version=\u0026#34;1.0\u0026#34;\u0026gt; \u0026lt;dict\u0026gt; \u0026lt;key\u0026gt;Label\u0026lt;/key\u0026gt; \u0026lt;string\u0026gt;com.turbo-boost-switcher\u0026lt;/string\u0026gt; \u0026lt;key\u0026gt;ProgramArguments\u0026lt;/key\u0026gt; \u0026lt;array\u0026gt; \u0026lt;string\u0026gt;/Applications/Turbo Boost Switcher.app/Contents/MacOS/Turbo Boost Switcher\u0026lt;/string\u0026gt; \u0026lt;/array\u0026gt; \u0026lt;key\u0026gt;RunAtLoad\u0026lt;/key\u0026gt; \u0026lt;true/\u0026gt; \u0026lt;key\u0026gt;KeepAlive\u0026lt;/key\u0026gt; \u0026lt;true/\u0026gt; \u0026lt;/dict\u0026gt; \u0026lt;/plist\u0026gt; This code defines a launch daemon for Turbo Boost Switcher, specifying that it should run at system startup and be kept alive. After saving the file, you need to unload and then load the service. Run these commands in Terminal: sudo launchctl unload /Library/LaunchDaemons/com.turbo-boost-switcher.plist sudo launchctl load /Library/LaunchDaemons/com.turbo-boost-switcher.plist This unloads and then loads the Turbo Boost Switcher service, making it available in the Menu Bar. You should now see the Turbo Boost Switcher icon in the Menu Bar. You can click on it to toggle Turbo Boost on or off without being prompted for an administrator password. By following these steps, you can either run Turbo Boost Switcher as root or as a service, allowing you to disable Turbo Boost without needing to enter the administrator password each time you use it. 
","date":"12-05-2021","objectID":"/posts/development/how-to-disable-turbo-boost-without-asking-administrator-password-on-turbo-boost-switcher/:2:0","tags":null,"title":"How To Disable Turbo Boost Without Asking Administrator Password On Turbo Boost Switcher","uri":"/posts/development/how-to-disable-turbo-boost-without-asking-administrator-password-on-turbo-boost-switcher/#run-turbo-boost-switcher-as-a-service"},{"categories":["Development"],"collections":null,"content":"Google Meet is a popular video conferencing tool, but one common issue users encounter is auto volume adjustment, which can be quite annoying during meetings. Thankfully, there\u0026rsquo;s a Chrome extension called \u0026ldquo;Disable Automatic Gain Control\u0026rdquo; that can help you stop auto volume changes on Google Meet. In this guide, we\u0026rsquo;ll walk you through the steps to install and use this extension. ","date":"30-04-2021","objectID":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/:0:0","tags":null,"title":"How to Prevent Auto Volume Changes on Google Meet in Chrome","uri":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Install the Chrome Extension To get started, you\u0026rsquo;ll need to install the \u0026ldquo;Disable Automatic Gain Control\u0026rdquo; Chrome extension. Here\u0026rsquo;s how to do it: Open your Chrome web browser. Go to the Chrome Web Store by clicking on the following link: Disable Automatic Gain Control Extension. Click the \u0026ldquo;Add to Chrome\u0026rdquo; button in the upper right-hand corner of the extension\u0026rsquo;s page. A confirmation pop-up will appear. Click \u0026ldquo;Add Extension\u0026rdquo; to install the extension. 
","date":"30-04-2021","objectID":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/:0:1","tags":null,"title":"How to Prevent Auto Volume Changes on Google Meet in Chrome","uri":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/#step-1-install-the-chrome-extension"},{"categories":["Development"],"collections":null,"content":"Step 2: Configure the Extension Once the extension is installed, you can configure it to prevent auto volume changes on Google Meet. Here\u0026rsquo;s how: Click on the Chrome menu icon (three dots) in the upper right-hand corner of your browser. Select \u0026ldquo;Extensions\u0026rdquo; from the dropdown menu. Scroll down to find the \u0026ldquo;Disable Automatic Gain Control\u0026rdquo; extension and click on its \u0026ldquo;Options\u0026rdquo; link. In the extension\u0026rsquo;s settings, you\u0026rsquo;ll see options to enable or disable automatic gain control for both the microphone and speaker. You can toggle these options according to your preferences. Make sure to click the \u0026ldquo;Save\u0026rdquo; button after making your changes. ","date":"30-04-2021","objectID":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/:0:2","tags":null,"title":"How to Prevent Auto Volume Changes on Google Meet in Chrome","uri":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/#step-2-configure-the-extension"},{"categories":["Development"],"collections":null,"content":"Step 3: Join a Google Meet Meeting Now that you\u0026rsquo;ve configured the extension, you can join a Google Meet meeting without worrying about auto volume changes. Here\u0026rsquo;s how: Open Google Meet in your Chrome browser. Join a meeting or start one, as you normally would. The \u0026ldquo;Disable Automatic Gain Control\u0026rdquo; extension will work in the background to prevent any automatic volume adjustments. 
","date":"30-04-2021","objectID":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/:0:3","tags":null,"title":"How to Prevent Auto Volume Changes on Google Meet in Chrome","uri":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/#step-3-join-a-google-meet-meeting"},{"categories":["Development"],"collections":null,"content":"Step 4: Test and Adjust During the meeting, it\u0026rsquo;s a good idea to test your audio to ensure that the extension is working as expected. If you encounter any issues, you can go back to the extension\u0026rsquo;s settings and make adjustments. That\u0026rsquo;s it! You\u0026rsquo;ve successfully installed and configured the \u0026ldquo;Disable Automatic Gain Control\u0026rdquo; Chrome extension to stop auto volume changes on Google Meet inside the Chrome browser. Enjoy more stable audio during your virtual meetings. Please note that browser extensions can sometimes affect the functionality of websites, so make sure to test the extension in a controlled environment before important meetings to ensure it works as intended. ","date":"30-04-2021","objectID":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/:0:4","tags":null,"title":"How to Prevent Auto Volume Changes on Google Meet in Chrome","uri":"/posts/development/how-to-prevent-auto-volume-changes-on-google-meet-in-chrome/#step-4-test-and-adjust"},{"categories":["Development"],"collections":null,"content":"Exiftool is a powerful command-line tool for manipulating and extracting metadata from a wide range of file types, including images and videos. In this guide, we will learn how to copy Exif metadata from one file to another using Exiftool. 
","date":"29-04-2021","objectID":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/:0:0","tags":null,"title":"How To Copy Exif Meta Data From One File To Another Using Exiftool","uri":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have Exiftool installed on your system. You can download it from the Exiftool website. ","date":"29-04-2021","objectID":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/:0:1","tags":null,"title":"How To Copy Exif Meta Data From One File To Another Using Exiftool","uri":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Copying Exif Metadata from a Single File to Another To copy Exif metadata from one file to another, you can use the following command: exiftool -tagsfromfile source_file target_file Replace source_file with the name of the file from which you want to copy the metadata and target_file with the name of the file to which you want to copy the metadata. Here\u0026rsquo;s an example: exiftool -tagsfromfile original.mov -FileModifyDate rendered.mov This command will copy the metadata from original.mov and apply it to rendered.mov. You can customize this command to copy specific tags or all available tags depending on your needs. 
","date":"29-04-2021","objectID":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/:0:2","tags":null,"title":"How To Copy Exif Meta Data From One File To Another Using Exiftool","uri":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/#copying-exif-metadata-from-a-single-file-to-another"},{"categories":["Development"],"collections":null,"content":"Copying Exif Metadata from Multiple Files with the Same Name If you have multiple files with the same name but in different directories and you want to copy Exif metadata from each source file to its corresponding target file, you can use a wildcard in the target file path. Here\u0026rsquo;s the command: exiftool -tagsfromfile original/%f.%e -FileModifyDate rendered/ In this command: %f represents the filename without extension. %e represents the file extension. Exiftool will copy the metadata from each file in the original directory to a corresponding file in the rendered directory with the same name. ","date":"29-04-2021","objectID":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/:0:3","tags":null,"title":"How To Copy Exif Meta Data From One File To Another Using Exiftool","uri":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/#copying-exif-metadata-from-multiple-files-with-the-same-name"},{"categories":["Development"],"collections":null,"content":"Conclusion Exiftool is a versatile tool for working with metadata in various file formats. With these commands, you can easily copy Exif metadata from one file to another or from multiple files with the same name. This can be particularly useful for maintaining metadata consistency when editing or rendering files. 
","date":"29-04-2021","objectID":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/:0:4","tags":null,"title":"How To Copy Exif Meta Data From One File To Another Using Exiftool","uri":"/posts/development/how-to-copy-exif-meta-data-from-one-file-to-another-using-exiftool/#conclusion"},{"categories":["Development"],"collections":null,"content":"Subler is a powerful and user-friendly application for macOS that allows you to add subtitle files directly to your movie files. This can be incredibly useful if you have a movie without built-in subtitles or if you want to replace the existing subtitles with a different language or better-quality subtitles. In this guide, we will walk you through the process of merging a subtitle file with a movie using Subler. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:0:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you get started, make sure you have the following: Mac computer: Subler is compatible with macOS. Movie File: The video file to which you want to add subtitles. Subtitle File: The subtitle file you want to merge with the movie. Common subtitle formats include .srt, .sub, .ass, and .ssa. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:1:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Download and Install Subler Visit the Subler website and download the latest version of Subler. 
Open the downloaded .dmg file and drag the Subler app into your Applications folder. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:2:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-1-download-and-install-subler"},{"categories":["Development"],"collections":null,"content":"Step 2: Launch Subler Open Subler by going to your Applications folder and double-clicking the Subler icon. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:3:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-2-launch-subler"},{"categories":["Development"],"collections":null,"content":"Step 3: Import Your Movie File Click on \u0026ldquo;File\u0026rdquo; in the top menu and select \u0026ldquo;Open\u0026rdquo; or use the keyboard shortcut Cmd + O to open the movie file to which you want to add subtitles. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:4:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-3-import-your-movie-file"},{"categories":["Development"],"collections":null,"content":"Step 4: Import Your Subtitle File With the movie file open in Subler, click on \u0026ldquo;File\u0026rdquo; again and select \u0026ldquo;Import\u0026rdquo; or use the keyboard shortcut Cmd + I. Navigate to the location of your subtitle file and select it. Subler will automatically recognize the subtitle format. 
Click the \u0026ldquo;Import\u0026rdquo; button to add the subtitle file to your movie. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:5:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-4-import-your-subtitle-file"},{"categories":["Development"],"collections":null,"content":"Step 5: Edit Subtitle Settings (Optional) If you need to make adjustments to the subtitle settings, such as font size, color, or positioning, select the subtitle track in the Subler interface. Click the \u0026ldquo;Metadata\u0026rdquo; tab and make your desired changes. Click the \u0026ldquo;Save\u0026rdquo; button to apply the changes. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:6:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-5-edit-subtitle-settings-optional"},{"categories":["Development"],"collections":null,"content":"Step 6: Save the Merged Movie After importing the subtitle file and making any necessary edits, click on \u0026ldquo;File\u0026rdquo; and select \u0026ldquo;Save\u0026rdquo; or use the keyboard shortcut Cmd + S. Choose a location to save the merged movie file and give it a new name if desired. Click the \u0026ldquo;Save\u0026rdquo; button. 
","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:7:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-6-save-the-merged-movie"},{"categories":["Development"],"collections":null,"content":"Step 7: Wait for the Merging Process Subler will begin merging the subtitle file with the movie. The time this takes depends on the length of your movie and the processing power of your Mac. Once the process is complete, Subler will display a message indicating that the file has been saved successfully. ","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:8:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-7-wait-for-the-merging-process"},{"categories":["Development"],"collections":null,"content":"Step 8: Play Your Merged Movie You can now close Subler and locate your merged movie file at the location you specified. Open the merged movie with your preferred media player, and the subtitles should be displayed as part of the video. Congratulations! You\u0026rsquo;ve successfully merged a subtitle file directly with a movie on your Mac using Subler. You can enjoy your movie with the added subtitles, enhancing your viewing experience. 
","date":"29-04-2021","objectID":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/:9:0","tags":null,"title":"How to Merge a Subtitle File Directly with a Movie on Mac using Subler","uri":"/posts/development/how-to-merge-a-subtitle-file-directly-with-a-movie-on-mac-using-subler/#step-8-play-your-merged-movie"},{"categories":["Development"],"collections":null,"content":"When trying to run Puppeteer on Ubuntu, you may encounter the following error: error while loading shared libraries: libgbm.so.1: cannot open shared object file: No such file or directoryThis error occurs because the required library libgbm.so.1 is missing on your system. To resolve this issue, you need to install the libgbm development package. Here\u0026rsquo;s a step-by-step guide on how to fix it: ","date":"12-04-2021","objectID":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/:0:0","tags":["linux"],"title":"Fixing Puppeteer libgbm.so.1 Error on Ubuntu","uri":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Update Package Lists Before installing any packages, it\u0026rsquo;s always a good practice to update the package lists to ensure you are installing the latest versions available. sudo apt update ","date":"12-04-2021","objectID":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/:1:0","tags":["linux"],"title":"Fixing Puppeteer libgbm.so.1 Error on Ubuntu","uri":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/#step-1-update-package-lists"},{"categories":["Development"],"collections":null,"content":"Step 2: Install the libgbm Development Package To install the libgbm development package, use the apt-get command as follows: sudo apt-get install -y libgbm-dev The -y flag is used to automatically confirm the installation without user intervention. 
","date":"12-04-2021","objectID":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/:2:0","tags":["linux"],"title":"Fixing Puppeteer libgbm.so.1 Error on Ubuntu","uri":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/#step-2-install-the-libgbm-development-package"},{"categories":["Development"],"collections":null,"content":"Step 3: Verify the Installation After the installation is complete, you can verify if the library has been installed successfully. ldconfig -p | grep libgbm This command will show you if the libgbm library is present in the system library cache. ","date":"12-04-2021","objectID":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/:3:0","tags":["linux"],"title":"Fixing Puppeteer libgbm.so.1 Error on Ubuntu","uri":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/#step-3-verify-the-installation"},{"categories":["Development"],"collections":null,"content":"Step 4: Test Puppeteer Now that you have installed the necessary library, try running your Puppeteer script again. The error should no longer occur, and Puppeteer should work as expected. If you are using Puppeteer with Node.js, make sure you have it installed via npm: npm install puppeteer ","date":"12-04-2021","objectID":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/:4:0","tags":["linux"],"title":"Fixing Puppeteer libgbm.so.1 Error on Ubuntu","uri":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/#step-4-test-puppeteer"},{"categories":["Development"],"collections":null,"content":"Conclusion By following these steps, you should have fixed the \u0026ldquo;libgbm.so.1\u0026rdquo; error and Puppeteer should now run without any issues on your Ubuntu system. 
","date":"12-04-2021","objectID":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/:5:0","tags":["linux"],"title":"Fixing Puppeteer libgbm.so.1 Error on Ubuntu","uri":"/posts/development/fixing-puppeteer-libgbm-so-1-error-on-ubuntu/#conclusion"},{"categories":["Development"],"collections":null,"content":"Rsync is a powerful command-line tool for synchronizing files and directories between two locations. However, when syncing macOS files to another location, you may encounter pesky .DS_Store files, which are hidden metadata files created by the Finder. To exclude these files from your rsync operation, you can use the --exclude flag. Below is an example of how to sync files while ignoring .DS_Store files: ### Command Explanation Let\u0026#39;s break down the command step by step: - `rsync`: This is the command itself. - `--force`: This option tells rsync to overwrite files without asking for confirmation. - `-ahviP`: These are a combination of options: - `-a`: Archive mode, which preserves various attributes of files and directories. - `-h`: Output numbers in a human-readable format (e.g., 1K, 2M). - `-v`: Verbose mode, which displays detailed information about the sync process. - `-i`: Itemize changes, displaying a summary of the changes made. - `-P`: Equivalent to `--partial --progress`, it keeps partially transferred files and shows progress during transfer. - `--exclude \u0026#39;.DS_Store\u0026#39;`: This flag tells rsync to exclude any file or directory named `.DS_Store`. - `--delete`: This option deletes files in the destination that are not present in the source. Be cautious with this option as it can result in data loss if not used carefully. - `/Users/dimas/Vaults/Photos/`: This is the source directory you want to sync. - `/Volumes/example/Photos/`: This is the destination directory where you want to sync the files. ### Usage Notes 1. 
Source and Destination Paths: Make sure to replace `/Users/dimas/Vaults/Photos/` and `/Volumes/example/Photos/` with your actual source and destination paths. 2. Be Careful with --delete: The `--delete` option can remove files in the destination that are not in the source. Use it with caution to avoid unintentional data loss. 3. Backup: Before running any rsync command with the `--delete` option, ensure you have a backup of your data in case something goes wrong. 4. Hidden Files: `.DS_Store` files are just one example of hidden files on macOS. You can use the same `--exclude` flag to exclude other hidden files or directories if needed. With this command, you can synchronize your files and directories while excluding `.DS_Store` files, keeping the destination directory clean. Feel free to adapt the command to your needs, such as excluding other hidden files or adjusting the sync options. ","date":"10-04-2021","objectID":"/posts/development/how-to-sync-with-rsync-but-ignore-dsstore-files/:0:0","tags":null,"title":"How to Sync with Rsync but Ignore .DS_Store Files","uri":"/posts/development/how-to-sync-with-rsync-but-ignore-dsstore-files/#"},{"categories":["Development"],"collections":null,"content":"This guide covers installing Python 3 and Java 11 on a Mac using Homebrew and setting them as the defaults on your PATH. 
Here\u0026rsquo;s a step-by-step guide: ","date":"01-04-2021","objectID":"/posts/development/mac-install-default-path-languange/:0:0","tags":null,"title":"Mac Install Default Path Languange","uri":"/posts/development/mac-install-default-path-languange/#"},{"categories":["Development"],"collections":null,"content":"Installing Python 3 Install Python 3 using Homebrew: brew install python3 Create a symbolic link to make python point to python3: ln -s -f /usr/local/bin/python3 /usr/local/bin/python This will ensure that when you run python, it refers to Python 3. ","date":"01-04-2021","objectID":"/posts/development/mac-install-default-path-languange/:1:0","tags":null,"title":"Mac Install Default Path Languange","uri":"/posts/development/mac-install-default-path-languange/#installing-python-3"},{"categories":["Development"],"collections":null,"content":"Installing Java 11 Install AdoptOpenJDK 11 using Homebrew: brew install adoptopenjdk11 Add the following line to your ~/.bash_profile to set the JAVA_11_HOME environment variable: echo \u0026#39;export JAVA_11_HOME=$(/usr/libexec/java_home -v11)\u0026#39; \u0026gt;\u0026gt; ~/.bash_profile Create an alias to easily switch to Java 11: echo \u0026#34;alias java11=\u0026#39;export JAVA_HOME=\\$JAVA_11_HOME\u0026#39;\u0026#34; \u0026gt;\u0026gt; ~/.bash_profile Activate the Java 11 environment by running: java11 This sets JAVA_HOME to point to Java 11. You can switch between different Java versions by using the java11 alias or adjusting the JAVA_HOME environment variable. Make sure to restart your terminal or run source ~/.bash_profile to apply the changes. These instructions will help you install Python 3 and Java 11 on your Mac using Homebrew and set their default paths as per your requirements. 
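For reference, the two echo commands above add this fragment to ~/.bash_profile (a sketch; it assumes AdoptOpenJDK 11 is installed so that /usr/libexec/java_home -v11 resolves):

```shell
# ~/.bash_profile fragment produced by the echo commands above
export JAVA_11_HOME=$(/usr/libexec/java_home -v11)

# Run `java11` to point the current shell session at Java 11
alias java11='export JAVA_HOME=$JAVA_11_HOME'
```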
","date":"01-04-2021","objectID":"/posts/development/mac-install-default-path-languange/:2:0","tags":null,"title":"Mac Install Default Path Languange","uri":"/posts/development/mac-install-default-path-languange/#installing-java-11"},{"categories":["Development"],"collections":null,"content":"Bash autocompletion is a handy feature that can save you time and keystrokes when working in the terminal. It allows you to press the Tab key to automatically complete commands, file names, and more. Here, we\u0026rsquo;ll walk you through the process of enabling bash autocompletion on macOS using Homebrew. ","date":"30-03-2021","objectID":"/posts/development/how-to-enable-bash-autocompletion-on-mac/:0:0","tags":null,"title":"How to Enable Bash Autocompletion on Mac","uri":"/posts/development/how-to-enable-bash-autocompletion-on-mac/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure that you have Homebrew installed. If you don\u0026rsquo;t have it installed, you can get it from Homebrew\u0026rsquo;s official website. ","date":"30-03-2021","objectID":"/posts/development/how-to-enable-bash-autocompletion-on-mac/:1:0","tags":null,"title":"How to Enable Bash Autocompletion on Mac","uri":"/posts/development/how-to-enable-bash-autocompletion-on-mac/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Installation Open your terminal. Install the bash-completion package using Homebrew by running the following command: brew install bash-completion After the installation is complete, the terminal will provide you with instructions on how to enable bash autocompletion. Typically, it will ask you to add a script to your .bash_profile file. 
","date":"30-03-2021","objectID":"/posts/development/how-to-enable-bash-autocompletion-on-mac/:2:0","tags":null,"title":"How to Enable Bash Autocompletion on Mac","uri":"/posts/development/how-to-enable-bash-autocompletion-on-mac/#installation"},{"categories":["Development"],"collections":null,"content":"Enabling Bash Autocompletion To enable bash autocompletion, you need to add the following script to your .bash_profile file: if [ -f /usr/local/etc/profile.d/bash_completion.sh ]; then . /usr/local/etc/profile.d/bash_completion.sh fi Here\u0026rsquo;s how you can do it: Open your .bash_profile file in a text editor. You can use a command-line text editor like nano or a graphical one like Visual Studio Code. Replace nano with your preferred text editor if needed. nano ~/.bash_profile Add the following lines to the file: if [ -f /usr/local/etc/profile.d/bash_completion.sh ]; then . /usr/local/etc/profile.d/bash_completion.sh fi Save the changes and exit the text editor. To apply the changes, either restart your terminal or run the following command: source ~/.bash_profile ","date":"30-03-2021","objectID":"/posts/development/how-to-enable-bash-autocompletion-on-mac/:3:0","tags":null,"title":"How to Enable Bash Autocompletion on Mac","uri":"/posts/development/how-to-enable-bash-autocompletion-on-mac/#enabling-bash-autocompletion"},{"categories":["Development"],"collections":null,"content":"Testing Bash Autocompletion To test if bash autocompletion is working, open a new terminal window or tab, and try typing a command or file path and then press the Tab key. You should see autocompletion suggestions pop up, making your terminal usage more efficient. That\u0026rsquo;s it! You\u0026rsquo;ve successfully enabled bash autocompletion on your Mac using Homebrew. Enjoy the time-saving benefits of this feature while working in the terminal. 
","date":"30-03-2021","objectID":"/posts/development/how-to-enable-bash-autocompletion-on-mac/:4:0","tags":null,"title":"How to Enable Bash Autocompletion on Mac","uri":"/posts/development/how-to-enable-bash-autocompletion-on-mac/#testing-bash-autocompletion"},{"categories":["Development"],"collections":null,"content":"iTerm2 is a popular terminal emulator for macOS that comes with a variety of features to enhance your command-line experience. One such feature is Terminal Completion, which allows you to quickly and efficiently complete commands, file paths, and more using keyboard shortcuts. In this article, we\u0026rsquo;ll explore how to use Terminal Completion in iTerm2 using the Cmd + ; shortcut. ","date":"30-03-2021","objectID":"/posts/development/using-terminal-completion-on-iterm2/:0:0","tags":null,"title":"Using Terminal Completion on iTerm2","uri":"/posts/development/using-terminal-completion-on-iterm2/#"},{"categories":["Development"],"collections":null,"content":"What is Terminal Completion? Terminal Completion, also known as tab completion, is a feature that helps you save time when typing commands in the terminal. It works by automatically suggesting and completing commands, file paths, directory names, and more as you type. This can be especially useful when working with long and complex commands or navigating through a directory structure with many nested folders. ","date":"30-03-2021","objectID":"/posts/development/using-terminal-completion-on-iterm2/:1:0","tags":null,"title":"Using Terminal Completion on iTerm2","uri":"/posts/development/using-terminal-completion-on-iterm2/#what-is-terminal-completion"},{"categories":["Development"],"collections":null,"content":"Enabling Terminal Completion in iTerm2 To use Terminal Completion in iTerm2, you first need to ensure that it\u0026rsquo;s enabled. Follow these steps: Open iTerm2: Launch iTerm2 from your Applications folder or preferred method of opening the application. 
Access Preferences: Click on \u0026ldquo;iTerm2\u0026rdquo; in the menu bar at the top of the screen, and then select \u0026ldquo;Preferences\u0026rdquo; or use the shortcut Cmd + ,. Navigate to Profiles: In the Preferences window, select the \u0026ldquo;Profiles\u0026rdquo; tab. Select Your Profile: Choose the profile you want to enable Terminal Completion for. You can create a new profile or modify an existing one. Edit Profile: Click the \u0026ldquo;Edit Profiles\u0026hellip;\u0026rdquo; button at the bottom of the Profiles tab. Enable Terminal Completion: In the Profile settings, go to the \u0026ldquo;Text\u0026rdquo; tab. Check \u0026ldquo;Enable Terminal Completion\u0026rdquo;: Under the \u0026ldquo;Auto-Mark URLs\u0026rdquo; section, check the box labeled \u0026ldquo;Enable Terminal Completion.\u0026rdquo; Save Profile: Click the \u0026ldquo;Save\u0026rdquo; button to save your profile settings. Terminal Completion is now enabled for the selected profile in iTerm2. ","date":"30-03-2021","objectID":"/posts/development/using-terminal-completion-on-iterm2/:2:0","tags":null,"title":"Using Terminal Completion on iTerm2","uri":"/posts/development/using-terminal-completion-on-iterm2/#enabling-terminal-completion-in-iterm2"},{"categories":["Development"],"collections":null,"content":"Using Terminal Completion Once you have enabled Terminal Completion in iTerm2, you can start using it while working in the terminal. Here\u0026rsquo;s how: Open iTerm2: Launch iTerm2 and open a terminal window using your selected profile. Start Typing a Command: Begin typing a command or a file path in the terminal. Trigger Completion: To trigger Terminal Completion, press the Cmd + ; keyboard shortcut. iTerm2 will attempt to complete the command or path based on what you\u0026rsquo;ve typed so far. Cycle Through Suggestions: If iTerm2 provides multiple suggestions, you can cycle through them by pressing Cmd + ; repeatedly until you find the desired completion. 
Accept Completion: Once the desired completion is shown, you can accept it by pressing the Tab key. This will replace your partially typed command or path with the completed suggestion. Execute the Command: After accepting the completion, you can press Enter to execute the command. ","date":"30-03-2021","objectID":"/posts/development/using-terminal-completion-on-iterm2/:3:0","tags":null,"title":"Using Terminal Completion on iTerm2","uri":"/posts/development/using-terminal-completion-on-iterm2/#using-terminal-completion"},{"categories":["Development"],"collections":null,"content":"Conclusion Terminal Completion in iTerm2 is a powerful feature that can significantly improve your productivity when working in the terminal. By enabling it and using the Cmd + ; shortcut, you can quickly complete commands, file paths, and more, making your command-line tasks more efficient and less error-prone. ","date":"30-03-2021","objectID":"/posts/development/using-terminal-completion-on-iterm2/:4:0","tags":null,"title":"Using Terminal Completion on iTerm2","uri":"/posts/development/using-terminal-completion-on-iterm2/#conclusion"},{"categories":["Development"],"collections":null,"content":"When working with Git repositories, it\u0026rsquo;s often convenient to store your credentials locally to avoid repeatedly entering your username and password or personal access token. Git provides several methods for managing credentials, including caching and using credential helpers. Here, we\u0026rsquo;ll discuss how to store Git credentials locally on different operating systems and how to set a token for each project folder. 
","date":"29-03-2021","objectID":"/posts/development/storing-git-credentials-locally/:0:0","tags":null,"title":"Storing GIT Credentials Locally","uri":"/posts/development/storing-git-credentials-locally/#"},{"categories":["Development"],"collections":null,"content":"Storing Git Credentials Locally ","date":"29-03-2021","objectID":"/posts/development/storing-git-credentials-locally/:1:0","tags":null,"title":"Storing GIT Credentials Locally","uri":"/posts/development/storing-git-credentials-locally/#storing-git-credentials-locally"},{"categories":["Development"],"collections":null,"content":"On Linux To store Git credentials locally on a Linux system, you can use the git config command with the credential.helper setting set to cache. This will cache your credentials for a specified period, typically 15 minutes, before requiring you to re-enter them. ```bash # Open your terminal and navigate to your Git repository folder cd /path/to/your/git/repo # Set Git to cache your credentials git config --global credential.helper cache ### On macOS On macOS, you can use the `osxkeychain` credential helper, which securely stores your Git credentials in the macOS Keychain. This way, you don\u0026#39;t need to re-enter your credentials each time you interact with a Git repository. ```markdown ```bash # Open your terminal and navigate to your Git repository folder cd /path/to/your/git/repo # Set Git to use the macOS Keychain as the credential helper git config --global credential.helper osxkeychain ### On Windows On Windows, Git also provides a credential helper that can store your credentials securely. However, it typically uses the Windows Credential Manager. 
To enable it, use the `wincred` helper. From your Git Bash or Command Prompt, navigate to your repository folder (cd C:\\path\\to\\your\\git\\repo) and run: git config --global credential.helper wincred Removing Stored Credentials If you ever need to remove the stored credentials, unset the credential helper: git config --unset credential.helper Or, to remove it globally: git config --global --unset credential.helper Setting a Token for Each Project Folder To set a personal access token for a specific Git project folder, you can include it in the repository URL when you clone, or modify the repository\u0026#39;s configuration in the local .git/config file. Cloning with a Token You can clone a Git repository with a personal access token directly in the URL like this: git clone https://[USERNAME]:TOKEN@git.example.net/john/shell-configuration.git Modifying the .git/config File Alternatively, you can manually edit the .git/config file inside your local repository and add the token to the remote URL. Open the file from the repository root (for example, nano .git/config), find the [remote \u0026#34;origin\u0026#34;] section, and modify the URL: [remote \u0026#34;origin\u0026#34;] url = https://[USERNAME]:TOKEN@git.example.net/example/example-server.git Replace [USERNAME] with your Git username and TOKEN with your personal access token. This will set the token for that specific Git repository. 
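A related knob: the cache helper\u0026rsquo;s timeout is configurable (900 seconds, i.e. 15 minutes, is git\u0026rsquo;s documented default). A short sketch:

```shell
# Cache credentials for one hour instead of the default 15 minutes
git config --global credential.helper 'cache --timeout=3600'

# Confirm the setting
git config --global credential.helper
# → cache --timeout=3600
```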
By following these steps, you can store Git credentials locally and set personal access tokens for specific project folders, enhancing security and convenience when working with Git repositories.","date":"29-03-2021","objectID":"/posts/development/storing-git-credentials-locally/:1:1","tags":null,"title":"Storing GIT Credentials Locally","uri":"/posts/development/storing-git-credentials-locally/#on-linux"},{"categories":["Development"],"collections":null,"content":"When working with command line interfaces (CLI), it\u0026rsquo;s essential to understand the syntax and formatting used to create and execute commands. The command line syntax can include special characters and conventions that dictate how commands should be structured and what elements are required or optional. Let\u0026rsquo;s break down the common command line syntax elements using examples from both a general explanation and a specific CLI usage sample. ","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:0:0","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#"},{"categories":["Development"],"collections":null,"content":"Command Line Syntax Elements ","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:1:0","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#command-line-syntax-elements"},{"categories":["Development"],"collections":null,"content":"Square Brackets [ ] The square brackets ( [ ] ) in command line syntax indicate that the enclosed element, which can be a parameter, value, or information, is optional. Users have the choice to include one or more items within the square brackets or omit them entirely. It\u0026rsquo;s important not to type the square brackets themselves in the actual command line. Example: [global options]: This means you can include global options if needed. 
[source arguments]: These are optional source-related arguments. [destination arguments]: These are optional destination-related arguments. ","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:1:1","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#square-brackets--"},{"categories":["Development"],"collections":null,"content":"Angle Brackets \u0026lt; \u0026gt; Angle brackets ( \u0026lt; \u0026gt; ) signify that the enclosed element is mandatory. Users are required to replace the text within the angle brackets with the appropriate information, without typing the angle brackets themselves in the command line. Example: -f [set the File Name variable]: You must provide a value for the File Name variable. -printer \u0026lt;*printer name*\u0026gt;: Replace \u0026lt;printer name\u0026gt; with the actual printer name. -repeat \u0026lt;months\u0026gt; \u0026lt;days\u0026gt; \u0026lt;hours\u0026gt; \u0026lt;minutes\u0026gt;: You need to specify values for months, days, hours, and minutes. date access \u0026lt;mm/dd/yyyy\u0026gt;: Replace \u0026lt;mm/dd/yyyy\u0026gt; with a valid date. ","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:1:2","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#angle-brackets--"},{"categories":["Development"],"collections":null,"content":"Ellipsis \u0026hellip; The ellipsis symbol ( ... ) implies \u0026ldquo;and so on\u0026rdquo; and indicates that the preceding element can be repeated multiple times in a command line. This is often used when you can specify multiple items of the same type. Example: -jobid \u0026lt;job id1, job id2, job id3,...\u0026gt;: You can include one or more job IDs. [-exitcode \u0026lt;exit code 1\u0026gt;,\u0026lt;exit code2\u0026gt;,\u0026lt;exit code3\u0026gt; ...]: You can provide one or more exit codes. 
","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:1:3","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#ellipsis-"},{"categories":["Development"],"collections":null,"content":"Pipe | The pipe symbol ( | ) denotes \u0026ldquo;or\u0026rdquo; and represents a choice within an element. If two arguments are separated by the pipe symbol, users can select either the element to the left or the one to the right of the separator. It\u0026rsquo;s not possible to choose both elements in a single command. Within square brackets, the choices are optional, while within angle brackets, at least one choice is required. Example: -ca_backup [-custom|-rotation|-gfsrotation]: You can choose one of the options, such as -custom, -rotation, or -gfsrotation. -excludeday \u0026lt;Sun|Mon|Tue|Wed|Thu|Fri|Sat\u0026gt;: You must select one of the days of the week. ","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:1:4","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#pipe-"},{"categories":["Development"],"collections":null,"content":"Italics Italic text indicates that you need to supply a value for the associated parameter or option. It serves as a placeholder for user-specific input. Example: -sessionpassword *session password*: Replace *session password* with the actual session password. -f [set the File Name variable]: Provide a value for the File Name variable. -printer \u0026lt;*printer name*\u0026gt;: Replace \u0026lt;printer name\u0026gt; with the desired printer name. 
","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:1:5","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#italics"},{"categories":["Development"],"collections":null,"content":"CLI Usage Sample Let\u0026rsquo;s explore a specific CLI usage sample to apply the concepts discussed above: ","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:2:0","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#cli-usage-sample"},{"categories":["Development"],"collections":null,"content":"Naval Fate CLI Usage: naval_fate ship new \u0026lt;name\u0026gt;... naval_fate ship \u0026lt;name\u0026gt; move \u0026lt;x\u0026gt; \u0026lt;y\u0026gt; [--speed=\u0026lt;kn\u0026gt;] naval_fate ship shoot \u0026lt;x\u0026gt; \u0026lt;y\u0026gt; naval_fate mine (set|remove) \u0026lt;x\u0026gt; \u0026lt;y\u0026gt; [--moored|--drifting] naval_fate -h | --help naval_fate --version Options: -h --help: Display the help screen. --version: Show the version. --speed=\u0026lt;kn\u0026gt;: Specify the speed in knots (default is 10). --moored: Set a moored (anchored) mine. --drifting: Set a drifting mine. In this example: \u0026lt;name\u0026gt; in the naval_fate ship new \u0026lt;name\u0026gt;... command is mandatory, and you can provide one or more names. \u0026lt;name\u0026gt;, \u0026lt;x\u0026gt;, and \u0026lt;y\u0026gt; in various ship-related commands are mandatory placeholders. [--speed=\u0026lt;kn\u0026gt;] in the ship move command is optional, allowing you to specify the speed. (set|remove) in the mine command presents a choice between \u0026lsquo;set\u0026rsquo; and \u0026lsquo;remove\u0026rsquo;. [--moored|--drifting] in the mine command offers a choice between \u0026lsquo;moored\u0026rsquo; and \u0026lsquo;drifting\u0026rsquo;. 
-h or --help and --version are standalone options that can be used without additional arguments. Understanding these syntax elements is crucial for effectively using and constructing commands in a CLI. ","date":"29-03-2021","objectID":"/posts/development/understanding-command-line-syntax/:2:1","tags":null,"title":"Understanding Command Line Syntax","uri":"/posts/development/understanding-command-line-syntax/#naval-fate-cli"},{"categories":["Development"],"collections":null,"content":"In SSH configuration files, you can define multiple hosts with different settings. To list all the defined hosts in your SSH config file using the sed command, you can use the following command: sed -n \u0026#39;/^#/!s/Host //p\u0026#39; ~/.ssh/config Here\u0026#39;s what this command does: - `sed` is a stream editor for filtering and transforming text. - `-n` tells `sed` to suppress automatic printing. - `/^#/!s/Host //p` is a `sed` expression: - `/^#/` matches lines that start with `#`, which are comments in the SSH config file. - `!` negates the match, so it selects lines that do not start with `#`. - `s/Host //` replaces the word \u0026#34;Host\u0026#34; with an empty string, effectively removing it from the line. - `p` instructs `sed` to print the modified lines. When you run this command, it will display a list of host names defined in your SSH config file, excluding any commented-out entries. This can be useful to quickly see the configured hosts on your system.","date":"28-03-2021","objectID":"/posts/development/list-ssh-config-host/:0:0","tags":null,"title":"List SSH Config Host","uri":"/posts/development/list-ssh-config-host/#"},{"categories":["Development"],"collections":null,"content":"Secure Shell (SSH) is a widely-used protocol for securely connecting to remote servers over an untrusted network, such as the internet. SSH ensures the confidentiality and integrity of data exchanged between the client and server. However, managing SSH connections to servers with complex network configurations can be challenging. This is where SSH Proxy Jump, also known as SSH Jump Host or SSH Bastion Host, comes in handy. 
SSH Proxy Jump allows you to connect to a target server through an intermediate server, known as a jump host or bastion host. This intermediate server acts as a gateway, helping you traverse complex network topologies while maintaining security. In this article, we\u0026rsquo;ll explore how to use SSH Proxy Jump both via the command line and through SSH configuration files. ","date":"25-03-2021","objectID":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/:0:0","tags":null,"title":"SSH Proxy Jump: Simplifying Secure SSH Connections","uri":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/#"},{"categories":["Development"],"collections":null,"content":"Via Command Line You can establish an SSH connection using Proxy Jump directly from the command line. Here\u0026rsquo;s the basic syntax: ssh -J jump_host:port target_host -t command In this command: jump_host:port is the address and port of the jump host. target_host is the address of the final destination. command is an optional command to run on the target host. For example, to open a Vim session on the example-storage-server via the origin.example.net jump host on port 667, you can use the following command: ssh -J origin.example.net:667 example-storage-server -t vim ","date":"25-03-2021","objectID":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/:1:0","tags":null,"title":"SSH Proxy Jump: Simplifying Secure SSH Connections","uri":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/#via-command-line"},{"categories":["Development"],"collections":null,"content":"Via SSH Configuration File Using SSH Proxy Jump via the command line can be convenient for one-off connections. However, if you frequently connect to servers through a jump host, it\u0026rsquo;s more practical to configure SSH to do this automatically. You can achieve this by editing the SSH configuration file, typically located at ~/.ssh/config. 
Here\u0026rsquo;s an example of how to set up SSH Proxy Jump in your configuration file: Host jump_host HostName origin.example.net Port 667 User root Host target_host HostName example-storage-server.local User root ProxyJump jump_host In this example: jump_host is an alias for the jump host with its hostname, port, and user specified. target_host is an alias for the final destination server. ProxyJump jump_host tells SSH to use jump_host as the intermediary to connect to target_host. With this configuration in place, you can connect to the example-storage-server with a simple command: ssh target_host SSH will automatically use the jump_host as the proxy to reach the example-storage-server. ","date":"25-03-2021","objectID":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/:2:0","tags":null,"title":"SSH Proxy Jump: Simplifying Secure SSH Connections","uri":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/#via-ssh-configuration-file"},{"categories":["Development"],"collections":null,"content":"Conclusion SSH Proxy Jump is a powerful feature that simplifies secure SSH connections, especially in complex network environments. Whether you prefer using the command line or SSH configuration files, Proxy Jump can make your remote server management more efficient and secure. By following the examples provided in this article, you can easily set up SSH Proxy Jump to streamline your SSH connections. ","date":"25-03-2021","objectID":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/:3:0","tags":null,"title":"SSH Proxy Jump: Simplifying Secure SSH Connections","uri":"/posts/development/ssh-proxy-jump-simplifying-secure-ssh-connections/#conclusion"},{"categories":["Development"],"collections":null,"content":"Vim is a powerful text editor that offers a wide range of features and commands to enhance your productivity. 
In this article, we will explore two essential Vim commands that will help you manage log messages and check mapping details with plugins. ","date":"21-03-2021","objectID":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/:0:0","tags":null,"title":"Essential Vim Commands for Log Messages and Plugin Mapping Details","uri":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/#"},{"categories":["Development"],"collections":null,"content":"1. Viewing Log Messages In Vim, log messages can be useful for debugging or understanding the history of your editing session. To view all log messages, you can use the :messages command. Here\u0026rsquo;s how it works: :messages When you execute this command, Vim will display a list of recent messages in the message area. These messages can include information about recent commands, error messages, or any other informative output. You can navigate through these messages using normal Vim navigation commands. ","date":"21-03-2021","objectID":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/:1:0","tags":null,"title":"Essential Vim Commands for Log Messages and Plugin Mapping Details","uri":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/#1-viewing-log-messages"},{"categories":["Development"],"collections":null,"content":"2. Checking Mappings with Plugins Vim plugins often define custom key mappings to enhance functionality. To check all the mappings related to a specific plugin and see their details, you can use the :verbose command followed by the inoremap or imap command. Here\u0026rsquo;s how you can do it: :verbose inoremap This command will list all the insert mode mappings (keybindings) and display detailed information about which plugin or script defined each mapping. Replacing inoremap with imap widens the listing to include recursive insert mode mappings as well; dropping the :verbose prefix lists the mappings without the details about their origin. 
When you execute this command, Vim will show you a list of mappings, and for each mapping, it will specify the file and line number where the mapping was defined. This information can be invaluable for troubleshooting or customizing your Vim environment. ","date":"21-03-2021","objectID":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/:2:0","tags":null,"title":"Essential Vim Commands for Log Messages and Plugin Mapping Details","uri":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/#2-checking-mappings-with-plugins"},{"categories":["Development"],"collections":null,"content":"Conclusion These two Vim commands, :messages and :verbose inoremap, are essential for managing log messages and checking mapping details with plugins. They provide valuable insights into your Vim session and help you troubleshoot and customize your editor effectively. Incorporate these commands into your Vim workflow to become a more efficient and productive Vim user. ","date":"21-03-2021","objectID":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/:3:0","tags":null,"title":"Essential Vim Commands for Log Messages and Plugin Mapping Details","uri":"/posts/development/essential-vim-commands-for-log-messages-and-plugin-mapping-details/#conclusion"},{"categories":["Development"],"collections":null,"content":"To connect to a pod in Kubernetes using kubectl port-forward, you can follow the command you provided: kubectl port-forward kubernetes-dashboard-7798c48646-ctrtl 8443:8443 --namespace=kube-system This command is useful when you want to access a service running inside a Kubernetes pod from your local machine. Here\u0026rsquo;s a breakdown of the command: kubectl port-forward: This is the command for port forwarding in Kubernetes. kubernetes-dashboard-7798c48646-ctrtl: This is the name of the pod you want to connect to. Replace it with the actual name of the pod you want to access. 
8443:8443: This specifies the port forwarding configuration. It forwards port 8443 on your local machine to port 8443 on the pod. You can adjust the port numbers as needed. --namespace=kube-system: This flag specifies the namespace in which the pod is located. In this case, it\u0026rsquo;s in the kube-system namespace. After running this command, you can access the service running inside the pod on your local machine by connecting to https://localhost:8443 in your web browser. Make sure that the service you want to access is listening on port 8443 inside the pod for this to work. Remember to replace kubernetes-dashboard-7798c48646-ctrtl with the actual name of the pod you want to connect to, and ensure that the pod is running and accessible in the specified namespace. ","date":"20-03-2021","objectID":"/posts/development/connect-kubectl-pods-kubernetes/:0:0","tags":null,"title":"Connect Kubectl pods Kubernetes","uri":"/posts/development/connect-kubectl-pods-kubernetes/#"},{"categories":["Development"],"collections":null,"content":"The macOS Terminal.app uses a series of scripts and configuration files to set up the shell environment before you see the command prompt. Here\u0026rsquo;s an overview of these files and how they are executed: /etc/profile: This is a system-wide configuration file that is executed for all users when they start a new shell session. It sets up environment variables and configurations that are applicable to all users. /etc/bashrc: This file is also system-wide and is typically sourced (executed) by /etc/profile. It can contain additional configurations and environment variables that apply to all users and all shell sessions. /etc/bashrc_Apple_Terminal: This is a macOS-specific extension of the /etc/bashrc file. It contains configurations that are specifically intended for Terminal.app. ~/.bash_profile: This is a user-specific configuration file that is executed when a user logs in. 
If this file exists, it takes precedence over other configuration files. Users can use this file to set up their own custom environment variables and configurations. ~/.bash_login: If ~/.bash_profile does not exist, the shell looks for ~/.bash_login and executes it if it\u0026rsquo;s present. This file is another option for users to customize their shell environment. ~/.profile: If neither ~/.bash_profile nor ~/.bash_login exists, the shell will use ~/.profile for user-specific configurations. ~/.bashrc: Users can optionally choose to source (execute) ~/.bashrc from their ~/.bash_profile. This allows them to keep common configurations in ~/.bashrc and have them applied to both login and non-login shell sessions. These files and their execution order allow users and system administrators to customize and configure their shell environment to meet their specific needs. It\u0026rsquo;s important to note that these files are typically written in Bash script, so you can use them to define environment variables, customize the prompt, set aliases, and more to tailor your shell experience on macOS. ","date":"19-03-2021","objectID":"/posts/development/about-bashprofile-and-bashrc-on-mac-os/:0:0","tags":null,"title":"About Bash Profile And Bash.Rc On Mac OS","uri":"/posts/development/about-bashprofile-and-bashrc-on-mac-os/#"},{"categories":["Development"],"collections":null,"content":"If you want to move all files from Google Drive except for the ones with specific Google extensions (.gshortcut, .gdoc, .gsheet, .gslides, .gform, .gjam, .gmap, .gsite), you can use the rsync command with the --exclude option to specify the file extensions to be excluded. Additionally, you can exclude common system files like \u0026ldquo;Icon?\u0026rdquo; and \u0026ldquo;.DS_Store\u0026rdquo; to avoid transferring them. 
Here\u0026rsquo;s the command to achieve this: rsync -avP --exclude=\u0026#34;*.gshortcut\u0026#34; --exclude=\u0026#34;*.gdoc\u0026#34; --exclude=\u0026#34;*.gsheet\u0026#34; --exclude=\u0026#34;*.gslides\u0026#34; --exclude=\u0026#34;*.gform\u0026#34; --exclude=\u0026#34;*.gjam\u0026#34; --exclude=\u0026#34;*.gmap\u0026#34; --exclude=\u0026#34;*.gsite\u0026#34; --exclude=\u0026#34;Icon?\u0026#34; --exclude=\u0026#34;.DS_Store\u0026#34; --remove-source-files \u0026#34;Google Drive (example@gmail.com)/example/\u0026#34; ./temp Explanation of the command: rsync: The command for syncing files and directories. -avP: Options for rsync: -a: Archive mode, which preserves permissions, ownership, timestamps, and more. -v: Verbose mode, which displays detailed information about the syncing process. -P: Progress option, which shows progress during the transfer and allows you to resume interrupted transfers. --exclude: This option specifies the patterns for files or directories to be excluded from the sync. You\u0026rsquo;ve listed all the Google extension file types to be excluded, as well as \u0026ldquo;Icon?\u0026rdquo; and \u0026ldquo;.DS_Store\u0026rdquo; files. --remove-source-files: This option removes the transferred files from the source directory after a successful transfer. \u0026quot;Google Drive (example@gmail.com)/example/\u0026quot;: This is the source directory path. Make sure to adjust it to the actual path of your Google Drive. ./temp: This is the destination directory where the files will be moved to. You can change this path as needed. Please replace \u0026quot;Google Drive (example@gmail.com)/example/\u0026quot; with the actual path to your Google Drive directory, and \u0026quot;./temp\u0026quot; with the desired destination directory where you want to move the files. 
After running this command, all the specified files from your Google Drive (except those with the specified extensions) will be moved to the destination directory, and the source files will be removed. ","date":"07-02-2021","objectID":"/posts/development/moving-files-from-google-drive-except-google-drive-files/:0:0","tags":null,"title":"Moving Files from Google Drive Except Google Drive Files","uri":"/posts/development/moving-files-from-google-drive-except-google-drive-files/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re a macOS user and want to run a shell script by double-clicking it in the Finder, you might encounter an issue with the working directory. By default, the working directory of the script becomes the user\u0026rsquo;s home directory, which can lead to unexpected behavior if your script relies on relative file paths. To ensure that your script runs with the correct working directory, you can include a specific command at the beginning of your script. ","date":"22-01-2021","objectID":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/:0:0","tags":null,"title":"Running a Shell Script from Finder and Keeping the Filepath","uri":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/#"},{"categories":["Development"],"collections":null,"content":"The Issue When you double-click a shell script in the Finder, the script is executed, but the working directory is set to your home directory (~). This can cause problems if your script references files or paths that are relative to the location of the script itself. 
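The mismatch is easy to demonstrate with a hypothetical demo.sh (not from the original article): invoke a script by its absolute path from some other directory, just as Finder effectively does, and compare the working directory with the script\u0026rsquo;s own location:

```shell
#!/bin/bash
# demo.sh -- imagine this lives in ~/projects/ and reads ./data.txt.
# Launched from Finder (or from any other directory), $(pwd) is the
# caller's directory, so ./data.txt would resolve to the wrong place.
echo "pwd:    $(pwd)"
echo "script: $0"
```

Running it from your home directory prints the home directory as pwd, while $0 still points into ~/projects/ -- exactly the gap the next section closes.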
","date":"22-01-2021","objectID":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/:1:0","tags":null,"title":"Running a Shell Script from Finder and Keeping the Filepath","uri":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/#the-issue"},{"categories":["Development"],"collections":null,"content":"The Solution To ensure that your script runs with the correct working directory – the directory where the script is located – you can include a few lines of code at the beginning of your script. Here\u0026rsquo;s how you can do it: #!/bin/bash # Change the working directory to the location of the script cd -- \u0026#34;$(dirname \u0026#34;$0\u0026#34;)\u0026#34; # Your script commands go here # For example: echo \u0026#34;Running script from $(pwd)\u0026#34; Let\u0026rsquo;s break down what this code does: #!/bin/bash: This is known as a shebang line. It specifies the interpreter (in this case, Bash) that should be used to run the script. cd -- \u0026quot;$(dirname \u0026quot;$0\u0026quot;)\u0026quot;: This line changes the working directory to the location of the script. The \u0026quot;$0\u0026quot; refers to the path of the script itself, and dirname extracts the directory name from the path. The cd command then changes the directory to the script\u0026rsquo;s location. After the cd command, you can include your script\u0026rsquo;s actual commands. In the example above, we\u0026rsquo;ve included a simple echo command to demonstrate that the script is indeed running in the correct directory. By including these lines at the beginning of your shell script, you ensure that it always runs with the correct working directory. 
","date":"22-01-2021","objectID":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/:2:0","tags":null,"title":"Running a Shell Script from Finder and Keeping the Filepath","uri":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/#the-solution"},{"categories":["Development"],"collections":null,"content":"Implementation Open a text editor of your choice, such as TextEdit, Visual Studio Code, or Sublime Text. Copy and paste the code snippet mentioned above into the text editor. Replace the placeholder commands (echo \u0026quot;Running script from $(pwd)\u0026quot;) with your actual script\u0026rsquo;s commands. Save the file with a .sh extension. For example, you could save it as myscript.sh. Open the Terminal and navigate to the directory where you saved the script. Make the script executable by running the following command in the Terminal: chmod +x myscript.sh Close the Terminal. Now, when you double-click the myscript.sh file in the Finder, it will execute with the correct working directory. ","date":"22-01-2021","objectID":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/:3:0","tags":null,"title":"Running a Shell Script from Finder and Keeping the Filepath","uri":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/#implementation"},{"categories":["Development"],"collections":null,"content":"Conclusion Running a shell script by double-clicking it in the Finder can be convenient, but it\u0026rsquo;s important to ensure that the script runs with the correct working directory. By adding a few lines of code to change the directory to the script\u0026rsquo;s location, you can avoid unexpected issues related to file paths and ensure that your script behaves as expected. Feel free to customize the example and instructions based on your specific use case and preferences. 
Just make sure to include the necessary steps for creating and running the shell script with the correct working directory. ","date":"22-01-2021","objectID":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/:4:0","tags":null,"title":"Running a Shell Script from Finder and Keeping the Filepath","uri":"/posts/development/running-a-shell-script-from-finder-and-keeping-the-filepath/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re trying to run Selenium with Chrome headless on Ubuntu WSL and encountering issues, an alternative approach is to install the Selenium Standalone Server on your Windows 10 machine and then connect to it remotely from the Ubuntu WSL terminal. This can help you overcome the limitations of running Chrome headless directly within WSL. Here\u0026rsquo;s a step-by-step guide on how to achieve this: ","date":"22-01-2021","objectID":"/posts/development/running-selenium-on-windows-10-from-ubuntu-wsl/:0:0","tags":null,"title":"Running Selenium on Windows 10 from Ubuntu WSL","uri":"/posts/development/running-selenium-on-windows-10-from-ubuntu-wsl/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Set Up Selenium Standalone Server on Windows 10 Install Java: Ensure you have Java installed on your Windows 10 machine since Selenium requires it to run. You can download and install Java from the official website. Download Selenium Standalone Server: Visit the Selenium Downloads page and download the latest version of the Selenium Standalone Server (selenium-server-standalone-x.xx.x.jar). Start Selenium Server: Open Command Prompt on Windows and navigate to the directory where you\u0026rsquo;ve downloaded the Selenium Standalone Server JAR file. Run the following command to start the server: java -jar selenium-server-standalone-x.xx.x.jar Make sure to replace x.xx.x with the actual version number you\u0026rsquo;ve downloaded. 
The Selenium Server should now be running and listening for incoming WebDriver requests. ","date":"22-01-2021","objectID":"/posts/development/running-selenium-on-windows-10-from-ubuntu-wsl/:1:0","tags":null,"title":"Running Selenium on Windows 10 from Ubuntu WSL","uri":"/posts/development/running-selenium-on-windows-10-from-ubuntu-wsl/#step-1-set-up-selenium-standalone-server-on-windows-10"},{"categories":["Development"],"collections":null,"content":"Step 2: Connect from Ubuntu WSL Install Dependencies: In your Ubuntu WSL terminal, you\u0026rsquo;ll need to install the required dependencies, including Python and the Selenium Python package. If you haven\u0026rsquo;t already, you can do this using: sudo apt update sudo apt install python3 python3-pip pip3 install selenium Write Python Script: Create a Python script (e.g., remote_selenium.py) with the following content: from selenium import webdriver # Replace \u0026#39;YOUR_WINDOWS_IP\u0026#39; with the actual IP address of your Windows 10 machine windows_ip = \u0026#39;YOUR_WINDOWS_IP\u0026#39; windows_port = 4444 # Default port for Selenium Server # Set up Chrome options chrome_options = webdriver.ChromeOptions() chrome_options.add_argument(\u0026#39;--headless\u0026#39;) chrome_options.add_argument(\u0026#39;--no-sandbox\u0026#39;) chrome_options.add_argument(\u0026#39;--disable-dev-shm-usage\u0026#39;) # Connect to the remote Selenium Server remote_url = f\u0026#39;http://{windows_ip}:{windows_port}/wd/hub\u0026#39; driver = webdriver.Remote(command_executor=remote_url, desired_capabilities=chrome_options.to_capabilities()) # Now you can use \u0026#39;driver\u0026#39; to interact with the remote Chrome instance # For example: driver.get(\u0026#39;https://www.example.com\u0026#39;) print(driver.title) # Don\u0026#39;t forget to close the driver driver.quit() Replace 'YOUR_WINDOWS_IP' with the actual IP address of your Windows 10 machine. 
Run the Script: Execute the Python script using Python 3 in your Ubuntu WSL terminal: python3 remote_selenium.py This setup allows you to run Selenium-driven Chrome headless instances remotely from your Windows 10 machine while interacting with them from your Ubuntu WSL terminal. Remember that this approach requires network connectivity between your WSL instance and Windows machine. Also, ensure that any firewalls or security settings don\u0026rsquo;t block the communication between them. ","date":"22-01-2021","objectID":"/posts/development/running-selenium-on-windows-10-from-ubuntu-wsl/:2:0","tags":null,"title":"Running Selenium on Windows 10 from Ubuntu WSL","uri":"/posts/development/running-selenium-on-windows-10-from-ubuntu-wsl/#step-2-connect-from-ubuntu-wsl"},{"categories":["Development"],"collections":null,"content":"To resize a file to a specific size and remove lines from the center of the file in Ubuntu, you can use a combination of commands like truncate and head and tail. Here\u0026rsquo;s how you can achieve this: Resize the File to 10MB using truncate: To resize a file to a specific size, you can use the truncate command with the --size (-s) option. In this case, we want to resize the file other_vhosts_access.log to 10MB: truncate --size 10M other_vhosts_access.log This command will resize the file other_vhosts_access.log to exactly 10 megabytes. If the file was larger than 10MB, it will be truncated, and if it was smaller, it will be padded with null bytes. Remove Lines from the Center of the File: To remove lines from the center of a file, you can use a combination of head and tail commands. 
For example, if you want to remove lines from the center of the file, leaving only the first 1000 lines and the last 1000 lines, you can do the following: head -n 1000 other_vhosts_access.log \u0026gt; temp.log tail -n 1000 other_vhosts_access.log \u0026gt;\u0026gt; temp.log mv temp.log other_vhosts_access.log Here\u0026rsquo;s what each command does: head -n 1000 other_vhosts_access.log: This command extracts the first 1000 lines of the original file and redirects them to a temporary file called temp.log. tail -n 1000 other_vhosts_access.log: This command extracts the last 1000 lines of the original file and appends them to the temp.log file. mv temp.log other_vhosts_access.log: Finally, this command renames temp.log to other_vhosts_access.log, effectively replacing the original file with the modified one. This sequence of commands will resize the file to 10MB and remove lines from the center, leaving only the first and last 1000 lines in the other_vhosts_access.log file. ","date":"12-01-2021","objectID":"/posts/development/resizing-a-file-in-ubuntu-by-a-specific-size-and-removing-lines-from-the-center/:0:0","tags":null,"title":"Resizing a File in Ubuntu by a Specific Size and Removing Lines from the Center","uri":"/posts/development/resizing-a-file-in-ubuntu-by-a-specific-size-and-removing-lines-from-the-center/#"},{"categories":["Development"],"collections":null,"content":"Mitmproxy is a powerful tool that allows you to intercept, modify, and inspect network traffic. It\u0026rsquo;s commonly used for debugging, security testing, and analyzing HTTP/HTTPS traffic. In this article, we\u0026rsquo;ll explore how to trace HTTPS requests using mitmproxy, both with a regular proxy and a transparent proxy setup. 
","date":"11-01-2021","objectID":"/posts/development/tracing-https-requests-using-mitmproxy/:0:0","tags":null,"title":"Tracing HTTPS Requests Using mitmproxy","uri":"/posts/development/tracing-https-requests-using-mitmproxy/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you start, make sure you have mitmproxy installed. You can install it using pip: pip install mitmproxy ","date":"11-01-2021","objectID":"/posts/development/tracing-https-requests-using-mitmproxy/:0:1","tags":null,"title":"Tracing HTTPS Requests Using mitmproxy","uri":"/posts/development/tracing-https-requests-using-mitmproxy/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Regular Proxy Setup Launch mitmproxy: Start mitmproxy by running the following command: mitmproxy Configure Proxy Settings: Configure your device or application to use mitmproxy as a regular proxy. The proxy settings should point to 127.0.0.1 (localhost) on port 8080, which is the default port mitmproxy listens on. Inspect HTTPS Traffic: As you use your device or application, mitmproxy will intercept the traffic. You can navigate through the mitmproxy interface using the command-line keys. To view detailed information about a request or response, select it and press Enter. Note that mitmproxy generates a self-signed SSL certificate for the intercepted domains. This can trigger security warnings in your browser or application. You can either choose to trust the certificate or install mitmproxy\u0026rsquo;s root certificate on your device. 
","date":"11-01-2021","objectID":"/posts/development/tracing-https-requests-using-mitmproxy/:0:2","tags":null,"title":"Tracing HTTPS Requests Using mitmproxy","uri":"/posts/development/tracing-https-requests-using-mitmproxy/#regular-proxy-setup"},{"categories":["Development"],"collections":null,"content":"Transparent Proxy Setup Enable IP Forwarding: For transparent proxying, you need to enable IP forwarding on your machine: echo 1 \u0026gt; /proc/sys/net/ipv4/ip_forward Configure iptables: Set up iptables rules to redirect traffic to the mitmproxy port (8080 by default): iptables -t nat -A PREROUTING -p tcp --destination-port 443 -j REDIRECT --to-port 8080 This rule redirects all outgoing HTTPS traffic to mitmproxy. Launch mitmproxy: Start mitmproxy as before: mitmproxy Inspect Transparently Proxied Traffic: Since the traffic is now being transparently redirected through mitmproxy, you don\u0026rsquo;t need to manually configure proxy settings on your device or application. Simply use your device normally, and mitmproxy will intercept the HTTPS traffic. ","date":"11-01-2021","objectID":"/posts/development/tracing-https-requests-using-mitmproxy/:0:3","tags":null,"title":"Tracing HTTPS Requests Using mitmproxy","uri":"/posts/development/tracing-https-requests-using-mitmproxy/#transparent-proxy-setup"},{"categories":["Development"],"collections":null,"content":"Mitigating Certificate Errors When intercepting HTTPS traffic, mitmproxy generates its own SSL certificate for the intercepted domains. This can cause SSL/TLS certificate errors in your browser or application. To avoid this, you can install mitmproxy\u0026rsquo;s root certificate on your device. The certificate can be found in the mitmproxy data directory. 
","date":"11-01-2021","objectID":"/posts/development/tracing-https-requests-using-mitmproxy/:0:4","tags":null,"title":"Tracing HTTPS Requests Using mitmproxy","uri":"/posts/development/tracing-https-requests-using-mitmproxy/#mitigating-certificate-errors"},{"categories":["Development"],"collections":null,"content":"Conclusion Mitmproxy is a versatile tool for tracing HTTPS requests using both regular and transparent proxy setups. Whether you\u0026rsquo;re debugging network issues, analyzing application behavior, or conducting security assessments, mitmproxy provides a powerful way to intercept and inspect encrypted traffic. Just remember that intercepting HTTPS traffic should be done responsibly and only on systems you have the legal right to control. ","date":"11-01-2021","objectID":"/posts/development/tracing-https-requests-using-mitmproxy/:0:5","tags":null,"title":"Tracing HTTPS Requests Using mitmproxy","uri":"/posts/development/tracing-https-requests-using-mitmproxy/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re running Ubuntu, you might accumulate a large amount of journal logs over time. These logs can take up valuable disk space. Fortunately, you can easily delete old journal logs to free up space on your system. Here\u0026rsquo;s how you can do it using the journalctl command with the --vacuum-time option. ","date":"01-01-2021","objectID":"/posts/development/how-to-delete-old-journal-logs-in-ubuntu/:0:0","tags":null,"title":"How to Delete Old Journal Logs in Ubuntu","uri":"/posts/development/how-to-delete-old-journal-logs-in-ubuntu/#"},{"categories":["Development"],"collections":null,"content":"Deleting Old Journal Logs To delete old journal logs in Ubuntu, follow these steps: Open a terminal window. You can do this by pressing Ctrl+Alt+T or searching for \u0026ldquo;Terminal\u0026rdquo; in the application menu. 
In the terminal, use the journalctl command with the --vacuum-time option to specify the time period for which you want to keep the logs. For example, to delete journal logs older than 30 days, you can run the following command: sudo journalctl --vacuum-time=30d The 30d parameter specifies that logs older than 30 days should be deleted. You can adjust the number of days to your preference. The command runs without asking for confirmation, so double-check the retention period before you press Enter. It will clean up the old journal logs and print a summary of how much disk space was freed. ","date":"01-01-2021","objectID":"/posts/development/how-to-delete-old-journal-logs-in-ubuntu/:1:0","tags":null,"title":"How to Delete Old Journal Logs in Ubuntu","uri":"/posts/development/how-to-delete-old-journal-logs-in-ubuntu/#deleting-old-journal-logs"},{"categories":["Development"],"collections":null,"content":"Checking Disk Space Usage If you want to check how much disk space your journal logs are currently occupying before and after the cleanup, you can use the journalctl command with the --disk-usage option: journalctl --disk-usage This command will display the current disk usage of your journal logs. That\u0026rsquo;s it! You\u0026rsquo;ve successfully deleted old journal logs on your Ubuntu system, freeing up disk space in the process. Regularly cleaning up old logs can help keep your system running smoothly and prevent unnecessary disk space usage. ","date":"01-01-2021","objectID":"/posts/development/how-to-delete-old-journal-logs-in-ubuntu/:2:0","tags":null,"title":"How to Delete Old Journal Logs in Ubuntu","uri":"/posts/development/how-to-delete-old-journal-logs-in-ubuntu/#checking-disk-space-usage"},{"categories":["Development"],"collections":null,"content":"In Bash, you can check whether the shell is running in an interactive or login mode using the provided commands. 
Here\u0026rsquo;s an explanation of each command and what it checks for: [[ $- == *i* ]] \u0026amp;\u0026amp; echo 'Interactive' || echo 'Not interactive': This command checks the value of the special shell variable $-, which contains a string of options and flags that are currently set for the shell. The *i* pattern is used to check if the letter \u0026lsquo;i\u0026rsquo; appears anywhere in the value of $-. If it does, it indicates that the shell is running in interactive mode. If \u0026lsquo;i\u0026rsquo; is found, it echoes \u0026lsquo;Interactive\u0026rsquo;, otherwise, it echoes \u0026lsquo;Not interactive\u0026rsquo;. shopt -q login_shell \u0026amp;\u0026amp; echo 'Login shell' || echo 'Not login shell': This command uses the shopt built-in command to check the status of a shell option called login_shell. If shopt -q login_shell returns true (exit status 0), it means that the shell is a login shell, so it echoes \u0026lsquo;Login shell\u0026rsquo;. If shopt -q login_shell returns false (exit status non-zero), it means that the shell is not a login shell, so it echoes \u0026lsquo;Not login shell\u0026rsquo;. You can use these commands in your Bash scripts or in the terminal to determine whether the shell is running interactively or as a login shell. 
Here\u0026rsquo;s the code in markdown format: ```bash # Check if the shell is running in interactive or non-interactive mode [[ $- == *i* ]] \u0026amp;\u0026amp; echo \u0026#39;Interactive\u0026#39; || echo \u0026#39;Not interactive\u0026#39; # Check if the shell is a login shell or not shopt -q login_shell \u0026amp;\u0026amp; echo \u0026#39;Login shell\u0026#39; || echo \u0026#39;Not login shell\u0026#39; ``` You can run this code in a Bash terminal to see the output based on the current shell\u0026#39;s mode.","date":"11-12-2020","objectID":"/posts/development/bash-check-if-shell-on-interactive-or-login/:0:0","tags":null,"title":"Bash Check if Shell on Interactive or Login","uri":"/posts/development/bash-check-if-shell-on-interactive-or-login/#"},{"categories":["Development"],"collections":null,"content":"When copying and pasting source code into the VIM text editor, you might encounter issues with incorrect indentation due to the way VIM handles auto-indentation. This happens because VIM\u0026rsquo;s default behavior tries to adjust the indentation based on the surrounding code, which can lead to unwanted results when pasting code from external sources. To prevent this, you can use the :set paste and :set nopaste commands to toggle the paste mode. Here\u0026rsquo;s how to do it: Enter Paste Mode To prevent VIM from automatically adjusting the indentation while pasting code, follow these steps: Open the terminal or command prompt. Launch VIM by typing vim followed by the name of the file you want to edit. Press Esc to ensure you\u0026rsquo;re in normal mode. Enter paste mode by typing :set paste and pressing Enter. You won\u0026rsquo;t see any visual feedback, but VIM is now in paste mode. Paste the Code Now you can paste the code you want to insert from your clipboard: Right-click on the terminal or press Ctrl+Shift+V to paste the code. The code will be inserted without any automatic indentation adjustments. 
Exit Paste Mode After pasting the code, you should exit paste mode to restore VIM\u0026rsquo;s normal behavior: Press Esc to ensure you\u0026rsquo;re in normal mode. Type :set nopaste and press Enter to exit paste mode. Again, there won\u0026rsquo;t be much visual feedback, but VIM is now back to its regular indentation behavior. Adjust Indentation Manually (Optional) If the pasted code requires some manual adjustments, you can make them after exiting paste mode. Navigate through the code using VIM\u0026rsquo;s navigation keys (h, j, k, l) and adjust the indentation as needed using \u0026gt;\u0026gt; to indent right and \u0026lt;\u0026lt; to indent left. Save and Exit Once you\u0026rsquo;ve pasted and adjusted the code, you can save the changes and exit VIM: Press Esc to ensure you\u0026rsquo;re in normal mode. Type :w and press Enter to save the changes. Type :q and press Enter to exit VIM. By using the :set paste and :set nopaste commands, you can easily control VIM\u0026rsquo;s indentation behavior while pasting source code, preventing unwanted auto-indentation adjustments. This method ensures that your code remains correctly formatted and reduces the need for manual adjustments after pasting. ","date":"11-12-2020","objectID":"/posts/development/how-to-fix-wrong-indentation-when-pasting-source-code-in-vim/:0:0","tags":null,"title":"How to Fix Wrong Indentation When Pasting Source Code in VIM","uri":"/posts/development/how-to-fix-wrong-indentation-when-pasting-source-code-in-vim/#"},{"categories":["Development"],"collections":null,"content":"WSL 2 (Windows Subsystem for Linux 2) has gained significant popularity among developers for its ability to run a full Linux kernel alongside the Windows operating system. However, one common issue that users may face is RAM overload on the host machine. This can lead to performance degradation and even system crashes. 
One approach to mitigate this problem is to limit memory and processor usage for WSL 2 by editing the .wslconfig configuration file on the Windows host. ","date":"08-12-2020","objectID":"/posts/development/wsl2-ram-overload-om-host-draftmd/:0:0","tags":null,"title":"Wsl2 Ram Overload Om Host (draft)","uri":"/posts/development/wsl2-ram-overload-om-host-draftmd/#"},{"categories":["Development"],"collections":null,"content":"Understanding the Problem WSL 2 operates by running a lightweight virtual machine (VM) that provides a Linux environment within the host Windows system. While this VM offers a seamless experience, it can consume a substantial amount of system resources, especially RAM. If not managed properly, this can lead to an excessive consumption of memory, causing other applications to slow down or crash. ","date":"08-12-2020","objectID":"/posts/development/wsl2-ram-overload-om-host-draftmd/:1:0","tags":null,"title":"Wsl2 Ram Overload Om Host (draft)","uri":"/posts/development/wsl2-ram-overload-om-host-draftmd/#understanding-the-problem"},{"categories":["Development"],"collections":null,"content":"Editing .wslconfig To address the issue of RAM overload, you can configure WSL 2 to limit the amount of memory and processor resources it can use. This is achieved by editing the .wslconfig configuration file, which holds global settings for the WSL 2 virtual machine. (Note that /etc/wsl.conf inside a distribution holds per-distribution settings; the [wsl2] resource limits below do not belong there.) Here\u0026rsquo;s how you can proceed: Locate .wslconfig: This configuration file lives at %UserProfile%\\.wslconfig on the Windows host; create it if it does not exist. Open .wslconfig: You can open it with any Windows text editor, for example from PowerShell: notepad $env:USERPROFILE\\.wslconfig Add Resource Limits: Within the .wslconfig file, you can set resource limits by specifying the maximum amount of memory and CPU resources that WSL 2 can utilize. 
Use the following syntax: [wsl2] memory=2GB # Limit the memory to 2GB processors=2 # Limit to 2 virtual processors You can adjust the values according to your system\u0026rsquo;s capabilities and requirements. Save and Exit: After making the necessary changes, save the file and exit the text editor. Restart WSL 2: To apply the changes, you need to restart your WSL 2 instance. You can do this by opening a PowerShell window as an administrator and running the following command: wsl --shutdown Afterward, start your WSL distribution again. ","date":"08-12-2020","objectID":"/posts/development/wsl2-ram-overload-om-host-draftmd/:2:0","tags":null,"title":"Wsl2 Ram Overload Om Host (draft)","uri":"/posts/development/wsl2-ram-overload-om-host-draftmd/#editing-wslconf"},{"categories":["Development"],"collections":null,"content":"Conclusion Dealing with RAM overload in WSL 2 on the host machine can greatly enhance your overall system performance and stability. By editing the .wslconfig configuration file and setting memory and processor limits, you can ensure that WSL 2 doesn\u0026rsquo;t consume excessive resources. This allows you to enjoy the benefits of WSL 2 without compromising the performance of other applications on your host system. ","date":"08-12-2020","objectID":"/posts/development/wsl2-ram-overload-om-host-draftmd/:3:0","tags":null,"title":"Wsl2 Ram Overload Om Host (draft)","uri":"/posts/development/wsl2-ram-overload-om-host-draftmd/#conclusion"},{"categories":["Development"],"collections":null,"content":"Below is a set of Bash commands for managing processes in a Linux environment. Here\u0026rsquo;s an explanation of what each command does: Kill Child Process and Parent Process: pid=17844 \u0026amp;\u0026amp; pkill -TERM -P $pid \u0026amp;\u0026amp; kill $pid This command first assigns the value 17844 to the variable pid. 
Then, it uses pkill with the -TERM option to send a termination signal to all processes with a parent process ID (PPID) equal to the value stored in pid. Finally, it uses the kill command to send a termination signal to the process with the PID stored in pid. Print PID from Command: ps aux | grep \u0026#34;/root/bin/ssh-port-forward 45.77.47.134 12006\u0026#34; | grep -v grep | awk \u0026#39;{print $2}\u0026#39; This command lists all processes using ps aux, searches for lines containing \u0026ldquo;/root/bin/ssh-port-forward 45.77.47.134 12006\u0026rdquo; using grep, excludes the grep command itself using grep -v grep, and then uses awk to print the second column of the output, which is the PID of the matching process. Kill Child Process and Parent Process By Command: PC=\u0026#34;/root/bin/ssh-port-forward 45.77.47.134 12006\u0026#34; \u0026amp;\u0026amp; PID=$(ps aux | grep \u0026#34;$PC\u0026#34; | grep -v grep | awk \u0026#39;{print $2}\u0026#39;) \u0026amp;\u0026amp; pkill -TERM -P $PID \u0026amp;\u0026amp; kill $PID This command first assigns the command string \u0026ldquo;/root/bin/ssh-port-forward 45.77.47.134 12006\u0026rdquo; to the variable PC. Then, it uses a combination of commands to find the PID of the process that matches the command stored in PC, and it sends a termination signal to both the child and parent processes. Kill and Run Background Process: PC=\u0026#34;/root/bin/ssh-port-forward 45.77.47.134 12006\u0026#34; \u0026amp;\u0026amp; PID=$(ps aux | grep \u0026#34;$PC\u0026#34; | grep -v grep | awk \u0026#39;{print $2}\u0026#39;) \u0026amp;\u0026amp; pkill -TERM -P $PID ; kill $PID ; $PC \u0026amp; This command is similar to the previous one but with an additional step. It first finds the PID of the process matching the command in PC, sends termination signals to both the child and parent processes, and then restarts the command in the background using \u0026amp;. 
This effectively stops the existing process and starts a new one in the background. Please note that working with process management in this way can be risky, especially if you are forcefully terminating processes. Be cautious when using these commands, as they can lead to data loss or other unexpected behavior if not used carefully. ","date":"08-12-2020","objectID":"/posts/development/kill-child-process-and-parent-process-bash-linux/:0:0","tags":null,"title":"Kill Child Process and Parent Process Bash Linux","uri":"/posts/development/kill-child-process-and-parent-process-bash-linux/#"},{"categories":["Development"],"collections":null,"content":"Indentation settings are crucial for maintaining consistent and readable code. In Vim, you can easily modify indentation preferences using various commands. Here\u0026rsquo;s a breakdown of your provided commands and their effects: ","date":"08-12-2020","objectID":"/posts/development/modifying-indentation-in-vim/:0:0","tags":null,"title":"Modifying Indentation in Vim","uri":"/posts/development/modifying-indentation-in-vim/#"},{"categories":["Development"],"collections":null,"content":"Converting Tabs to Spaces To replace tabs with spaces, follow these steps: Set the tab width and shift width to 2 spaces: :set tabstop=2 :set shiftwidth=2 Enable the expandtab option to replace tabs with spaces: :set expandtab Perform a retab to apply the changes: :retab ","date":"08-12-2020","objectID":"/posts/development/modifying-indentation-in-vim/:0:1","tags":null,"title":"Modifying Indentation in Vim","uri":"/posts/development/modifying-indentation-in-vim/#converting-tabs-to-spaces"},{"categories":["Development"],"collections":null,"content":"Converting Spaces to Tabs To change spaces to tabs, adhere to these instructions: Adjust the tabstop to 8 to match your desired tab width: :set tabstop=8 Disable expandtab to ensure tabs are used instead of spaces: :set noexpandtab Use the %retab! 
command to reformat the entire file using tabs: :%retab! ","date":"08-12-2020","objectID":"/posts/development/modifying-indentation-in-vim/:0:2","tags":null,"title":"Modifying Indentation in Vim","uri":"/posts/development/modifying-indentation-in-vim/#converting-spaces-to-tabs"},{"categories":["Development"],"collections":null,"content":"Displaying Tab Characters and End of Line For better visualization of tab characters and end-of-line markers, employ the following configuration: Create a normal-mode, non-recursive mapping (nnoremap) to toggle the display preferences: :nnoremap \u0026lt;leader\u0026gt;l :setlocal lcs=tab:\u0026gt;-,trail:-,eol:$ list! list?\u0026lt;CR\u0026gt; Here, \u0026lt;leader\u0026gt; is a placeholder for your leader key (usually backslash), and \u0026lt;CR\u0026gt; represents the Enter key. After setting up this mapping, you can press \u0026lt;leader\u0026gt;l in normal mode to toggle the display of tab characters, trailing spaces, and end-of-line markers in the current buffer. Remember that you can customize these commands to match your preferred coding style. With these Vim commands, you\u0026rsquo;ll be able to efficiently manage indentation settings and maintain code consistency. ","date":"08-12-2020","objectID":"/posts/development/modifying-indentation-in-vim/:0:3","tags":null,"title":"Modifying Indentation in Vim","uri":"/posts/development/modifying-indentation-in-vim/#displaying-tab-characters-and-end-of-line"},{"categories":["Development"],"collections":null,"content":"In Linux, you can manage processes in the background using various commands and keyboard shortcuts. This article will walk you through how to list, stop, start, and bring background processes to the foreground, as well as how to kill running processes. 
","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:0:0","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#"},{"categories":["Development"],"collections":null,"content":"Listing Processes To list the processes running on your system, you can use the ps command. Here\u0026rsquo;s the basic syntax: ps This will display a list of processes along with their respective process IDs (PIDs), terminal IDs, and other information. ","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:1:0","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#listing-processes"},{"categories":["Development"],"collections":null,"content":"Managing Background Processes ","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:2:0","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#managing-background-processes"},{"categories":["Development"],"collections":null,"content":"Starting a Background Process To start a process in the background, you can simply append an ampersand (\u0026amp;) to the command. 
For example, to run a script called myscript.sh in the background: ./myscript.sh \u0026amp; ","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:2:1","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#starting-a-background-process"},{"categories":["Development"],"collections":null,"content":"Viewing Background Jobs You can view the currently running background jobs using the jobs command: jobs This will display a list of background jobs along with their job numbers. ","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:2:2","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#viewing-background-jobs"},{"categories":["Development"],"collections":null,"content":"Stopping a Process You can stop a running process and move it to the background by pressing Ctrl + Z. This will suspend the process and give you back control of the terminal. ","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:2:3","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#stopping-a-process"},{"categories":["Development"],"collections":null,"content":"Resuming a Background Process To resume a background process and bring it to the foreground, you can use the fg command followed by the job number. 
For example, to bring job number 1 to the foreground: fg %1 ","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:2:4","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#resuming-a-background-process"},{"categories":["Development"],"collections":null,"content":"Backgrounding a Suspended Process If you have a suspended process and want to send it to the background, you can use the bg command followed by the job number. For example, to background job number 1: bg %1 ","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:2:5","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#backgrounding-a-suspended-process"},{"categories":["Development"],"collections":null,"content":"Killing a Process To terminate a running process, you can use the kill command followed by the PID of the process you want to kill. For example, to kill a process with PID 12345: kill 12345 If you have a background job and want to kill it, you can use the kill command with the job number. For example, to kill job number 2: kill %2 Remember that the kill command sends a signal to the process, and different signals can have different effects on the process. The default signal sent by kill is SIGTERM, which asks the process to terminate gracefully. If a process doesn\u0026rsquo;t respond to SIGTERM, you can send a more forceful SIGKILL signal: kill -9 12345 # Sending SIGKILL to process with PID 12345 These are the basic commands and shortcuts for managing background processes in a Linux Bash terminal. With these, you can effectively list, control, and manipulate processes to suit your needs. 
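Job specs such as %1 are normally an interactive feature, but they also work in scripts once job control is enabled with set -m; a minimal round trip through the commands above:

```shell
# Enable job control so %N job specs can be used in a script.
set -m
sleep 60 &                 # start a background job ([1])
PID=$!                     # PID of the most recent background job
jobs                       # list background jobs
kill %1                    # terminate job 1 (equivalent to: kill "$PID")
wait "$PID" 2>/dev/null || true   # reap the terminated job
```

Here kill sends the default SIGTERM; swap in kill -9 %1 only when a job ignores the polite request.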
","date":"08-12-2020","objectID":"/posts/development/working-with-background-processes-in-linux-bash/:2:6","tags":null,"title":"Working with Background Processes in Linux Bash","uri":"/posts/development/working-with-background-processes-in-linux-bash/#killing-a-process"},{"categories":["Development"],"collections":null,"content":"When working in a Unix-like environment, timing the execution of shell commands is a useful way to measure the performance of various operations. Bash provides a couple of approaches for timing shell commands: using the /usr/bin/time command or setting the TIMEFORMAT variable. ","date":"04-12-2020","objectID":"/posts/development/timing-shell-commands-in-bash/:0:0","tags":null,"title":"Timing Shell Commands in Bash","uri":"/posts/development/timing-shell-commands-in-bash/#"},{"categories":["Development"],"collections":null,"content":"Using /usr/bin/time Command The /usr/bin/time command is a versatile utility that can provide information about the resources used by a process, including the execution time. To time a command using /usr/bin/time, you can use the following syntax: /usr/bin/time -f \u0026#39;%E\u0026#39; \u0026lt;command\u0026gt; In this case, \u0026lt;command\u0026gt; should be replaced with the actual command you want to time. For example, to time the execution of the sleep command for 5 seconds, you would run: /usr/bin/time -f \u0026#39;%E\u0026#39; sleep 5 The -f '%E' flag specifies the format in which the output should be displayed. %E represents the elapsed time in a human-readable format (hours:minutes:seconds). 
","date":"04-12-2020","objectID":"/posts/development/timing-shell-commands-in-bash/:0:1","tags":null,"title":"Timing Shell Commands in Bash","uri":"/posts/development/timing-shell-commands-in-bash/#using-usrbintime-command"},{"categories":["Development"],"collections":null,"content":"Using TIMEFORMAT Variable Bash also provides the TIMEFORMAT variable, which you can use to customize the output format when timing commands. The syntax is as follows: TIMEFORMAT=\u0026#39;\u0026lt;format\u0026gt;\u0026#39; time \u0026lt;command\u0026gt; Here, \u0026lt;format\u0026gt; should be replaced with the desired output format, and \u0026lt;command\u0026gt; should be replaced with the command you want to time. For instance, to time the sleep command for 5 seconds and display a custom message along with the execution time, you could use: TIMEFORMAT=\u0026#39;It takes %R seconds to complete this task...\u0026#39; time sleep 5 In this example, %R is a placeholder that will be replaced with the actual execution time in seconds. ","date":"04-12-2020","objectID":"/posts/development/timing-shell-commands-in-bash/:0:2","tags":null,"title":"Timing Shell Commands in Bash","uri":"/posts/development/timing-shell-commands-in-bash/#using-timeformat-variable"},{"categories":["Development"],"collections":null,"content":"Conclusion Both /usr/bin/time and the TIMEFORMAT variable offer convenient ways to measure the execution time of shell commands. Choose the method that best suits your needs and preferences. Keep in mind that these approaches are particularly useful for benchmarking and performance analysis when dealing with various tasks in a Unix-like environment. 
","date":"04-12-2020","objectID":"/posts/development/timing-shell-commands-in-bash/:0:3","tags":null,"title":"Timing Shell Commands in Bash","uri":"/posts/development/timing-shell-commands-in-bash/#conclusion"},{"categories":["Development"],"collections":null,"content":"To configure unattended upgrades for Docker on a Debian-based system, you can add the following line to the 50unattended-upgrades file. This will ensure that Docker packages are automatically updated when unattended-upgrades runs. Here are the steps to do this: Open a terminal on your Debian-based system. Use a text editor to open the 50unattended-upgrades file for editing. You\u0026rsquo;ll typically find this file in the /etc/apt/apt.conf.d/ directory. sudo nano /etc/apt/apt.conf.d/50unattended-upgrades Add the following line to the file, which specifies that Docker packages should be automatically upgraded: Unattended-Upgrade::Allowed-Origins { \u0026#34;${distro_id}:${distro_codename}-security\u0026#34;; \u0026#34;${distro_id}:${distro_codename}-updates\u0026#34;; \u0026#34;Docker:${distro_codename}\u0026#34;; }; This configuration tells unattended-upgrades to include the Docker repository for automatic updates, in addition to the security and regular updates repositories for your distribution. Save the file by pressing Ctrl + O, then press Enter. Exit the text editor by pressing Ctrl + X. To make sure your changes take effect, run the following command: sudo unattended-upgrades --dry-run --debug This command will simulate an unattended upgrade run and display any errors or issues it encounters. Ensure there are no errors related to your Docker configuration. Now, unattended-upgrades will automatically include Docker packages in its upgrade process based on your specified configuration. Remember that unattended upgrades can automatically update packages, including Docker, without requiring manual intervention. 
Make sure this aligns with your system\u0026rsquo;s update policy and that you have backups and a way to monitor the system in case issues arise from automatic updates. ","date":"03-12-2020","objectID":"/posts/development/unanttended-upgrade-docker/:0:0","tags":null,"title":"Unanttended Upgrade Docker","uri":"/posts/development/unanttended-upgrade-docker/#"},{"categories":["Development"],"collections":null,"content":"When you install and remove packages on a Debian-based Linux distribution like Debian or Ubuntu, sometimes you end up with orphaned packages - packages that were installed as dependencies for other software but are no longer needed. These orphaned packages can take up disk space and clutter your system. One tool that can help you identify and remove these orphaned packages is Deborphan. ","date":"02-12-2020","objectID":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/:0:0","tags":null,"title":"Removing Orphaned Packages on Debian/Ubuntu with Deborphan","uri":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/#"},{"categories":["Development"],"collections":null,"content":"What is Deborphan? Deborphan is a command-line tool for identifying and removing orphaned packages on Debian-based systems. It analyzes the package dependencies and checks which packages are no longer required by any other installed packages. Once it identifies these orphaned packages, you can choose to remove them, freeing up disk space and keeping your system clean. ","date":"02-12-2020","objectID":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/:1:0","tags":null,"title":"Removing Orphaned Packages on Debian/Ubuntu with Deborphan","uri":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/#what-is-deborphan"},{"categories":["Development"],"collections":null,"content":"Installing Deborphan Before you can use Deborphan, you need to install it. 
You can do this using the package manager on your system. Open a terminal and run: sudo apt-get update sudo apt-get install deborphan ","date":"02-12-2020","objectID":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/:2:0","tags":null,"title":"Removing Orphaned Packages on Debian/Ubuntu with Deborphan","uri":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/#installing-deborphan"},{"categories":["Development"],"collections":null,"content":"Using Deborphan Once Deborphan is installed, you can start using it to identify and remove orphaned packages. Here are the basic commands: Identify Orphaned Packages: To identify orphaned packages on your system, open a terminal and run: sudo deborphan Deborphan will list all the orphaned packages it finds. Remove Orphaned Packages: To remove the orphaned packages found by Deborphan, you can use the apt-get command. Be careful with this step, as it will permanently remove these packages from your system: sudo apt-get remove --purge $(deborphan) This command will remove all the orphaned packages listed by Deborphan. Remove Configuration Files (optional): If you want to remove the configuration files associated with the removed packages, you can run the following command: sudo apt-get purge $(deborphan --showconfig) This will remove the configuration files for the orphaned packages. ","date":"02-12-2020","objectID":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/:3:0","tags":null,"title":"Removing Orphaned Packages on Debian/Ubuntu with Deborphan","uri":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/#using-deborphan"},{"categories":["Development"],"collections":null,"content":"Cleaning Up After running the above commands, your system should be free of orphaned packages, and you should have reclaimed some disk space. It\u0026rsquo;s a good practice to periodically run Deborphan to keep your system clean and efficient. 
Remember to exercise caution when removing packages, especially if you\u0026rsquo;re not sure what a package does. Always review the list of packages that Deborphan suggests for removal to ensure that you\u0026rsquo;re not removing anything critical to your system\u0026rsquo;s functionality. Deborphan is a handy tool for maintaining a clean and tidy Debian-based Linux system, and it can help you keep your system running smoothly. ","date":"02-12-2020","objectID":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/:4:0","tags":null,"title":"Removing Orphaned Packages on Debian/Ubuntu with Deborphan","uri":"/posts/development/removing-orphaned-packages-on-debianubuntu-with-deborphan/#cleaning-up"},{"categories":["Development"],"collections":null,"content":"If you want to set the default user in Windows Subsystem for Linux (WSL) to \u0026ldquo;root\u0026rdquo; instead of your regular user, you can achieve this using PowerShell or by modifying the WSL configuration file. Please note that running WSL as the root user is generally not recommended for security reasons, as it can expose your system to potential risks. Proceed with caution and only if you have a valid reason for doing so. Here\u0026rsquo;s how you can enable the root user as the default user in WSL using PowerShell and by modifying the configuration file: Using PowerShell: Open PowerShell: Press Win + X and select \u0026ldquo;Windows PowerShell\u0026rdquo; or \u0026ldquo;Windows PowerShell (Admin)\u0026rdquo;. Set Root as Default User: Run the following command to set the default user to root for a specific WSL distribution (replace \u0026ldquo;ubuntu2004\u0026rdquo; with the actual name of your distribution\u0026rsquo;s launcher executable): ubuntu2004 config --default-user root Start WSL: Launch your desired WSL distribution. It should now start with the root user as default. 
Using WSL Configuration File: Locate Configuration File: The WSL configuration file can be found at C:\\Users\\\u0026lt;YourUsername\u0026gt;\\AppData\\Local\\Packages\\\u0026lt;DistroPackageName\u0026gt;\\LocalState\\rootfs\\etc\\wsl.conf. Replace \u0026lt;YourUsername\u0026gt; with your Windows username and \u0026lt;DistroPackageName\u0026gt; with the package name of your WSL distribution (e.g., for Ubuntu 20.04, the package name might include \u0026ldquo;CanonicalGroupLimited.Ubuntu20.04onWindows\u0026rdquo;). Edit Configuration File: Open the wsl.conf file in a text editor (e.g., Notepad). Add Default User Setting: Add the following lines to the wsl.conf file to set the default user as root for that distribution: [user] default=root Save the File: Save the changes and close the text editor. Restart WSL: Restart your computer or execute the following command in PowerShell to terminate all running WSL instances: wsl --shutdown After completing these steps, your chosen WSL distribution should start with the root user as the default user. Remember that running WSL as the root user might lead to unintended consequences, and it\u0026rsquo;s important to exercise caution and be aware of the potential security risks involved. ","date":"24-11-2020","objectID":"/posts/development/enabling-root-as-default-user-in-wsl-on-windows-10/:0:0","tags":null,"title":"Enabling Root as Default User in WSL on Windows 10","uri":"/posts/development/enabling-root-as-default-user-in-wsl-on-windows-10/#"},{"categories":["Development"],"collections":null,"content":"In Git, patches are a way to capture and apply changes made to a codebase. They can be useful for sharing changes between developers or for applying changes across different branches. In this article, we\u0026rsquo;ll explore how to create and apply patches in Git. 
","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:0:0","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#"},{"categories":["Development"],"collections":null,"content":"Creating Patches ","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:1:0","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#creating-patches"},{"categories":["Development"],"collections":null,"content":"Patching Non-Staged Files To create a patch for changes that have not been staged yet, you can use the following command: git diff \u0026gt; file.patch This command generates a patch named file.patch containing the differences between your working directory and the last commit. ","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:1:1","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#patching-non-staged-files"},{"categories":["Development"],"collections":null,"content":"Patching Staged Files To create a patch for changes that have been staged (added to the index) but not yet committed, you can use the following command: git diff --cached \u0026gt; file.patch This command generates a patch named file.patch containing the differences between the staged changes and the last commit. 
","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:1:2","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#patching-staged-files"},{"categories":["Development"],"collections":null,"content":"Patching Staged Binary Files If you have binary files staged and you want to create a patch for them, you can use the following command: git diff --cached --binary \u0026gt; file.patch This command generates a binary patch named file.patch for staged binary files. ","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:1:3","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#patching-staged-binary-files"},{"categories":["Development"],"collections":null,"content":"Applying Patches Once you have a patch file, you can apply it to a codebase using the git apply command. Here\u0026rsquo;s how: git apply file.patch This command applies the changes from the file.patch to your working directory. However, it\u0026rsquo;s important to note that git apply only applies the changes to your working directory; the changes won\u0026rsquo;t be staged or committed automatically. ","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:2:0","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#applying-patches"},{"categories":["Development"],"collections":null,"content":"Using Patches for Collaboration Patches can be a useful way to collaborate with other developers who might not have direct access to your codebase. You can share the patch file with them, and they can apply it to their own repositories using git apply. 
Keep in mind that patches might not always apply cleanly, especially if there have been other changes to the codebase in the meantime. ","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:3:0","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#using-patches-for-collaboration"},{"categories":["Development"],"collections":null,"content":"Creating Patch Files with Commit Information By default, the patch files created using the above commands include only the diff information without any context about the commit. If you want to include commit information in the patch, you can use the git format-patch command. This command generates patch files for each commit in a specified range, including commit messages and author information. git format-patch origin/master..my-branch Replace origin/master with the starting commit and my-branch with the ending commit or branch name. ","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:4:0","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#creating-patch-files-with-commit-information"},{"categories":["Development"],"collections":null,"content":"Conclusion Patches are a powerful way to capture and share changes in a Git repository. They allow you to apply changes between different branches, share work with others, and keep a record of modifications. By using the commands mentioned in this article, you can easily create and apply patches in your Git workflow. Just remember that patches might not always apply seamlessly, so it\u0026rsquo;s important to review and resolve any conflicts that arise during the patching process. 
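The create-and-apply cycle described above can be tried end to end in a throwaway repository; the paths and file names in this sketch are illustrative:

```shell
# Illustrative round trip: capture an unstaged change as a patch,
# discard the change, then restore it with git apply.
rm -rf /tmp/patch-demo && mkdir /tmp/patch-demo && cd /tmp/patch-demo
git init -q
git config user.email you@example.com
git config user.name 'You'
echo 'hello' > file.txt
git add file.txt
git commit -qm 'initial commit'
echo 'world' >> file.txt     # an unstaged change
git diff > file.patch        # capture it as a patch
git checkout -- file.txt     # throw the change away
git apply file.patch         # bring it back from the patch
cat file.txt                 # shows hello, then world
```

Note that git apply leaves the restored change unstaged, matching the behavior described above.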
","date":"19-11-2020","objectID":"/posts/development/git-patch-applying-and-creating-patches-in-git/:5:0","tags":null,"title":"Git Patch Applying and Creating Patches in Git","uri":"/posts/development/git-patch-applying-and-creating-patches-in-git/#conclusion"},{"categories":["Development"],"collections":null,"content":"Vim is a powerful text editor that offers various functionalities for searching and replacing strings within your text. Here are different ways to perform string replacements using Vim\u0026rsquo;s command mode: ","date":"19-11-2020","objectID":"/posts/development/vim-search-and-replace-string/:0:0","tags":null,"title":"Vim Search and Replace String","uri":"/posts/development/vim-search-and-replace-string/#"},{"categories":["Development"],"collections":null,"content":"Replace String on Selected Line To replace a string on the currently selected line, you can use the :s command. For instance, to replace all occurrences of \u0026ldquo;foo\u0026rdquo; with \u0026ldquo;bar\u0026rdquo; on the selected line, you can use the following command: :s/foo/bar/g This will replace all occurrences of \u0026ldquo;foo\u0026rdquo; with \u0026ldquo;bar\u0026rdquo; on the selected line. ","date":"19-11-2020","objectID":"/posts/development/vim-search-and-replace-string/:0:1","tags":null,"title":"Vim Search and Replace String","uri":"/posts/development/vim-search-and-replace-string/#replace-string-on-selected-line"},{"categories":["Development"],"collections":null,"content":"Replace String on All Lines To replace a string on all lines in the file, you can use the % symbol with the :s command. For example, to replace all occurrences of \u0026ldquo;foo\u0026rdquo; with \u0026ldquo;bar\u0026rdquo; on all lines, you can use the following command: :%s/foo/bar/g This command will replace \u0026ldquo;foo\u0026rdquo; with \u0026ldquo;bar\u0026rdquo; on every line in the file. 
","date":"19-11-2020","objectID":"/posts/development/vim-search-and-replace-string/:0:2","tags":null,"title":"Vim Search and Replace String","uri":"/posts/development/vim-search-and-replace-string/#replace-string-on-all-lines"},{"categories":["Development"],"collections":null,"content":"Replace String on All Lines With Prompt If you want to be prompted for confirmation before each replacement on all lines, you can add the c flag to the :s command. This will make Vim ask you whether to replace the string on each occurrence. Here\u0026rsquo;s how you can do it: :%s/foo/bar/gc When you execute this command, Vim will display each occurrence of \u0026ldquo;foo\u0026rdquo; on every line and ask if you want to replace it with \u0026ldquo;bar\u0026rdquo;. You can choose \u0026lsquo;y\u0026rsquo; to replace, \u0026rsquo;n\u0026rsquo; to skip, \u0026lsquo;a\u0026rsquo; to replace all remaining occurrences, or \u0026lsquo;q\u0026rsquo; to quit the replacement process. Remember that Vim has various modes (normal, insert, visual, etc.), and these commands are used in the normal mode. Make sure you are in the appropriate mode before using these commands. Keep in mind that Vim has a learning curve, so it might take some time to get used to its features. ","date":"19-11-2020","objectID":"/posts/development/vim-search-and-replace-string/:0:3","tags":null,"title":"Vim Search and Replace String","uri":"/posts/development/vim-search-and-replace-string/#replace-string-on-all-lines-with-prompt"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the process of setting up your own Apple Notes server using Docker and the tvial/docker-mailserver image. Please note that this setup will only provide an IMAP server, which can be used with Apple Notes for syncing your notes across devices. Let\u0026rsquo;s get started! 
","date":"10-10-2020","objectID":"/posts/development/setting-up-your-own-apple-notes-server/:0:0","tags":null,"title":"Setting Up Your Own Apple Notes Server","uri":"/posts/development/setting-up-your-own-apple-notes-server/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following prerequisites: A server or a cloud-based virtual machine running a Linux distribution (e.g., Ubuntu, Debian). Docker and Docker Compose installed on your server. You can install Docker by following the official documentation. ","date":"10-10-2020","objectID":"/posts/development/setting-up-your-own-apple-notes-server/:1:0","tags":null,"title":"Setting Up Your Own Apple Notes Server","uri":"/posts/development/setting-up-your-own-apple-notes-server/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Prepare Your Server Log in to your server as a user with sudo privileges. ","date":"10-10-2020","objectID":"/posts/development/setting-up-your-own-apple-notes-server/:2:0","tags":null,"title":"Setting Up Your Own Apple Notes Server","uri":"/posts/development/setting-up-your-own-apple-notes-server/#step-1-prepare-your-server"},{"categories":["Development"],"collections":null,"content":"Step 2: Create a Docker Compose File Create a directory for your Docker Compose files and navigate to it: mkdir docker-notes-server cd docker-notes-server Now, create a docker-compose.yml file using your favorite text editor (e.g., nano or vim): ```yaml version: \u0026#39;3\u0026#39; services: imap-server: image: tvial/docker-mailserver:latest container_name: imap-server environment: - SSL_TYPE=none - DISABLE_CLAMAV=y - DISABLE_SPAMASSASSIN=y volumes: - /data/mail:/var/mail - /data/state:/var/mail-state - /data/overrides:/etc/mail/overrides ports: - \u0026#34;143:143\u0026#34; - \u0026#34;993:993\u0026#34; restart: always This Compose file sets up an IMAP server using the `tvial/docker-mailserver` image. 
It disables SSL, ClamAV, and SpamAssassin for simplicity, but you can enable these features if desired. Make sure to customize the volume paths according to your server\u0026#39;s configuration. Step 3: Create Data Directories Create the directories on your server to store the mail data and state: mkdir -p /data/mail mkdir -p /data/state mkdir -p /data/overrides","date":"10-10-2020","objectID":"/posts/development/setting-up-your-own-apple-notes-server/:3:0","tags":null,"title":"Setting Up Your Own Apple Notes Server","uri":"/posts/development/setting-up-your-own-apple-notes-server/#step-2-create-a-docker-compose-file"},{"categories":["Development"],"collections":null,"content":"Step 4: Start the IMAP Server Now, you can start the Apple Notes IMAP server using Docker Compose: docker-compose up -d The -d flag runs the containers in the background. ","date":"10-10-2020","objectID":"/posts/development/setting-up-your-own-apple-notes-server/:4:0","tags":null,"title":"Setting Up Your Own Apple Notes Server","uri":"/posts/development/setting-up-your-own-apple-notes-server/#step-4-start-the-imap-server"},{"categories":["Development"],"collections":null,"content":"Step 5: Configure Apple Notes With the IMAP server up and running, you can configure your Apple Notes app to use it for syncing your notes: Open the \u0026ldquo;Settings\u0026rdquo; app on your iOS or macOS device. Scroll down and select \u0026ldquo;Notes.\u0026rdquo; Under the \u0026ldquo;Accounts\u0026rdquo; section, tap \u0026ldquo;Add Account.\u0026rdquo; Choose \u0026ldquo;Other Account\u0026rdquo; and then select \u0026ldquo;Add Mail Account.\u0026rdquo; Enter your Name, Email (use the username you created on the server), Password, and Description. Tap \u0026ldquo;Next\u0026rdquo; and wait for your device to verify the account settings. Once verified, you can choose to sync Notes. Ensure that the Notes option is enabled. Tap \u0026ldquo;Save\u0026rdquo; to finish the setup. 
Your Apple Notes should now sync with your own IMAP server. That\u0026rsquo;s it! You have successfully set up your own Apple Notes server using Docker and tvial/docker-mailserver. You can now enjoy syncing your notes securely across your Apple devices. ","date":"10-10-2020","objectID":"/posts/development/setting-up-your-own-apple-notes-server/:5:0","tags":null,"title":"Setting Up Your Own Apple Notes Server","uri":"/posts/development/setting-up-your-own-apple-notes-server/#step-5-configure-apple-notes"},{"categories":["Software"],"collections":null,"content":"One common issue that users encounter when using the OceanWP theme in Wordpress is the visibility of the focus border after clicking on a link. This focus border can be distracting and interfere with the overall design of the website. Fortunately, this problem can be easily resolved with a simple CSS tweak. ","date":"06-10-2020","objectID":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/:0:0","tags":["wordpress"],"title":"Fix Wordpress OceanWP Focus Border Visible After Click","uri":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/#"},{"categories":["Software"],"collections":null,"content":"The Problem By default, when you click on a link in the OceanWP theme, a focus border is applied to indicate that the link is currently active. However, this focus border can remain visible even after the click, which is not desired for most websites. 
","date":"06-10-2020","objectID":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/:1:0","tags":["wordpress"],"title":"Fix Wordpress OceanWP Focus Border Visible After Click","uri":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/#the-problem"},{"categories":["Software"],"collections":null,"content":"The Solution To fix this issue and ensure that the focus border disappears after clicking on a link, you can use the following CSS code snippet: a:focus { outline: none !important; } By applying this code, you remove the outline property from the link when it receives focus, effectively hiding the focus border. ","date":"06-10-2020","objectID":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/:2:0","tags":["wordpress"],"title":"Fix Wordpress OceanWP Focus Border Visible After Click","uri":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/#the-solution"},{"categories":["Software"],"collections":null,"content":"Implementation To implement this solution, follow these steps: Log in to your Wordpress admin panel. Navigate to the \u0026ldquo;Appearance\u0026rdquo; section and click on \u0026ldquo;Customize\u0026rdquo; to open the theme customization options. In the customization sidebar, locate and click on \u0026ldquo;Additional CSS\u0026rdquo; to open the CSS editor. Copy the CSS code snippet provided above and paste it into the CSS editor. Click on the \u0026ldquo;Publish\u0026rdquo; button to save your changes. Once you have completed these steps, the focus border will no longer be visible after clicking on a link within your OceanWP-powered Wordpress website. 
","date":"06-10-2020","objectID":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/:3:0","tags":["wordpress"],"title":"Fix Wordpress OceanWP Focus Border Visible After Click","uri":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/#implementation"},{"categories":["Software"],"collections":null,"content":"Conclusion By applying the CSS code snippet mentioned above, you can easily resolve the issue of the focus border remaining visible after clicking on a link in the OceanWP theme. This simple solution enhances the user experience and ensures a cleaner design for your website. ","date":"06-10-2020","objectID":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/:4:0","tags":["wordpress"],"title":"Fix Wordpress OceanWP Focus Border Visible After Click","uri":"/posts/software/fix-wordpress-oceanwp-focus-border-visible-after-click/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"Fail2ban is a popular intrusion prevention tool designed to protect servers from brute-force attacks and other malicious activities by monitoring log files and taking proactive measures to block suspicious IP addresses. When running Fail2ban inside a Docker container, there are some additional considerations to ensure proper functionality. In this article, we will explore how to configure and run Fail2ban inside a Docker container, including the need for the NET_ADMIN capability. ","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:0:0","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#"},{"categories":["DevOps"],"collections":null,"content":"Prerequisites Before proceeding, ensure you have the following: Docker installed on your host machine. Basic knowledge of Docker and its concepts. 
","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:1:0","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#prerequisites"},{"categories":["DevOps"],"collections":null,"content":"Docker Setup To run Fail2ban inside a Docker container with the necessary NET_ADMIN capability, follow these steps: ","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:2:0","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#docker-setup"},{"categories":["DevOps"],"collections":null,"content":"1. Create the Fail2ban Configuration Create a directory on your host machine to store the Fail2ban configuration files. Inside this directory, create the necessary configuration files, such as jail.local and jail.d/your-custom-jail.local to define your desired rules and actions. Sample jail.local: [DEFAULT] bantime = 3600 findtime = 600 maxretry = 5 [sshd] enabled = true port = ssh logpath = %(sshd_log)s backend = %(sshd_backend)s ","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:2:1","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#1-create-the-fail2ban-configuration"},{"categories":["DevOps"],"collections":null,"content":"2. Create Dockerfile Next, create a Dockerfile in the same directory as your Fail2ban configuration files. This file will be used to build the Fail2ban Docker image with the required configurations. 
# Use an official Fail2ban base image FROM fail2ban/fail2ban:latest # Copy the Fail2ban configurations to the container COPY jail.local /etc/fail2ban/ COPY jail.d/your-custom-jail.local /etc/fail2ban/jail.d/ ","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:2:2","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#2-create-dockerfile"},{"categories":["DevOps"],"collections":null,"content":"3. Build the Docker Image Open a terminal, navigate to the directory containing your Dockerfile, and build the Docker image using the following command: docker build -t my_fail2ban_image . ","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:2:3","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#3-build-the-docker-image"},{"categories":["DevOps"],"collections":null,"content":"4. Run the Fail2ban Container Now that you have the Docker image, it\u0026rsquo;s time to run the Fail2ban container. When running Fail2ban inside Docker, we need to grant the container the NET_ADMIN capability. This capability allows the container to manipulate network settings and iptables, which is necessary for Fail2ban to function effectively. docker run -d \\ --name my_fail2ban_container \\ --cap-add=NET_ADMIN \\ -v /path/to/your/fail2ban/configs:/etc/fail2ban \\ my_fail2ban_image The --name flag sets a custom name for the container (in this case, my_fail2ban_container). The --cap-add=NET_ADMIN flag grants the NET_ADMIN capability to the container. The -v flag mounts the host directory containing the Fail2ban configurations into the container at the appropriate path. 
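For reference, the same docker run invocation can also be written declaratively; this is a hypothetical docker-compose.yml equivalent, reusing the image name, container name, and volume path from the commands above:

```yaml
services:
  fail2ban:
    image: my_fail2ban_image
    container_name: my_fail2ban_container
    cap_add:
      - NET_ADMIN                 # same effect as --cap-add=NET_ADMIN
    volumes:
      - /path/to/your/fail2ban/configs:/etc/fail2ban
    restart: unless-stopped
```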
","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:2:4","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#4-run-the-fail2ban-container"},{"categories":["DevOps"],"collections":null,"content":"5. Verify Fail2ban Functionality To verify that Fail2ban is running correctly inside the Docker container, you can check its logs: docker logs my_fail2ban_container Additionally, you can access the running container\u0026rsquo;s shell to interact with Fail2ban inside the container: docker exec -it my_fail2ban_container /bin/bash ","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:2:5","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#5-verify-fail2ban-functionality"},{"categories":["DevOps"],"collections":null,"content":"Conclusion By following the steps outlined in this article, you can effectively set up and run Fail2ban inside a Docker container with the necessary NET_ADMIN capability. This ensures that Fail2ban has the required permissions to monitor log files and manipulate network settings, providing an additional layer of security for your server. Remember that Fail2ban is just one component of a comprehensive security strategy. Always keep your software and systems up to date, implement strong authentication measures, and regularly monitor and review your security configurations to maintain a robust and secure environment. 
","date":"05-10-2020","objectID":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/:3:0","tags":["docker"],"title":"Using Fail2ban Inside Docker with NET_ADMIN Capability","uri":"/posts/devops/using-fail2ban-inside-docker-with-net-admin-capability/#conclusion"},{"categories":["DevOps"],"collections":null,"content":"If you encounter the \u0026ldquo;Too many open files in system\u0026rdquo; error in your Apache Docker container, it means that the system has reached the limit on the number of files it can open, and this is causing issues with Apache\u0026rsquo;s configuration. Here\u0026rsquo;s a step-by-step guide on how to resolve this problem. ","date":"29-09-2020","objectID":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/:0:0","tags":["docker","apache"],"title":"Fixing Too many open files in system Error in Apache Docker Container","uri":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/#"},{"categories":["DevOps"],"collections":null,"content":"1. Check the Current File Limit First, you need to check the current file limit on your system to understand the magnitude of the issue. To do this, run the following command: cat /proc/sys/fs/file-max ","date":"29-09-2020","objectID":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/:1:0","tags":["docker","apache"],"title":"Fixing Too many open files in system Error in Apache Docker Container","uri":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/#1-check-the-current-file-limit"},{"categories":["DevOps"],"collections":null,"content":"2. Temporarily Increase File Limit You can temporarily increase the file limit for your Apache Docker container by executing the following command: sysctl -w fs.file-max=500000 This command sets the maximum number of files that the system can open to 500,000. 
This should be enough for most use cases, but you can adjust the value if needed. ","date":"29-09-2020","objectID":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/:2:0","tags":["docker","apache"],"title":"Fixing Too many open files in system Error in Apache Docker Container","uri":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/#2-temporarily-increase-file-limit"},{"categories":["DevOps"],"collections":null,"content":"3. Verify the Changes After executing the above command, you should verify that the changes have taken effect. You can do this by running the cat command again: cat /proc/sys/fs/file-max ","date":"29-09-2020","objectID":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/:3:0","tags":["docker","apache"],"title":"Fixing Too many open files in system Error in Apache Docker Container","uri":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/#3-verify-the-changes"},{"categories":["DevOps"],"collections":null,"content":"4. Make Changes Permanent To make the changes permanent, you need to modify the sysctl configuration file. Here\u0026rsquo;s how you can do it: vi /etc/sysctl.conf Add the following line to the file: fs.file-max=500000 Save and close the file. ","date":"29-09-2020","objectID":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/:4:0","tags":["docker","apache"],"title":"Fixing Too many open files in system Error in Apache Docker Container","uri":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/#4-make-changes-permanent"},{"categories":["DevOps"],"collections":null,"content":"5. Restart the Docker Container Now that you have made the necessary changes, you should restart your Apache Docker container for the changes to take effect. 
You can do this using Docker commands, such as: docker restart \u0026lt;container_name\u0026gt; Replace \u0026lt;container_name\u0026gt; with the actual name or ID of your Apache Docker container. After completing these steps, your Apache Docker container should now be running without the \u0026ldquo;Too many open files in system\u0026rdquo; error, and the file limit should remain increased even after container restarts. Remember that setting the file limit too high may have an impact on system resources, so it\u0026rsquo;s essential to find an appropriate value that suits your specific needs. The value of 500,000 used in this guide is just an example and may not be suitable for all environments. Always monitor your system\u0026rsquo;s resource usage and adjust the file limit accordingly. ","date":"29-09-2020","objectID":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/:5:0","tags":["docker","apache"],"title":"Fixing Too many open files in system Error in Apache Docker Container","uri":"/posts/devops/fixing-too-many-open-files-in-system-error-in-apache-docker-container/#5-restart-the-docker-container"},{"categories":["Software"],"collections":null,"content":"To add a One-Click Chat to Order feature and send checkout details via WhatsApp, including the shipping method, you can make modifications to the \u0026ldquo;wa_button.php\u0026rdquo; file in your WordPress theme or plugin. Here\u0026rsquo;s a step-by-step guide on how to achieve this: Open the \u0026ldquo;wa_button.php\u0026rdquo; file and locate the function where the WhatsApp message is being constructed. It might look something like this: function create_whatsapp_message($order) { // Existing code for creating the WhatsApp message $message = \u0026#34;Hello, thank you for your order!\\r\\n\u0026#34;; // ... // More existing code // ... 
$date = $order-\u0026gt;get_date_created()-\u0026gt;format (\u0026#39;F j, Y - g:i A\u0026#39;); // Add the shipping method information $shipping_method = \u0026#34;*Metode Pengiriman:*\\r\\n\u0026#34;.$order-\u0026gt;get_shipping_method().\u0026#34; \u0026#34;.$order-\u0026gt;get_shipping_total().\u0026#34;\\r\\n\u0026#34;; // Final output of the message $message .= \u0026#34;\\r\\n\u0026#34;.$total_price.\u0026#34;\\r\\n\u0026#34;.$shipping_method.\u0026#34;\\r\\n\u0026#34;.$payment.\u0026#34;\\r\\n*\u0026#34;.$customer.\u0026#34;* \u0026#34;.$address.\u0026#34;\\r\\n\\r\\n\u0026#34;.$thanks_label.\u0026#34;\\r\\n\\r\\n(\u0026#34;.$date.\u0026#34;)\u0026#34;; // Return the message return $message; } Modify the existing code as shown above to include the shipping method information. Save the changes to the \u0026ldquo;wa_button.php\u0026rdquo; file. Once you have made the changes, you need to create a function that will trigger the WhatsApp message when the \u0026ldquo;One-Click Chat to Order\u0026rdquo; button is clicked. This function should call the \u0026ldquo;create_whatsapp_message\u0026rdquo; function and then send the message using WhatsApp. Implement this new function in your WordPress template or plugin where the \u0026ldquo;One-Click Chat to Order\u0026rdquo; button is located. Please note that the above code assumes that you have already retrieved all the necessary variables like $total_price, $payment, $customer, $address, and $thanks_label. Finally, make sure to test the functionality thoroughly to ensure that the WhatsApp message is sent with the correct order details, including the shipping method. 
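The final step, actually opening WhatsApp with the composed text, is usually done through WhatsApp's click-to-chat URL (https://wa.me/{number}?text=...), where the message must be URL-encoded. A sketch of that encoding step in Python, purely for illustration (the phone number is hypothetical; the real plugin builds the message in PHP as shown above):

```python
from urllib.parse import quote

def whatsapp_link(phone, message):
    # Build a WhatsApp click-to-chat URL; the text parameter must be percent-encoded.
    return f'https://wa.me/{phone}?text={quote(message)}'

# hypothetical shop number, and a two-line message like the one wa_button.php assembles
link = whatsapp_link('6281234567890', 'Hello, thank you for your order!\r\nTotal: 100')
print(link)
```

The same encoding matters in PHP: passing the message through rawurlencode() before appending it to the link keeps line breaks and spaces intact on the receiving end.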
","date":"24-09-2020","objectID":"/posts/software/adding-oneclick-chat-to-order-and-whatsapp-checkout-in-wordpress-with-shipping-method/:0:0","tags":["wordpress"],"title":"Adding OneClick Chat to Order and WhatsApp Checkout in WordPress with Shipping Method","uri":"/posts/software/adding-oneclick-chat-to-order-and-whatsapp-checkout-in-wordpress-with-shipping-method/#"},{"categories":["Development"],"collections":null,"content":"In macOS, the system-wide PATH variable is defined by the contents of the /etc/paths file and the files within the /etc/path.d/ directory. These files determine the order and locations where the system looks for executable files when you run a command in the Terminal. Here\u0026rsquo;s a breakdown of each of these components: /etc/paths: This file contains a list of directories, one per line. These directories are automatically included in the system-wide PATH variable. When you open a Terminal window, the shell reads the /etc/paths file and adds the directories listed in it to the beginning of your PATH variable. You can view the contents of this file by using a text editor or by running the following command in the Terminal: cat /etc/paths /etc/path.d/: The /etc/path.d/ directory contains additional files that contribute to the PATH variable. Each file in this directory represents a directory to be added to the PATH, and the files are processed in alphabetical order. The directories listed in these files are appended to the end of the PATH variable. You can see the contents of the /etc/path.d/ directory by running: ls /etc/path.d/ The purpose of these configuration files is to allow system administrators to manage the system-wide PATH variable centrally. It ensures that specific directories are included in the PATH for all users on the system. 
When you install new software or command-line tools, they may add their executable directories to the PATH either by modifying these system files or by providing instructions to users to do so manually in their own user-specific shell configuration files (e.g., ~/.bashrc, ~/.zshrc, etc.). By default, macOS includes directories like /usr/bin, /bin, /usr/sbin, and others in the PATH through these configuration files, making it easy to run system commands and installed software without specifying the full path to the executable. Keep in mind that modifying these system-wide configuration files should be done with caution, as it can affect the behavior of all users on the system. Users can also set their own custom PATH variables in their user-specific shell configuration files if they have specific needs. ","date":"24-09-2020","objectID":"/posts/development/mac-os-bash-path-location/:0:0","tags":null,"title":"MAC OS Bash Path Location","uri":"/posts/development/mac-os-bash-path-location/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re running a WooCommerce store and want to ensure that customers can only have one item in their cart at a time, you can achieve this by adding a filter to your functions.php file. This will empty the cart whenever a new item is added, allowing only one product to be present in the cart at any given time. 
","date":"11-09-2020","objectID":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/:0:0","tags":null,"title":"Restricting WooCommerce Cart to Only Allow One Item","uri":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Adding the Filter Open your theme\u0026rsquo;s functions.php file and add the following code: // WooCommerce Only Allow One Item on Cart add_filter( \u0026#39;woocommerce_add_to_cart_validation\u0026#39;, \u0026#39;one_cart_item_at_the_time\u0026#39;, 10, 3 ); function one_cart_item_at_the_time( $passed, $product_id, $quantity ) { if ( ! WC()-\u0026gt;cart-\u0026gt;is_empty() ) { WC()-\u0026gt;cart-\u0026gt;empty_cart(); } return $passed; } This code hooks into the woocommerce_add_to_cart_validation filter, which is triggered when a product is added to the cart. The one_cart_item_at_the_time function checks if the cart is not empty. If it\u0026rsquo;s not empty, the function empties the cart before allowing the new item to be added. ","date":"11-09-2020","objectID":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/:0:1","tags":null,"title":"Restricting WooCommerce Cart to Only Allow One Item","uri":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/#step-1-adding-the-filter"},{"categories":["Development"],"collections":null,"content":"Step 2: Providing a Direct Checkout URL To provide a direct checkout URL with a specific product pre-added to the cart, you can use the following URL structure: https://yourdomain.com/checkout/?add-to-cart=[product_id]Replace yourdomain.com with your actual domain and [product_id] with the ID of the product you want to add to the cart. 
For example, if your local development environment is running on https://localhost:12006 and you want to add a product with ID 123 to the cart, the URL would be: https://localhost:12006/checkout/?add-to-cart=123 This URL will take customers directly to the checkout page with the specified product added to their cart. ","date":"11-09-2020","objectID":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/:0:2","tags":null,"title":"Restricting WooCommerce Cart to Only Allow One Item","uri":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/#step-2-providing-a-direct-checkout-url"},{"categories":["Development"],"collections":null,"content":"Conclusion By implementing this code in your functions.php file and using the provided direct checkout URL structure, you can ensure that your WooCommerce store allows only one item in the cart at a time and provides a convenient way for customers to quickly proceed to checkout with their chosen product. Remember to test this implementation thoroughly on a staging or development environment before applying it to your live store to ensure it works as expected. ","date":"11-09-2020","objectID":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/:0:3","tags":null,"title":"Restricting WooCommerce Cart to Only Allow One Item","uri":"/posts/development/restricting-woocommerce-cart-to-only-allow-one-item/#conclusion"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"In this article, we will discuss how to create and mount a limited quota folder using Docker. This can be useful when you want to restrict the amount of disk space a specific folder can use within a Docker container.
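In outline, the approach carves out a fixed-size backing file, formats it, and mounts it, so writes inside the mount point can never exceed the file's size. Creating the size-capped backing file can also be done from Python; a sketch (formatting and mounting still require root, shown as comments matching the steps that follow):

```python
import os
import tempfile

def make_backing_file(path, size_bytes):
    # Create a sparse file with a fixed apparent size; it consumes real disk
    # space only as data is written into the filesystem built on top of it.
    with open(path, 'wb') as f:
        f.truncate(size_bytes)

path = os.path.join(tempfile.gettempdir(), '2gbarea')
make_backing_file(path, 2 * 1024**3)
print(os.path.getsize(path))  # 2147483648

# then, as root (as in the steps below):
#   mke2fs -t ext4 -F 2gbarea
#   sudo mount 2gbarea up
```
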
","date":"09-09-2020","objectID":"/posts/devops/docker-limit-quota-folder/:0:0","tags":["docker"],"title":"Docker Limit Quota Folder","uri":"/posts/devops/docker-limit-quota-folder/#"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Prerequisites Before we begin, make sure you have Docker installed on your system. You can download and install Docker from the official Docker website (https://www.docker.com/). ","date":"09-09-2020","objectID":"/posts/devops/docker-limit-quota-folder/:1:0","tags":["docker"],"title":"Docker Limit Quota Folder","uri":"/posts/devops/docker-limit-quota-folder/#prerequisites"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Creating a Limited Quota Folder To create a limited quota folder, follow these steps: Create a file with the desired size: $ touch 2gbarea $ truncate -s 2G 2gbarea Format the file as an ext4 filesystem: $ mke2fs -t ext4 -F 2gbarea This command will create an ext4 filesystem within the 2gbarea file. Mount the filesystem: $ sudo mount 2gbarea up This will mount the 2gbarea filesystem on the up directory. Verify the mount: $ df -h up This command will display the size, usage, and available space of the up directory. ","date":"09-09-2020","objectID":"/posts/devops/docker-limit-quota-folder/:2:0","tags":["docker"],"title":"Docker Limit Quota Folder","uri":"/posts/devops/docker-limit-quota-folder/#creating-a-limited-quota-folder"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Automating the Mount Process To automate the mount process and ensure it persists across system reboots, you can modify the /etc/fstab file. 
Here\u0026rsquo;s how: Open the /etc/fstab file in a text editor with root privileges: $ sudo vim /etc/fstab Add an entry for the mounted filesystem at the end of the file (fstab requires absolute paths, and the loop option tells mount that the source is a regular file): /path/to/2gbarea /path/to/up auto loop,nosuid,nodev,nofail,x-gvfs-show 0 0 Alternatively, you can use the UUID (Universally Unique Identifier) of the filesystem: UUID=bf1b2ee8-a7df-4a57-9d05-a8b60323e2bf /up auto nosuid,nodev,nofail,x-gvfs-show 0 0 Replace bf1b2ee8-a7df-4a57-9d05-a8b60323e2bf with the actual UUID obtained from the sudo blkid command. Save and close the file. Now, whenever the system boots, the 2gbarea filesystem will be automatically mounted on the up directory with the specified options. ","date":"09-09-2020","objectID":"/posts/devops/docker-limit-quota-folder/:3:0","tags":["docker"],"title":"Docker Limit Quota Folder","uri":"/posts/devops/docker-limit-quota-folder/#automating-the-mount-process"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Conclusion In this article, we have learned how to create and mount a limited quota folder using Docker. By following these steps, you can effectively restrict the disk space usage of a specific folder within a Docker container. ","date":"09-09-2020","objectID":"/posts/devops/docker-limit-quota-folder/:4:0","tags":["docker"],"title":"Docker Limit Quota Folder","uri":"/posts/devops/docker-limit-quota-folder/#conclusion"},{"categories":["Software"],"collections":null,"content":"If you\u0026rsquo;re facing difficulties with the visibility of the Buttonizer button on your WordPress site while using the w2static plugin, and you suspect that Buttonizer\u0026rsquo;s Ajax functionality might be causing the issue, you can attempt to resolve it by disabling the Ajax feature.
Below are steps to guide you through the process: ","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:0:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#"},{"categories":["Software"],"collections":null,"content":"1. Access Your WordPress Dashboard Log in to your WordPress admin dashboard using your credentials. ","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:1:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#1-access-your-wordpress-dashboard"},{"categories":["Software"],"collections":null,"content":"2. Navigate to Buttonizer Settings In the left-hand menu, locate and click on \u0026ldquo;Buttonizer\u0026rdquo; to access the Buttonizer settings page. ","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:2:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#2-navigate-to-buttonizer-settings"},{"categories":["Software"],"collections":null,"content":"3. Find Ajax Settings Look for the settings associated with Ajax functionality. This could be titled \u0026ldquo;Ajax,\u0026rdquo; \u0026ldquo;Buttonizer Ajax,\u0026rdquo; or something similar. 
","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:3:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#3-find-ajax-settings"},{"categories":["Software"],"collections":null,"content":"4. Disable Ajax Within the Ajax settings section, you should find an option to enable or disable the Ajax functionality for Buttonizer. Toggle this option to disable Ajax. ","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:4:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#4-disable-ajax"},{"categories":["Software"],"collections":null,"content":"5. Save Changes Once you\u0026rsquo;ve disabled Ajax, ensure that you save the changes you\u0026rsquo;ve made on the Buttonizer settings page. ","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:5:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#5-save-changes"},{"categories":["Software"],"collections":null,"content":"6. Clear Cache (If Applicable) If your site employs caching plugins or services, clear the cache to ensure that the changes are implemented effectively. 
","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:6:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#6-clear-cache-if-applicable"},{"categories":["Software"],"collections":null,"content":"7. Check Button Visibility After disabling the Ajax feature, visit your WordPress site and inspect whether the Buttonizer button is now visible. If the button is now displayed, it suggests that the Ajax functionality was causing the issue. It\u0026rsquo;s important to note that these steps are general guidelines, and the exact labels and locations of settings may vary depending on the specific versions of the Buttonizer and w2static plugins you are using. If you\u0026rsquo;re unable to locate the Ajax settings or if the aforementioned steps don\u0026rsquo;t resolve the issue, consider seeking assistance from the support forums for both plugins or referring to their documentation for more tailored troubleshooting steps. Additionally, be sure to monitor your browser\u0026rsquo;s console for any error messages that could offer further insights into the problem. ","date":"04-09-2020","objectID":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/:7:0","tags":["wordpress"],"title":"Troubleshooting Making Buttonizer Visible in WordPress when Using w2static Plugin","uri":"/posts/software/troubleshooting-making-buttonizer-visible-in-wordpress-when-using-w2static-plugin/#7-check-button-visibility"},{"categories":["Software"],"collections":null,"content":"If you\u0026rsquo;re encountering difficulties scrolling through content in your Bash terminal while using the more or less commands, there are several troubleshooting steps you can take to address the issue. 
These commands are designed to display the contents of files or command outputs one screen at a time. If scrolling isn\u0026rsquo;t functioning as expected, consider the following solutions: ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:0:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#"},{"categories":["Software"],"collections":null,"content":"1. Utilize Keyboard Shortcuts When using the more or less commands, you can employ the following keyboard shortcuts to navigate through the displayed content: Spacebar: Move forward one page. Enter: Move forward one line. B: Move backward one page. Q: Quit and exit the display. /your_search_term: Search for a specific term within the content (replace your_search_term with the actual term). ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:1:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#1-utilize-keyboard-shortcuts"},{"categories":["Software"],"collections":null,"content":"2. Verify Arrow Key Functionality Use the arrow keys (up and down) to scroll through the content. If the arrow keys are not responsive, ensure that your terminal settings are correctly configured. ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:2:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#2-verify-arrow-key-functionality"},{"categories":["Software"],"collections":null,"content":"3. Examine Terminal Emulator Settings If you\u0026rsquo;re utilizing a specific terminal emulator (such as Terminal on macOS, GNOME Terminal on Linux, or PuTTY on Windows), confirm that the emulator settings are not causing issues with keyboard inputs. You may need to adjust keyboard settings within the terminal emulator preferences. 
","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:3:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#3-examine-terminal-emulator-settings"},{"categories":["Software"],"collections":null,"content":"4. Check for Interactive Mode Occasionally, scrolling may not work as anticipated when the output is not in \u0026ldquo;interactive mode\u0026rdquo; (for instance, when output is redirected to a file or another command). If you\u0026rsquo;re using a pipe (|) to send output to more or less, attempt running the command without the pipe to determine if the problem persists. ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:4:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#4-check-for-interactive-mode"},{"categories":["Software"],"collections":null,"content":"5. Ensure Terminal Window Size Ensure that your terminal window is sufficiently large to accommodate the displayed content. If the content is shorter than the terminal window, scrolling may not be necessary. ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:5:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#5-ensure-terminal-window-size"},{"categories":["Software"],"collections":null,"content":"6. Inspect for Errors If you\u0026rsquo;re attempting to view the output of a command and encounter errors or issues with the command itself, it could impact the behavior of more or less. Confirm that the command is functioning as intended. ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:6:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#6-inspect-for-errors"},{"categories":["Software"],"collections":null,"content":"7. 
Check for Control Characters In certain cases, control characters or special formatting within a file might affect scrolling behavior. If the content appears distorted or fails to respond to scrolling, it\u0026rsquo;s possible that the file contains non-printable characters. ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:7:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#7-check-for-control-characters"},{"categories":["Software"],"collections":null,"content":"8. Reset the Terminal If none of the aforementioned solutions resolve the problem, consider restarting your terminal emulator or resetting your terminal session to see if that resolves the issue. Remember that both more and less are widely used terminal commands and should function correctly in most scenarios. If you continue to encounter problems after attempting these steps, there could be specific issues with your terminal configuration or system setup that require further investigation. ","date":"27-08-2020","objectID":"/posts/software/bash-scroll-wont-work/:8:0","tags":["linux","mac","bash"],"title":"Bash Scroll Won't Work","uri":"/posts/software/bash-scroll-wont-work/#8-reset-the-terminal"},{"categories":["Software"],"collections":null,"content":"When using WordPress in conjunction with Google Site Kit, it\u0026rsquo;s important to understand the role permissions that are available for various user roles. Role permissions determine what actions users with different roles can perform within the WordPress dashboard. 
","date":"08-08-2020","objectID":"/posts/software/wordpress-google-site-kit-view-role-permission/:0:0","tags":["wordpress"],"title":"WordPress Google Site Kit View Role Permission","uri":"/posts/software/wordpress-google-site-kit-view-role-permission/#"},{"categories":["Software"],"collections":null,"content":"Enabling Permissions In WordPress, permissions are typically managed through the use of capabilities. Capabilities are specific actions that users can perform. By assigning different capabilities to different roles, you can control what users are allowed to do. For the purpose of integrating Google Site Kit with WordPress, there are specific capabilities that are relevant. The capabilities you mentioned, manage_options and edit_others_posts, play a role in granting certain permissions related to Google Site Kit functionality. ","date":"08-08-2020","objectID":"/posts/software/wordpress-google-site-kit-view-role-permission/:1:0","tags":["wordpress"],"title":"WordPress Google Site Kit View Role Permission","uri":"/posts/software/wordpress-google-site-kit-view-role-permission/#enabling-permissions"},{"categories":["Software"],"collections":null,"content":"manage_options Capability The manage_options capability is a powerful capability in WordPress. Users with this capability have access to the Settings menu in the WordPress dashboard, including the settings related to Google Site Kit. This capability is often associated with administrators because it allows users to make changes to critical site settings. With the manage_options capability, users can configure Google Site Kit settings, connect their site to Google services, and manage various integrations. However, this capability goes beyond just Google Site Kit settings; it grants access to a range of important settings across the entire WordPress installation. 
","date":"08-08-2020","objectID":"/posts/software/wordpress-google-site-kit-view-role-permission/:2:0","tags":["wordpress"],"title":"WordPress Google Site Kit View Role Permission","uri":"/posts/software/wordpress-google-site-kit-view-role-permission/#manage_options-capability"},{"categories":["Software"],"collections":null,"content":"edit_others_posts Capability The edit_others_posts capability is related to content management. Users with this capability can edit posts created by other users on the site. However, it\u0026rsquo;s important to note that this capability is not directly tied to Google Site Kit settings or functionality. Instead, it pertains to content editing and management within WordPress. Users with the edit_others_posts capability can edit posts created by any user, which is typically a permission assigned to editors and administrators. This capability doesn\u0026rsquo;t have a direct impact on Google Site Kit settings but is relevant for overall content management within WordPress. ","date":"08-08-2020","objectID":"/posts/software/wordpress-google-site-kit-view-role-permission/:3:0","tags":["wordpress"],"title":"WordPress Google Site Kit View Role Permission","uri":"/posts/software/wordpress-google-site-kit-view-role-permission/#edit_others_posts-capability"},{"categories":["Software"],"collections":null,"content":"Role Assignment When configuring role permissions for Google Site Kit, you should consider which roles need access to specific capabilities. For instance: Administrators: Administrators should have the manage_options capability to control Google Site Kit settings and other critical site configurations. Editors: Editors, who can edit content, might not need the manage_options capability. Instead, they can be assigned the edit_others_posts capability to manage content effectively. Authors and Contributors: Authors and contributors typically focus on content creation and might not need access to Google Site Kit settings. 
It\u0026rsquo;s crucial to carefully assign role permissions based on the responsibilities of each user role and the level of access they require to Google Site Kit and other WordPress settings. In conclusion, understanding the manage_options and edit_others_posts capabilities in the context of WordPress Google Site Kit integration helps you make informed decisions about role permissions. By configuring these capabilities appropriately for different user roles, you can ensure that users have the right level of access to manage both content and site settings effectively. ","date":"08-08-2020","objectID":"/posts/software/wordpress-google-site-kit-view-role-permission/:4:0","tags":["wordpress"],"title":"WordPress Google Site Kit View Role Permission","uri":"/posts/software/wordpress-google-site-kit-view-role-permission/#role-assignment"},{"categories":["Software"],"collections":null,"content":"WordPress is a popular platform for building websites, and its extensibility is enhanced by various plugins. However, sometimes issues may arise, such as missing fonts, which can negatively impact the user experience. In this article, we will guide you through the steps to fix the \u0026ldquo;Elements Kit Font Missing\u0026rdquo; error when using the WP2Static WordPress plugin. ","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:0:0","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#"},{"categories":["Software"],"collections":null,"content":"Step 1: Identify the Problem The error message suggests that fonts are missing from the Elements Kit Lite plugin, specifically two WOFF files. These fonts are essential for displaying the correct typography on your website. 
","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:1:0","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#step-1-identify-the-problem"},{"categories":["Software"],"collections":null,"content":"Step 2: Ensure the Elements Kit Lite Plugin is Up to Date Before proceeding with any troubleshooting, make sure you have the latest version of the Elements Kit Lite plugin installed. Outdated versions may contain bugs or issues that can lead to font-related errors. ","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:2:0","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#step-2-ensure-the-elements-kit-lite-plugin-is-up-to-date"},{"categories":["Software"],"collections":null,"content":"Step 3: Verify the Font URLs Check if the two font URLs mentioned in the error message are accessible. To do this, open your web browser and try accessing the following URLs: /wp-content/plugins/elementskit-lite/modules/controls/assets/fonts/elementskit.woff /wp-content/plugins/elementskit-lite/widgets/init/assets/fonts/elementskit.woff If the URLs return a 404 error or any other issue, it confirms that the fonts are indeed missing or inaccessible. 
","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:3:0","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#step-3-verify-the-font-urls"},{"categories":["Software"],"collections":null,"content":"Step 4: Replace the Missing Fonts To resolve the issue, you can try one of the following approaches: ","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:4:0","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#step-4-replace-the-missing-fonts"},{"categories":["Software"],"collections":null,"content":"Approach A: Reinstall the Elements Kit Lite Plugin Go to your WordPress admin dashboard. Navigate to \u0026ldquo;Plugins\u0026rdquo; and find \u0026ldquo;Elements Kit Lite.\u0026rdquo; Deactivate the plugin and then reactivate it. Check if the fonts are now accessible, and the error is resolved. ","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:4:1","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#approach-a-reinstall-the-elements-kit-lite-plugin"},{"categories":["Software"],"collections":null,"content":"Approach B: Manual Font Replacement Download the Elements Kit Lite plugin from the official WordPress repository or the developer\u0026rsquo;s website. Extract the downloaded file on your computer. 
Locate the missing font files: \u0026ldquo;elementskit.woff\u0026rdquo; from the specified directories: /wp-content/plugins/elementskit-lite/modules/controls/assets/fonts/ /wp-content/plugins/elementskit-lite/widgets/init/assets/fonts/ Connect to your WordPress site using an FTP client or the File Manager in your hosting control panel. Upload the missing font files to their respective directories on your server. Ensure the permissions of the uploaded font files are set correctly (usually 644). Refresh your website and check if the fonts are now loading correctly. ","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:4:2","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#approach-b-manual-font-replacement"},{"categories":["Software"],"collections":null,"content":"Step 5: Clear Cache and Refresh Sometimes, font-related errors can be caused by caching issues. Clear any caching plugins you have installed and refresh your website to see if the problem persists. ","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:5:0","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#step-5-clear-cache-and-refresh"},{"categories":["Software"],"collections":null,"content":"Conclusion By following the steps above, you should be able to resolve the \u0026ldquo;Elements Kit Font Missing\u0026rdquo; error on the WP2Static WordPress plugin. Remember to keep your plugins and WordPress installation up to date to avoid potential issues in the future. 
If the problem persists or you encounter any other errors, don\u0026rsquo;t hesitate to seek help from the plugin\u0026rsquo;s support team or the WordPress community. Happy website building! ","date":"01-08-2020","objectID":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/:6:0","tags":["wordpress"],"title":"Fixing WordPress Font Error Elements Kit Font Missing on WP2Static WordPress Plugin","uri":"/posts/software/fixing-wordpress-font-error-elements-kit-font-missing-on-wp2static-wordpress-plugin/#conclusion"},{"categories":["Development"],"collections":null,"content":"macOS keeps crash logs to help diagnose and troubleshoot issues with applications and system services. These logs can provide valuable information when you\u0026rsquo;re experiencing problems or when an application unexpectedly quits. Here\u0026rsquo;s how to view crash logs on macOS: ","date":"18-07-2020","objectID":"/posts/development/viewing-crash-logs-on-macos/:0:0","tags":null,"title":"Viewing Crash Logs on macOS","uri":"/posts/development/viewing-crash-logs-on-macos/#"},{"categories":["Development"],"collections":null,"content":"Using the Finder: Open Finder: Click on the Finder icon in the Dock or press Command + Space, then type \u0026ldquo;Finder\u0026rdquo; and hit Return. Go to Folder: Click on \u0026ldquo;Go\u0026rdquo; in the menu bar at the top of the screen and select \u0026ldquo;Go to Folder\u0026hellip;\u0026rdquo; or press Shift + Command + G. Access the Crash Logs Directory: In the \u0026ldquo;Go to the folder\u0026rdquo; dialog that appears, enter ~/Library/Logs/DiagnosticReports/ and click \u0026ldquo;Go.\u0026rdquo; This will take you to the directory where macOS stores crash logs. View Crash Logs: In this folder, you\u0026rsquo;ll find crash log files with names like ApplicationName_date_time.crash. Double-click on a file to open it in the Console app, where you can view detailed information about the crash. 
","date":"18-07-2020","objectID":"/posts/development/viewing-crash-logs-on-macos/:0:1","tags":null,"title":"Viewing Crash Logs on macOS","uri":"/posts/development/viewing-crash-logs-on-macos/#using-the-finder"},{"categories":["Development"],"collections":null,"content":"Terminal Commands: You can also use Terminal to navigate to the crash log directory and view logs. Here are some useful commands: Viewing Crash Log Files: Open Terminal and use the cd (change directory) command to navigate to the crash log directory: cd ~/Library/Logs/DiagnosticReports/ Once you\u0026rsquo;re in the directory, you can use commands like ls to list the crash log files, and cat or less to view their contents: ls cat ApplicationName_date_time.crash ","date":"18-07-2020","objectID":"/posts/development/viewing-crash-logs-on-macos/:0:2","tags":null,"title":"Viewing Crash Logs on macOS","uri":"/posts/development/viewing-crash-logs-on-macos/#terminal-commands"},{"categories":["Development"],"collections":null,"content":"macOS Services and Launch Daemons/Agents: macOS also uses launch daemons and agents to manage background processes. Here are the typical locations for these: System-wide daemons (provided by macOS): /System/Library/LaunchDaemons/ Per-user agents (provided by macOS): /System/Library/LaunchAgents/ System-wide daemons (installed by the administrator or third-party software): /Library/LaunchDaemons/ Per-user agents (installed by the administrator or third-party software): /Library/LaunchAgents/ (all users) ~/Library/LaunchAgents/ (current user only) If you need to disable or unload a service or agent, you can use the launchctl command with sudo. For example: sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.mDNSResponderHelper.plist These commands will stop and unload the specified daemons. Be cautious when modifying system services, as doing so can impact the stability and functionality of your macOS system. 
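Before unloading a daemon it can help to confirm that it is actually loaded. A small read-only sketch along these lines (macOS only; guarded so it degrades gracefully on other systems):

```shell
# List loaded launchd jobs whose label mentions mDNS (read-only, safe to run).
if command -v launchctl >/dev/null 2>&1; then
  loaded=$(launchctl list | grep -i mdns || echo 'no matching jobs loaded')
else
  loaded='launchctl not found (this check only applies on macOS)'
fi
echo "$loaded"
```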
Remember to replace com.apple.mDNSResponder.plist and com.apple.mDNSResponderHelper.plist with the actual names of the daemons you want to unload. ","date":"18-07-2020","objectID":"/posts/development/viewing-crash-logs-on-macos/:0:3","tags":null,"title":"Viewing Crash Logs on macOS","uri":"/posts/development/viewing-crash-logs-on-macos/#macos-services-and-launch-daemonsagents"},{"categories":["Development"],"collections":null,"content":"The Hosts file on a Mac is a plain text file that maps hostnames to IP addresses. It can be useful for various purposes, such as blocking websites or redirecting domain names. If you\u0026rsquo;ve made changes to your Hosts file and they\u0026rsquo;re not taking effect, you may need to refresh the file or clear the DNS cache. Here\u0026rsquo;s how you can do it using the Terminal: Open Terminal: You can find Terminal in the Utilities folder within the Applications folder, or you can quickly access it using Spotlight (Cmd + Space, then type \u0026ldquo;Terminal\u0026rdquo;). Edit the Hosts File (if necessary): If you need to make changes to the Hosts file, you can do so with a text editor like nano or vim. For example: sudo nano /etc/hosts This command will open the Hosts file in the nano text editor with superuser privileges. You\u0026rsquo;ll need to enter your admin password. Make Your Changes: Add, edit, or remove entries in the Hosts file as needed. Each entry should be in the format IP_Address Hostname. For example: 127.0.0.1 localhost Save the File: In nano, press Ctrl + O to save the file, then press Enter. To exit nano, press Ctrl + X. Clear the DNS Cache: To ensure that your changes take effect immediately, you can flush the DNS cache by running the following command: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder This command flushes the DNS cache and restarts the mDNSResponder process. 
Verify Your Changes: You can verify that your Hosts file changes have taken effect by opening a web browser and entering the hostname you modified. It should now resolve to the IP address you specified in the Hosts file. Keep in mind that modifying the Hosts file can have important implications for your system\u0026rsquo;s network behavior, so use it with caution. Make sure you know what you\u0026rsquo;re doing and back up the original Hosts file before making changes. That\u0026rsquo;s it! You\u0026rsquo;ve successfully refreshed the Hosts file on your Mac and cleared the DNS cache to ensure that your changes are applied. ","date":"11-07-2020","objectID":"/posts/development/how-to-refresh-the-hosts-file-on-a-mac/:0:0","tags":null,"title":"How to Refresh the Hosts File on a Mac","uri":"/posts/development/how-to-refresh-the-hosts-file-on-a-mac/#"},{"categories":["Development"],"collections":null,"content":"In Bash, you can use the \u0026quot;$@\u0026quot; special variable to forward all the parameters passed to a script or function. This allows you to pass all the arguments received by your script or function to another command. Here\u0026rsquo;s how you can use \u0026quot;$@\u0026quot; in a Bash script or function: #!/bin/bash # Define a function that forwards all parameters to another command forward_parameters() { # Call the desired command with all the parameters passed to this function some_command \u0026#34;$@\u0026#34; } # Call the function and pass all the script\u0026#39;s arguments to it forward_parameters \u0026#34;$@\u0026#34; In this example: We define a Bash function named forward_parameters. Inside the function, we use \u0026quot;$@\u0026quot; to forward all the parameters passed to the function to the some_command. You can replace some_command with the actual command you want to execute with the forwarded parameters. Outside the function, we call forward_parameters and pass \u0026quot;$@\u0026quot; as its arguments. 
This ensures that all the arguments passed to the script are forwarded to the some_command. Now, when you run your script with arguments, like this: ./myscript.sh arg1 arg2 arg3 All the arguments (arg1, arg2, arg3) will be forwarded to the some_command within the forward_parameters function. This is a useful technique for building wrapper scripts or functions that modify or extend the behavior of other commands while passing through all the necessary arguments. ","date":"10-07-2020","objectID":"/posts/development/forward-all-parameters-on-bash/:0:0","tags":null,"title":"Forward All Parameters on Bash","uri":"/posts/development/forward-all-parameters-on-bash/#"},{"categories":["Development"],"collections":null,"content":"WordPress CLI, or wp-cli, is a powerful command-line tool that allows you to manage your WordPress websites directly from the terminal. It\u0026rsquo;s particularly useful for tasks like plugin installation, theme management, and database maintenance. In this guide, we\u0026rsquo;ll walk you through the installation and basic usage of wp-cli on a Linux system. ","date":"10-07-2020","objectID":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/:0:0","tags":null,"title":"How to Install and Use WordPress CLI (wp-cli)","uri":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure that you have the following prerequisites in place: Linux System: This guide assumes you are using a Linux-based operating system. Terminal Access: You should have access to a terminal or command-line interface. Superuser Privileges: You need superuser or sudo privileges to install software globally on your system. 
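One more implicit requirement worth checking: wp-cli is distributed as a PHP archive (wp-cli.phar), so a PHP command-line interpreter must already be installed. A quick sketch of the check:

```shell
# wp-cli runs on PHP, so verify a PHP interpreter is available first.
if command -v php >/dev/null 2>&1; then
  php_info=$(php -v | head -n 1)
else
  php_info='PHP not found - install the php-cli package before continuing'
fi
echo "$php_info"
```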
","date":"10-07-2020","objectID":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/:1:0","tags":null,"title":"How to Install and Use WordPress CLI (wp-cli)","uri":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Installation Follow these steps to install WordPress CLI: Open your terminal. Update the package list to ensure you have the latest information about available packages: sudo apt update Install the less utility. While it\u0026rsquo;s not required for WordPress CLI, it\u0026rsquo;s useful for viewing long text files and is often a recommended addition: sudo apt install less Download the WordPress CLI executable (wp-cli.phar) from the official GitHub repository using curl: curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar Make the wp-cli.phar file executable: chmod +x wp-cli.phar Move the wp-cli.phar file to a directory in your system\u0026rsquo;s PATH to make it globally accessible: sudo mv wp-cli.phar /usr/local/bin/wp Create a configuration directory for WordPress CLI and assign ownership to the www-data user (the web server user): sudo mkdir /var/www/.wp-cli sudo chown -R www-data:www-data /var/www/.wp-cli ","date":"10-07-2020","objectID":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/:2:0","tags":null,"title":"How to Install and Use WordPress CLI (wp-cli)","uri":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/#installation"},{"categories":["Development"],"collections":null,"content":"Basic Usage Now that WordPress CLI is installed, you can use it to manage your WordPress site from the command line. 
Here are some common commands: Navigate to Your WordPress Root Directory: Before running any WordPress CLI commands, navigate to the root directory of your WordPress installation: cd /path/to/your/wordpress/directory List Installed Plugins: To list all the plugins currently installed on your WordPress site, use the following command: wp plugin list This will display a list of active and inactive plugins, along with their status and version information. That\u0026rsquo;s it! You\u0026rsquo;ve successfully installed WordPress CLI and used it to list your installed plugins. You can explore more commands and features by checking out the official documentation. WordPress CLI can save you time and simplify various WordPress management tasks, making it a valuable tool for website administrators and developers. ","date":"10-07-2020","objectID":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/:3:0","tags":null,"title":"How to Install and Use WordPress CLI (wp-cli)","uri":"/posts/development/how-to-install-and-use-wordpress-cli-wp-cli/#basic-usage"},{"categories":["Development"],"collections":null,"content":"It looks like you\u0026rsquo;re trying to read the contents of a .env file using a bash script. The provided script uses the source command to load the variables from the .env file into the current shell environment. Additionally, it uses set -o allexport to automatically export all subsequently defined variables to the environment. Here\u0026rsquo;s a breakdown of what each part of the script does: set -o allexport: This command enables the allexport option, which means that any variable defined after this point will be automatically exported to the environment. In this case, it\u0026rsquo;s used to ensure that the variables read from the .env file will be available to the rest of the script and any subsequent commands. source .env: This command reads and processes the contents of the .env file in the current shell context. 
The source command is used to execute the commands in the file as if they were typed directly into the shell. set +o allexport: This command disables the allexport option. This is done to prevent any new variables defined in the script after sourcing the .env file from being automatically exported. It\u0026rsquo;s a good practice to limit the scope of exported variables to only those read from the .env file. By using this script, you can load environment variables from the .env file into your shell session. This is a common practice in development environments to manage configuration settings without hardcoding them in scripts or code. Remember to make sure that the .env file is present in the same directory as the script or provide the appropriate path to the file if it\u0026rsquo;s located elsewhere. Keep in mind that this script is specific to the bash shell. If you\u0026rsquo;re using a different shell, the syntax and behavior might vary. ","date":"10-07-2020","objectID":"/posts/development/read-env-file-using-bash/:0:0","tags":null,"title":"Read Env File Using Bash","uri":"/posts/development/read-env-file-using-bash/#"},{"categories":["Development"],"collections":null,"content":"In Bash, you can use the sed command to find and replace (substitute) strings within a file. This is a powerful text manipulation tool that allows you to make changes to a file\u0026rsquo;s content. Below, we\u0026rsquo;ll go over various examples of using sed for find and replace operations. 
","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:0:0","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#"},{"categories":["Development"],"collections":null,"content":"General Syntax The basic syntax for using sed to find and replace is as follows: sed -i \u0026#39;s/word1/word2/g\u0026#39; input_file -i: This option tells sed to edit the file in-place, meaning the changes will be made directly to the file, and the original file will be overwritten. 's/word1/word2/g': This is the substitution command. It tells sed to find all occurrences of word1 and replace them with word2. The g at the end stands for \u0026ldquo;global,\u0026rdquo; which means replace all occurrences on each line. You can use a different delimiter instead of / to make the command more readable, such as + or _. ","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:1:0","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#general-syntax"},{"categories":["Development"],"collections":null,"content":"Case-Insensitive Search If you want to perform a case-insensitive search and replace, you can add the I flag to the sed command: sed -i \u0026#39;s/word1/word2/gI\u0026#39; input_file ","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:2:0","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#case-insensitive-search"},{"categories":["Development"],"collections":null,"content":"Note for MacOS Users On macOS, the default sed implementation does not support case-insensitive matching. 
You can install GNU sed using Homebrew as follows: brew install gnu-sed Then, you can use gsed instead of sed to perform case-insensitive replacements: gsed -i \u0026#39;s/word1/word2/gI\u0026#39; input_file ","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:2:1","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#note-for-macos-users"},{"categories":["Development"],"collections":null,"content":"Delimiter Character If the delimiter character / is part of word1 or word2, it can lead to errors. In such cases, you can change the delimiter character to something else, like + or _. For example: sed -i \u0026#39;s+http://+https://www.example.biz+g\u0026#39; input.txt ","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:3:0","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#delimiter-character"},{"categories":["Development"],"collections":null,"content":"Examples Without sed If you want to perform find and replace without using sed, you can use Bash parameter expansion: ","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:4:0","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#examples-without-sed"},{"categories":["Development"],"collections":null,"content":"Replace the First Occurrence To replace only the first occurrence of a pattern with a given string, use ${parameter/pattern/string}: #!/bin/bash firstString=\u0026#34;I love Suzi and Marry\u0026#34; secondString=\u0026#34;Sara\u0026#34; echo \u0026#34;${firstString/Suzi/$secondString}\u0026#34; # prints \u0026#39;I love Sara and Marry\u0026#39; 
","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:4:1","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#replace-the-first-occurrence"},{"categories":["Development"],"collections":null,"content":"Replace All Occurrences To replace all occurrences of a pattern with a given string, use ${parameter//pattern/string}: message=\u0026#39;The secret code is 12345\u0026#39; echo \u0026#34;${message//[0-9]/X}\u0026#34; # prints \u0026#39;The secret code is XXXXX\u0026#39; These are some common ways to find and replace strings in Bash, depending on your specific needs and preferences. ","date":"09-07-2020","objectID":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/:4:2","tags":null,"title":"Bash Find and Replace (Substitute) String in a File","uri":"/posts/development/bash-find-and-replace-substitute-string-in-a-file/#replace-all-occurrences"},{"categories":["Development"],"collections":null,"content":"When managing a system, it\u0026rsquo;s essential to know which ports are open and in use. This information can be vital for security and troubleshooting purposes. Here are several methods to check for open and used ports on a system, depending on your operating system and preference. ","date":"09-07-2020","objectID":"/posts/development/checking-for-open-and-used-ports/:0:0","tags":null,"title":"Checking for Open and Used Ports","uri":"/posts/development/checking-for-open-and-used-ports/#"},{"categories":["Development"],"collections":null,"content":"Option 1: Using the lsof Command The lsof command (List Open Files) is a versatile tool for listing information about files and processes. It can also be used to identify open network ports. 
Below are examples of how to use lsof: # List all open ports and their associated processes $ sudo lsof -i -P -n # Filter for listening ports only $ sudo lsof -i -P -n | grep LISTEN # On OpenBSD, you can use `doas` instead of `sudo` $ doas lsof -i -P -n | grep LISTEN ","date":"09-07-2020","objectID":"/posts/development/checking-for-open-and-used-ports/:1:0","tags":null,"title":"Checking for Open and Used Ports","uri":"/posts/development/checking-for-open-and-used-ports/#option-1-using-the-lsof-command"},{"categories":["Development"],"collections":null,"content":"Option 2: Using the netstat Command The netstat command is a classic tool for displaying network-related information. However, note that it has been deprecated on some Linux distributions in favor of the ss command. Here are examples for both Linux and FreeBSD/MacOS X: ","date":"09-07-2020","objectID":"/posts/development/checking-for-open-and-used-ports/:2:0","tags":null,"title":"Checking for Open and Used Ports","uri":"/posts/development/checking-for-open-and-used-ports/#option-2-using-the-netstat-command"},{"categories":["Development"],"collections":null,"content":"Linux netstat Syntax: # List all listening ports and their associated processes $ netstat -tulpn | grep LISTEN # Alternatively, use the `ss` command $ sudo ss -tulw $ sudo ss -tulwn ","date":"09-07-2020","objectID":"/posts/development/checking-for-open-and-used-ports/:2:1","tags":null,"title":"Checking for Open and Used Ports","uri":"/posts/development/checking-for-open-and-used-ports/#linux-netstat-syntax"},{"categories":["Development"],"collections":null,"content":"FreeBSD/MacOS X netstat Syntax: # List all listening TCP ports $ netstat -anp tcp | grep LISTEN # List all listening UDP ports $ netstat -anp udp | grep LISTEN ","date":"09-07-2020","objectID":"/posts/development/checking-for-open-and-used-ports/:2:2","tags":null,"title":"Checking for Open and Used 
Ports","uri":"/posts/development/checking-for-open-and-used-ports/#freebsdmacos-x-netstat-syntax"},{"categories":["Development"],"collections":null,"content":"Option 3: Using the nmap Command The nmap command is a powerful network scanning tool that can be used to discover open ports and services on a remote system. Here are examples of how to use nmap for this purpose: # Scan for open TCP ports on localhost $ sudo nmap -sT -O localhost # Scan for open UDP ports on a specific IP address $ sudo nmap -sU -O 192.168.2.13 # Scan for both open TCP and UDP ports in a single command $ sudo nmap -sTU -O 192.168.2.13 These commands should help you determine which ports are open and actively in use on your system. Depending on your specific use case and operating system, you can choose the method that suits you best. ","date":"09-07-2020","objectID":"/posts/development/checking-for-open-and-used-ports/:3:0","tags":null,"title":"Checking for Open and Used Ports","uri":"/posts/development/checking-for-open-and-used-ports/#option-3-using-the-nmap-command"},{"categories":["Development"],"collections":null,"content":"HTTP Basic Authentication is a simple yet effective way to secure web pages or directories on your Apache web server. It requires users to enter a username and password to access protected content. In this guide, we\u0026rsquo;ll walk you through the steps to set up HTTP Basic Authentication using .htpasswd and .htaccess files on an Apache web server. ","date":"09-07-2020","objectID":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/:0:0","tags":null,"title":"How to Create HTTP Basic Authentication with .htpasswd and .htaccess in Apache","uri":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure you have the following: A server running Apache. 
SSH or terminal access to your server. Basic knowledge of working with the command line. ","date":"09-07-2020","objectID":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/:1:0","tags":null,"title":"How to Create HTTP Basic Authentication with .htpasswd and .htaccess in Apache","uri":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Create the .htpasswd File The .htpasswd file contains the usernames and their corresponding hashed passwords. You\u0026rsquo;ll use the htpasswd command to create this file. Replace \u0026ldquo;admin\u0026rdquo; with the username you want to create. htpasswd -c /var/www/html/.htpasswd admin You will be prompted to enter and confirm the password for the user. Note that the -c flag creates a new file and overwrites an existing one, so use it only the first time; to add further users later, run htpasswd without -c. ","date":"09-07-2020","objectID":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/:2:0","tags":null,"title":"How to Create HTTP Basic Authentication with .htpasswd and .htaccess in Apache","uri":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/#step-1-create-the-htpasswd-file"},{"categories":["Development"],"collections":null,"content":"Step 2: Create the .htaccess File The .htaccess file is used to configure directory-specific settings, including authentication. Create or edit the .htaccess file in the directory you want to protect. You can use a text editor like Vim or Nano to do this. vim /var/www/html/.htaccess Add the following lines to your .htaccess file: AuthType Basic AuthName \u0026#34;Restricted Area\u0026#34; AuthUserFile /var/www/html/.htpasswd Require valid-user Save and exit the text editor. 
","date":"09-07-2020","objectID":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/:3:0","tags":null,"title":"How to Create HTTP Basic Authentication with .htpasswd and .htaccess in Apache","uri":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/#step-2-create-the-htaccess-file"},{"categories":["Development"],"collections":null,"content":"Step 3: Restart Apache To apply the changes, you need to restart the Apache web server. The command to do this varies depending on your server\u0026rsquo;s operating system. On Ubuntu/Debian: sudo systemctl restart apache2 On CentOS/RHEL: sudo systemctl restart httpd ","date":"09-07-2020","objectID":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/:4:0","tags":null,"title":"How to Create HTTP Basic Authentication with .htpasswd and .htaccess in Apache","uri":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/#step-3-restart-apache"},{"categories":["Development"],"collections":null,"content":"Step 4: Test Authentication Visit the directory or webpage you\u0026rsquo;ve protected in your web browser. You should be prompted to enter a username and password. Enter the credentials you set in the .htpasswd file, and you should gain access to the protected content. Congratulations! You have successfully set up HTTP Basic Authentication using .htpasswd and .htaccess on your Apache web server. This simple yet effective method can help secure specific areas of your website that require restricted access. Remember to keep your .htpasswd file secure, as it contains sensitive information. Additionally, consider using HTTPS to encrypt the data transmitted during authentication for an added layer of security. That\u0026rsquo;s it! Your content is now protected by HTTP Basic Authentication. 
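You can also exercise the protection from the command line. A minimal sketch using curl (the helper name, domain, and credentials below are placeholders):

```shell
# Probe a protected URL and print the HTTP status code:
# 401 means the credentials were rejected, 200 means access was granted.
probe() {
  curl -s -o /dev/null -w '%{http_code}' -u "$1" "$2"
}
# Example usage (substitute your own domain and credentials):
# probe admin:yourpassword https://example.com/protected/
```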
","date":"09-07-2020","objectID":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/:5:0","tags":null,"title":"How to Create HTTP Basic Authentication with .htpasswd and .htaccess in Apache","uri":"/posts/development/how-to-create-http-basic-authentication-with-htpasswd-and-htaccess-in-apache/#step-4-test-authentication"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;ve accidentally deleted or encountered issues with your Apple Notes on your Mac, don\u0026rsquo;t worry; there\u0026rsquo;s a way to restore them. Apple Notes are not always stored in the obvious location, so follow these steps to recover your lost text or notes: Note: Before proceeding, make sure you have a Time Machine (TM) backup of your Mac, as this method relies on it. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:0","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#"},{"categories":["Development"],"collections":null,"content":"1. Quit the Notes App and Disconnect from the Internet First, close the Notes app if it\u0026rsquo;s open. Turn off your Wi-Fi or disconnect from the Internet to prevent your old notes from syncing during the restoration process. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:1","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#1-quit-the-notes-app-and-disconnect-from-the-internet"},{"categories":["Development"],"collections":null,"content":"2. Locate the Apple Notes Folder Your iCloud notes are not stored in the typical ~/Library/Containers/com.apple.Notes location. Instead, navigate to ~/Library/Group Containers/group.com.apple.notes. 
This is where your iCloud-synced notes are stored. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:2","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#2-locate-the-apple-notes-folder"},{"categories":["Development"],"collections":null,"content":"3. Create a Backup of the Notes Folder Copy the group.com.apple.notes folder to a safe location, like your Desktop. This step ensures that you have a backup of your current notes configuration. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:3","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#3-create-a-backup-of-the-notes-folder"},{"categories":["Development"],"collections":null,"content":"4. Restore the Notes Folder from Time Machine Backup Access your Time Machine backup system. Locate and restore the group.com.apple.notes folder to its original location, ~/Library/Group Containers/. This will revert your Notes app to a previous state where your lost notes should be available. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:4","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#4-restore-the-notes-folder-from-time-machine-backup"},{"categories":["Development"],"collections":null,"content":"5. Open the Notes App Launch the Notes app to access your restored notes. 
","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:5","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#5-open-the-notes-app"},{"categories":["Development"],"collections":null,"content":"6. Copy and Paste the Lost Text To recover your lost text or notes, simply copy the content you need from the restored Notes app and paste it into another document, such as TextEdit. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:6","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#6-copy-and-paste-the-lost-text"},{"categories":["Development"],"collections":null,"content":"7. Quit the Notes App Again Close the Notes app once you\u0026rsquo;ve copied the desired text. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:7","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#7-quit-the-notes-app-again"},{"categories":["Development"],"collections":null,"content":"8. Delete the Restored Notes Folder Delete the group.com.apple.notes folder that you restored from your Desktop. This step is essential to ensure that you\u0026rsquo;re using the restored version of the Notes app. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:8","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#8-delete-the-restored-notes-folder"},{"categories":["Development"],"collections":null,"content":"9. 
Move the Original Notes Folder Back Move the group.com.apple.notes folder from your Desktop back to its original location, ~/Library/Group Containers/. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:9","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#9-move-the-original-notes-folder-back"},{"categories":["Development"],"collections":null,"content":"10. Reopen the Notes App Launch the Notes app once more. You should now have access to your restored notes. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:10","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#10-reopen-the-notes-app"},{"categories":["Development"],"collections":null,"content":"11. Reconnect to the Internet Finally, turn your Wi-Fi or internet connection back on to allow your notes to sync with iCloud. By following these steps, you should be able to restore your deleted or broken Apple Notes on your Mac, ensuring that your important information is once again accessible. ","date":"08-07-2020","objectID":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/:0:11","tags":null,"title":"How to Restore Deleted or Broken Apple Notes on Mac","uri":"/posts/development/how-to-restore-deleted-or-broken-apple-notes-on-mac/#11-reconnect-to-the-internet"},{"categories":["Development"],"collections":null,"content":"You want to reset the Launchpad on your macOS using a terminal command. 
Here\u0026rsquo;s a breakdown of the command: To reset the Launchpad on macOS, you can use the following terminal command: defaults write com.apple.dock ResetLaunchPad -bool true; killall Dock defaults write: This part of the command is used to write a value to a property list (.plist) file. In this case, you are writing a value to the com.apple.dock property list. com.apple.dock: This specifies the property list file for the Dock, which includes settings for the Launchpad. ResetLaunchPad: This is the key (property) that you are modifying. Setting it to true will reset the Launchpad. -bool true: This sets the value for the ResetLaunchPad key to true, indicating that you want to reset the Launchpad. killall Dock: After modifying the property list, this command restarts the Dock, applying the changes you made. After running this command, your Launchpad will be reset to its default configuration, and any customizations you made will be removed. Make sure to save any important Launchpad layouts or organization before running this command. Remember that using terminal commands can have unintended consequences if used incorrectly, so proceed with caution. ","date":"08-07-2020","objectID":"/posts/development/reset-launchpad-mac-os/:0:0","tags":null,"title":"Reset LaunchPad Mac Os","uri":"/posts/development/reset-launchpad-mac-os/#"},{"categories":["Software"],"collections":null,"content":"If your keyboard suddenly stops working on your Ativ Smart PC Pro, but the connection light indicator is still turned on, it can be frustrating. However, there are several steps you can take to troubleshoot and potentially resolve the issue. 
Follow the steps below: ","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:0:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#"},{"categories":["Software"],"collections":null,"content":"Step 1: Check Physical Connections Before diving into software-related solutions, it\u0026rsquo;s essential to ensure that there are no loose or damaged physical connections causing the problem. Follow these steps: Detach and Reattach the Keyboard: If your device allows you to detach the keyboard, do so and then reattach it firmly. Check the Cable: If your keyboard connects through a cable, examine it for any signs of damage or fraying. If you find any issues, consider replacing the cable. Try a Different USB Port (if applicable): If you are using an external keyboard that connects via USB, try connecting it to a different USB port on your device. ","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:1:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#step-1-check-physical-connections"},{"categories":["Software"],"collections":null,"content":"Step 2: Restart Your Device Sometimes, a simple restart can resolve temporary glitches or conflicts. Reboot your Ativ Smart PC Pro and check if the keyboard starts working again. 
","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:2:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#step-2-restart-your-device"},{"categories":["Software"],"collections":null,"content":"Step 3: Check Device Manager If the keyboard issue persists, there might be a problem with the driver or software settings. Follow these steps to check Device Manager: Open Device Manager: Right-click on the \u0026ldquo;Start\u0026rdquo; button (Windows logo) and select \u0026ldquo;Device Manager\u0026rdquo; from the context menu. Expand the \u0026ldquo;Keyboards\u0026rdquo; Section: In Device Manager, locate and expand the \u0026ldquo;Keyboards\u0026rdquo; category. Check for Issues: Look for any yellow exclamation marks or error symbols next to your keyboard\u0026rsquo;s name. This indicates a problem with the keyboard driver. Update or Roll Back Driver: Right-click on the keyboard name, and select \u0026ldquo;Update driver.\u0026rdquo; Follow the on-screen instructions to search for and install any available updates. If an update doesn\u0026rsquo;t fix the issue, you can also try rolling back the driver to a previous version. 
","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:3:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#step-3-check-device-manager"},{"categories":["Software"],"collections":null,"content":"Step 4: Scan for Hardware Changes If updating or rolling back the driver didn\u0026rsquo;t help, you can try scanning for hardware changes. This action can prompt your computer to re-detect connected devices, potentially resolving the keyboard issue. Here\u0026rsquo;s how: Open Device Manager: Right-click on the \u0026ldquo;Start\u0026rdquo; button (Windows logo) and select \u0026ldquo;Device Manager\u0026rdquo; from the context menu. Scan for Hardware Changes: In Device Manager, click on the \u0026ldquo;Action\u0026rdquo; menu located at the top of the window. From the drop-down menu, select \u0026ldquo;Scan for hardware changes.\u0026rdquo; Wait for the Process to Complete: The scan will begin, and your computer will search for any newly connected devices or detect any existing ones that weren\u0026rsquo;t recognized before. Check the Keyboard: Once the scan is complete, check if your keyboard starts working again. 
","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:4:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#step-4-scan-for-hardware-changes"},{"categories":["Software"],"collections":null,"content":"Step 5: Test the Keyboard on Another Device If the keyboard still doesn\u0026rsquo;t work on your Ativ Smart PC Pro, it\u0026rsquo;s possible that the issue lies with the keyboard itself. To verify this, connect the keyboard to another compatible device (such as another computer or laptop) and see if it functions correctly. If it does, the problem likely lies with your Ativ Smart PC Pro. ","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:5:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#step-5-test-the-keyboard-on-another-device"},{"categories":["Software"],"collections":null,"content":"Step 6: Check for System Updates Ensure that your operating system and drivers are up-to-date. Outdated software can sometimes lead to compatibility issues, including problems with peripherals like keyboards. 
","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:6:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#step-6-check-for-system-updates"},{"categories":["Software"],"collections":null,"content":"Step 7: Consider a System Restore (Windows) If the issue started recently and you suspect that a recent change might have caused it, you can try performing a system restore to revert your system settings to a previous state. This step will only work if you have previously created a system restore point. To perform a system restore: Type \u0026ldquo;System Restore\u0026rdquo; into the Windows search bar and open the corresponding settings. Follow the on-screen instructions to choose a restore point from a date when the keyboard was working correctly. Confirm the restore point and allow your system to reboot. ","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:7:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#step-7-consider-a-system-restore-windows"},{"categories":["Software"],"collections":null,"content":"Conclusion If your keyboard on the Ativ Smart PC Pro is suddenly not working, but the connection light indicator remains on, these troubleshooting steps should help you identify and potentially fix the issue. 
","date":"02-07-2020","objectID":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/:8:0","tags":["windows"],"title":"Troubleshooting Keyboard Suddenly Not Working But Connection Light Indicator Still Turned On Ativ Smart PC Pro","uri":"/posts/software/troubleshooting-keyboard-suddenly-not-working-but-connection-light-indicator-still-turned-on-ativ-smart-pc-pro/#conclusion"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the steps to block traffic from specific countries using Cloudflare while allowing access to your uptime monitor IPs. We will cover how to achieve this both in Cloudflare\u0026rsquo;s Firewall Rules and on an Apache web server. ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:0:0","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#"},{"categories":["Development"],"collections":null,"content":"Cloudflare Firewall Rules ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:0","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#cloudflare-firewall-rules"},{"categories":["Development"],"collections":null,"content":"Step 1: Log in to Cloudflare Go to Cloudflare and log in to your account. ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:1","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-1-log-in-to-cloudflare"},{"categories":["Development"],"collections":null,"content":"Step 2: Access Firewall Rules In the Cloudflare dashboard, click on your domain. 
Navigate to the \u0026ldquo;Firewall\u0026rdquo; section in the top menu and then click on \u0026ldquo;Firewall Rules.\u0026rdquo; ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:2","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-2-access-firewall-rules"},{"categories":["Development"],"collections":null,"content":"Step 3: Create a Firewall Rule Click on the \u0026ldquo;Create a Firewall Rule\u0026rdquo; button. ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:3","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-3-create-a-firewall-rule"},{"categories":["Development"],"collections":null,"content":"Step 4: Block Traffic by Country Give your rule a descriptive name, like \u0026ldquo;Block by Country.\u0026rdquo; Under \u0026ldquo;If,\u0026rdquo; choose the field \u0026ldquo;Country,\u0026rdquo; select the \u0026ldquo;is in\u0026rdquo; operator, and then select the countries you want to block. You can add multiple countries if needed. Under \u0026ldquo;Then,\u0026rdquo; select \u0026ldquo;Block.\u0026rdquo; ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:4","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-4-block-traffic-by-country"},{"categories":["Development"],"collections":null,"content":"Step 5: Add an Allow Rule for Uptime Monitor IPs To allow access to your uptime monitor IPs, create another firewall rule. 
Give this rule a name like \u0026ldquo;Allow Uptime Monitor.\u0026rdquo; Under \u0026ldquo;If,\u0026rdquo; choose the condition \u0026ldquo;IP Address\u0026rdquo; and specify the IP addresses of your uptime monitor service. You may need to check with your uptime monitor provider for the list of IPs they use. Under \u0026ldquo;Then,\u0026rdquo; select \u0026ldquo;Allow.\u0026rdquo; ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:5","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-5-add-an-allow-rule-for-uptime-monitor-ips"},{"categories":["Development"],"collections":null,"content":"Step 6: Order Your Rules Order your rules so that the \u0026ldquo;Allow Uptime Monitor\u0026rdquo; rule is higher in priority than the \u0026ldquo;Block by Country\u0026rdquo; rule. Rules are evaluated from top to bottom, so this ensures that the uptime monitor IPs are allowed before checking for country blocking. ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:6","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-6-order-your-rules"},{"categories":["Development"],"collections":null,"content":"Step 7: Save and Deploy Click \u0026ldquo;Save and Deploy\u0026rdquo; to activate your Firewall Rules. ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:1:7","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-7-save-and-deploy"},{"categories":["Development"],"collections":null,"content":"Apache Configuration If you\u0026rsquo;re using an Apache web server, you can add an extra layer of protection. 
","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:2:0","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#apache-configuration"},{"categories":["Development"],"collections":null,"content":"Step 1: Access Apache Configuration SSH into your server. Navigate to the Apache configuration directory. On many Linux distributions, it\u0026rsquo;s located at /etc/apache2/ or /etc/httpd/. ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:2:1","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-1-access-apache-configuration"},{"categories":["Development"],"collections":null,"content":"Step 2: Edit the Apache Configuration File Open the Apache configuration file for your site, usually located in the /sites-available/ directory. Inside the \u0026lt;VirtualHost\u0026gt; section for your site, add the following lines to allow access to your uptime monitor IPs. Replace x.x.x.x with the actual IP addresses: \u0026lt;LocationMatch \u0026#34;/\u0026#34;\u0026gt; Require ip x.x.x.x x.x.x.x \u0026lt;/LocationMatch\u0026gt; ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:2:2","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-2-edit-the-apache-configuration-file"},{"categories":["Development"],"collections":null,"content":"Step 3: Block Traffic by Country To block traffic from specific countries, you can use the Apache mod_geoip module if it\u0026rsquo;s installed. If not, you can use mod_rewrite as an alternative. 
Using mod_geoip (if installed): GeoIPEnable On GeoIPDBFile /path/to/GeoIP.dat SetEnvIf GEOIP_COUNTRY_CODE2 CN BlockCountry SetEnvIf GEOIP_COUNTRY_CODE2 RU BlockCountry Order Deny,Allow Deny from env=BlockCountry Using mod_rewrite (alternative method, requires the IP2Location Apache module to set the country environment variable): RewriteEngine On RewriteCond %{ENV:IP2LOCATION_COUNTRY_SHORT} ^(CN|RU)$ RewriteRule ^ - [F] ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:2:3","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-3-block-traffic-by-country"},{"categories":["Development"],"collections":null,"content":"Step 4: Save and Restart Apache Save the Apache configuration file and exit the editor. Restart Apache to apply the changes: sudo systemctl restart apache2 # On Ubuntu/Debian sudo systemctl restart httpd # On CentOS/RHEL These steps will block traffic from specified countries while allowing access to your uptime monitor IPs both at the Cloudflare and Apache levels, ensuring your website remains secure and available. ","date":"03-06-2020","objectID":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/:2:4","tags":null,"title":"How to Block IP By Country and Allow Uptime Monitor","uri":"/posts/development/how-to-block-ip-by-country-and-allow-uptime-monitor/#step-4-save-and-restart-apache"},{"categories":["Development"],"collections":null,"content":"To connect to a VPN via the shell on a Mac, the networksetup command is often suggested. However, its ppp service options are designed for PPPoE connections, not VPNs, so connecting to a VPN requires a different command. 
Below are the steps to connect to a VPN via the command line on macOS using the built-in scutil tool: List Available VPN Configurations: Start by listing the VPN configurations on your Mac using scutil with the --nc (network connection) option: scutil --nc list Look for the name of your VPN service in the list. Connect to the VPN: To connect to your VPN service, use scutil --nc start followed by the name of your VPN service. Replace \u0026quot;Your VPN Service Name\u0026quot; with the actual name of your VPN service. scutil --nc start \u0026#34;Your VPN Service Name\u0026#34; For example, if your VPN service is named \u0026ldquo;MyVPN,\u0026rdquo; the command would be: scutil --nc start \u0026#34;MyVPN\u0026#34; You can check the connection state at any time with scutil --nc status \u0026#34;MyVPN\u0026#34;. Disconnect from the VPN: To disconnect from the VPN, use scutil --nc stop followed by the name of your VPN service. scutil --nc stop \u0026#34;Your VPN Service Name\u0026#34; For example, if your VPN service is named \u0026ldquo;MyVPN,\u0026rdquo; the command would be: scutil --nc stop \u0026#34;MyVPN\u0026#34; This will disconnect your Mac from the VPN. Make sure to replace \u0026quot;Your VPN Service Name\u0026quot; with the actual name of your VPN configuration as listed in step 1. The exact names may vary depending on the VPN client and configuration you are using. 
","date":"11-05-2020","objectID":"/posts/development/connect-vpn-via-shell-mac/:0:0","tags":null,"title":"Connect VPN via shell Mac","uri":"/posts/development/connect-vpn-via-shell-mac/#"},{"categories":["Productivity"],"collections":null,"content":"As we navigate the world of online communication, it\u0026rsquo;s essential to maintain a professional yet personal email address that separates our work and personal lives. In this article, we\u0026rsquo;ll explore the significance of having a business personal email address and discuss some best practices for choosing one. ","date":"29-04-2020","objectID":"/posts/productivity/choosing-perfect-email-address/:0:0","tags":null,"title":"Choosing Perfect Email Address","uri":"/posts/productivity/choosing-perfect-email-address/#"},{"categories":["Productivity"],"collections":null,"content":"Why Have a Business Personal Email Address? Having a dedicated email address for your business or profession is crucial for several reasons: Separation of Work and Personal Life: A separate email address helps you keep your work and personal life separate, which is essential for maintaining a healthy work-life balance. Professional Image: A professional email address reflects positively on your business or career, giving the impression that you\u0026rsquo;re organized and serious about your profession. Organization and Management: With multiple email addresses to manage, having a dedicated email address for your business or profession makes it easier to keep track of important communications. 
","date":"29-04-2020","objectID":"/posts/productivity/choosing-perfect-email-address/:1:0","tags":null,"title":"Choosing Perfect Email Address","uri":"/posts/productivity/choosing-perfect-email-address/#why-have-a-business-personal-email-address"},{"categories":["Productivity"],"collections":null,"content":"Choosing the Right Business Personal Email Address When selecting an email address for your business or profession, consider the following tips: Use Your Name: If possible, use your name as the basis for your email address (e.g., johnsmiths@gmail.com). Add a Number or Modifier: If your name is common or already taken, add a number or modifier to make it unique (e.g., jblogs@gmail.com or jbconsulting@gmail.com). Avoid Unprofessional Names: Avoid using unprofessional names, such as \u0026ldquo;jumpingjoe\u0026rdquo;. Consider Using Your Domain Name: If you have your own domain name, consider using it for your email address (e.g., yourname@yourdomain.com). ","date":"29-04-2020","objectID":"/posts/productivity/choosing-perfect-email-address/:2:0","tags":null,"title":"Choosing Perfect Email Address","uri":"/posts/productivity/choosing-perfect-email-address/#choosing-the-right-business-personal-email-address"},{"categories":["Productivity"],"collections":null,"content":"Conclusion Having a business personal email address is crucial for maintaining a professional online presence. By choosing an email address that reflects your name or profession, you\u0026rsquo;ll be better equipped to manage your work-life balance and maintain a positive image in the digital world. ","date":"29-04-2020","objectID":"/posts/productivity/choosing-perfect-email-address/:3:0","tags":null,"title":"Choosing Perfect Email Address","uri":"/posts/productivity/choosing-perfect-email-address/#conclusion"},{"categories":["Development"],"collections":null,"content":"Over time, Xcode simulators can accumulate unused or outdated files, taking up valuable disk space on your machine. 
One effective way to free up space is by removing unavailable simulators using the xcrun simctl command-line tool. ","date":"23-04-2020","objectID":"/posts/development/clearing-old-simulator-files-in-xcode/:0:0","tags":null,"title":"Clearing Old Simulator Files in Xcode","uri":"/posts/development/clearing-old-simulator-files-in-xcode/#"},{"categories":["Development"],"collections":null,"content":"Using xcrun simctl to Delete Unavailable Simulators To remove unavailable simulators and their associated files, follow these steps: Open Terminal: Launch the Terminal app on your Mac. You can find it in the Applications \u0026gt; Utilities folder or by searching for \u0026ldquo;Terminal\u0026rdquo; using Spotlight. Access xcrun simctl: xcrun simctl is a command-line tool that lets you interact with iOS, watchOS, and tvOS simulators. Use it to manage simulators and their content. List Unavailable Simulators: Before deleting anything, it\u0026rsquo;s a good idea to list the unavailable simulators to confirm what will be removed. Enter the following command: xcrun simctl list devices | grep unavailable This command will display a list of unavailable simulators along with their UDIDs. Delete Unavailable Simulators: To delete these unavailable simulators, you will need to use their UDIDs. Use the following command to delete each unavailable simulator: xcrun simctl delete \u0026lt;UDID\u0026gt; Replace \u0026lt;UDID\u0026gt; with the UDID of the simulator you want to delete. Repeat this command for each simulator you wish to remove. Alternatively, you can delete all unavailable simulators in one step with xcrun simctl delete unavailable. Confirm Deletion: Once you execute the delete command, Terminal will not provide a confirmation message. The simulator will be removed immediately. Verify Space Freed: After deleting the unavailable simulators, you can check your disk space to verify that the old simulator files have been successfully cleared. 
","date":"23-04-2020","objectID":"/posts/development/clearing-old-simulator-files-in-xcode/:1:0","tags":null,"title":"Clearing Old Simulator Files in Xcode","uri":"/posts/development/clearing-old-simulator-files-in-xcode/#using-xcrun-simctl-to-delete-unavailable-simulators"},{"categories":["Development"],"collections":null,"content":"Conclusion Regularly cleaning up your Xcode simulators can help keep your development environment efficient and organized. By using the xcrun simctl command-line tool, you can easily identify and remove unavailable simulators, freeing up space on your machine. This can lead to improved performance and a smoother development experience. ","date":"23-04-2020","objectID":"/posts/development/clearing-old-simulator-files-in-xcode/:2:0","tags":null,"title":"Clearing Old Simulator Files in Xcode","uri":"/posts/development/clearing-old-simulator-files-in-xcode/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you are encountering issues while trying to install Ruby via rbenv and see errors related to \u0026ldquo;cctools,\u0026rdquo; this article will guide you through the troubleshooting process to fix the problem. The error message you might encounter is: /usr/local/Cellar/cctools/855/bin/ranlib: object: apps/libapps.a(app_rand.o) malformed object (unknown load command 1) ar: internal ranlib command failed make[1]: *** [apps/libapps.a] Error 1 make: *** [all] Error 2 The \u0026ldquo;cctools\u0026rdquo; package could potentially conflict with the Ruby installation process, causing this error. To resolve the issue, we will walk through the steps to uninstall the \u0026ldquo;cctools\u0026rdquo; package and proceed with installing Ruby via rbenv. 
","date":"09-04-2020","objectID":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/:0:0","tags":["ruby","mac"],"title":"Troubleshooting Unable to Install Ruby via rbenv Fixing Errors with cctools","uri":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Uninstall cctools using Homebrew First, we will uninstall the \u0026ldquo;cctools\u0026rdquo; package using Homebrew. Open your terminal and follow these steps: Launch the terminal application on your system. Enter the following command to uninstall cctools: brew uninstall cctools Homebrew will remove the cctools package from your system. ","date":"09-04-2020","objectID":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/:1:0","tags":["ruby","mac"],"title":"Troubleshooting Unable to Install Ruby via rbenv Fixing Errors with cctools","uri":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/#step-1-uninstall-cctools-using-homebrew"},{"categories":["Development"],"collections":null,"content":"Step 2: Install Ruby via rbenv Now that the conflicting \u0026ldquo;cctools\u0026rdquo; package has been removed, you can proceed with installing Ruby via rbenv: Open your terminal if it\u0026rsquo;s not already open. Install rbenv if you haven\u0026rsquo;t already. You can use Homebrew for this as well: brew install rbenv Once rbenv is installed, you\u0026rsquo;ll need to initialize it and add it to your shell configuration (e.g., .bashrc, .bash_profile, .zshrc). Run the following command: rbenv init Follow the instructions from the output to add the necessary configuration to your shell profile. Close and reopen your terminal to ensure the changes to your shell configuration take effect. Now, you can install the desired version of Ruby using rbenv. 
For example, to install Ruby 3.0.0, use: rbenv install 3.0.0 Once the installation is complete, set the global Ruby version: rbenv global 3.0.0 Verify that Ruby is installed correctly: ruby -v ","date":"09-04-2020","objectID":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/:2:0","tags":["ruby","mac"],"title":"Troubleshooting Unable to Install Ruby via rbenv Fixing Errors with cctools","uri":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/#step-2-install-ruby-via-rbenv"},{"categories":["Development"],"collections":null,"content":"Conclusion You should now have Ruby installed successfully on your system using rbenv, and the error related to \u0026ldquo;cctools\u0026rdquo; should be resolved. rbenv allows you to manage multiple Ruby versions on your machine easily, making it a convenient tool for developers working on different projects with varying Ruby requirements. ","date":"09-04-2020","objectID":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/:3:0","tags":["ruby","mac"],"title":"Troubleshooting Unable to Install Ruby via rbenv Fixing Errors with cctools","uri":"/posts/development/troubleshooting-unable-to-install-ruby-via-rbenv-fixing-errors-with-cctools/#conclusion"},{"categories":["Development"],"collections":null,"content":"Google Chrome is one of the most popular web browsers, and you may want to install or upgrade it on your Ubuntu Server. However, since Ubuntu Server doesn\u0026rsquo;t have a graphical user interface (GUI), you\u0026rsquo;ll need to install Chrome via the command line. This guide will walk you through the process of installing Google Chrome and keeping it up to date on your Ubuntu Server. 
","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:0:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure that you have the following: An Ubuntu Server: You should have a running Ubuntu Server with terminal access. Sudo Privileges: You should have sudo privileges or be logged in as the root user. ","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:1:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Update Package Lists It\u0026rsquo;s a good practice to start by updating the package lists to ensure you have the latest information about available packages. Open your terminal and run: sudo apt update ","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:2:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#step-1-update-package-lists"},{"categories":["Development"],"collections":null,"content":"Step 2: Install Dependencies Google Chrome requires some dependencies to be installed on your system. 
You can install these dependencies by running: sudo apt install wget curl unzip -y ","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:3:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#step-2-install-dependencies"},{"categories":["Development"],"collections":null,"content":"Step 3: Download Google Chrome Now, you\u0026rsquo;ll need to download the Google Chrome package for Ubuntu. You can use the wget command to do this. Replace the URL below with the appropriate one for your architecture (32-bit or 64-bit): For 64-bit systems: wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb For 32-bit systems (note that 32-bit Chrome is deprecated by Google and may not be available in the future): wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.deb ","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:4:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#step-3-download-google-chrome"},{"categories":["Development"],"collections":null,"content":"Step 4: Install Google Chrome After downloading the package, you can install Google Chrome using the dpkg command: sudo dpkg -i google-chrome-stable_current_*.deb If you encounter any dependency issues, you can run: sudo apt install -f This command will automatically install any missing dependencies. 
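Steps 3 and 4 can be combined into one small script that picks the download URL for the machine's architecture before installing. This is a sketch, not an official installer: the helper name chrome_deb_url is illustrative, and the download/install lines are left commented out as a dry run.

```shell
# Sketch: choose the matching stable-channel .deb, then (optionally)
# download and install it. Only the URL selection runs by default.
chrome_deb_url() {
  case "$1" in
    amd64) echo "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb" ;;
    i386)  echo "https://dl.google.com/linux/direct/google-chrome-stable_current_i386.deb" ;;
    *)     return 1 ;;  # no stable-channel .deb for other architectures
  esac
}

arch=$(dpkg --print-architecture 2>/dev/null || echo amd64)
if url=$(chrome_deb_url "$arch"); then
  echo "would download: $url"
  # Uncomment to actually install:
  # wget -q "$url" -O /tmp/chrome.deb
  # sudo dpkg -i /tmp/chrome.deb || sudo apt install -f -y
else
  echo "no stable-channel .deb published for architecture: $arch" >&2
fi
```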
","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:5:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#step-4-install-google-chrome"},{"categories":["Development"],"collections":null,"content":"Step 5: Verify Installation To verify that Google Chrome is installed, simply run: google-chrome --version This should display the installed Chrome version. ","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:6:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#step-5-verify-installation"},{"categories":["Development"],"collections":null,"content":"Step 6: Upgrade Google Chrome (Optional) The .deb package registers Google\u0026rsquo;s apt repository, so upgrades arrive through the normal package manager: sudo apt update sudo apt install --only-upgrade google-chrome-stable ","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:7:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#step-6-upgrade-google-chrome-optional"},{"categories":["Development"],"collections":null,"content":"Step 7: Start Google Chrome Since you\u0026rsquo;re using a headless server, you can use Google Chrome in headless mode for tasks such as web scraping or automated testing. 
You can launch Google Chrome in headless mode with the following command: google-chrome --headless --disable-gpu --no-sandbox https://example.com Replace https://example.com with the URL you want to open in headless mode. That\u0026rsquo;s it! You\u0026rsquo;ve successfully installed and optionally upgraded Google Chrome on your Ubuntu Server via the terminal. Remember to keep Chrome up to date regularly to benefit from security updates and new features. You can automate this process with a cron job if needed. ","date":"08-04-2020","objectID":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/:8:0","tags":null,"title":"How to Install and Upgrade Google Chrome Browser on Ubuntu Server via Terminal","uri":"/posts/development/how-to-install-and-upgrade-google-chrome-browser-on-ubuntu-server-via-terminal/#step-7-start-google-chrome"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re encountering an error related to ffi while installing CocoaPods on macOS Catalina, you can follow these steps to resolve the issue. 
","date":"07-04-2020","objectID":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/:0:0","tags":["mac"],"title":"Troubleshooting CocoaPods Installation Error on macOS Catalina","uri":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 1: Install FFI with a specific version sudo gem install ffi -v \u0026#39;1.10.0\u0026#39; ","date":"07-04-2020","objectID":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/:1:0","tags":["mac"],"title":"Troubleshooting CocoaPods Installation Error on macOS Catalina","uri":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/#step-1-install-ffi-with-a-specific-version"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 2: Install CocoaPods sudo gem install cocoapods -n /usr/local/bin ","date":"07-04-2020","objectID":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/:2:0","tags":["mac"],"title":"Troubleshooting CocoaPods Installation Error on macOS Catalina","uri":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/#step-2-install-cocoapods"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 3: Install CocoaPods Binary sudo gem install cocoapods-binary If you\u0026rsquo;re still facing issues after following the above steps, you can try uninstalling all gems and then reinstalling CocoaPods. 
Here\u0026rsquo;s how you can do it: ","date":"07-04-2020","objectID":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/:3:0","tags":["mac"],"title":"Troubleshooting CocoaPods Installation Error on macOS Catalina","uri":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/#step-3-install-cocoapods-binary"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 4: Uninstall all gems sudo gem list --no-version | xargs -L 1 sudo gem uninstall -n /usr/local/bin -ax After running the above command, you\u0026rsquo;ll be prompted to uninstall each gem individually. Press y and hit Enter for each prompt. Once you have uninstalled all gems, proceed to reinstall CocoaPods using Step 2 mentioned earlier. This should help resolve the ffi error and allow you to successfully install and use CocoaPods on macOS Catalina. Note: Make sure you have Xcode and its command-line tools installed on your system before proceeding with the above steps. 
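Because the bulk-uninstall pipeline is destructive, it can help to preview what it would do first. A sketch of such a dry run, with a made-up helper name preview_gem_uninstalls that prints each uninstall command instead of executing it:

```shell
# Sketch: print one `gem uninstall` command per gem name, without
# running anything. Pipe `gem list --no-version` into it for a preview.
preview_gem_uninstalls() {
  # reads gem names on stdin, one per line
  xargs -L 1 echo sudo gem uninstall -n /usr/local/bin -ax
}

if command -v gem >/dev/null 2>&1; then
  gem list --no-version | preview_gem_uninstalls
fi
```

Once the preview looks right, replacing `echo sudo gem uninstall` with `sudo gem uninstall` runs it for real.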
","date":"07-04-2020","objectID":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/:4:0","tags":["mac"],"title":"Troubleshooting CocoaPods Installation Error on macOS Catalina","uri":"/posts/development/troubleshooting-cocoapods-installation-error-on-macos-catalina/#step-4-uninstall-all-gems"},{"categories":["Development"],"collections":null,"content":"If you encounter an error related to \u0026ldquo;ffi\u0026rdquo; when trying to install CocoaPods on macOS Catalina, follow these steps to resolve the issue: Install ffi version 1.10.0: sudo gem install ffi -v \u0026#39;1.10.0\u0026#39; Install CocoaPods: sudo gem install cocoapods -n /usr/local/bin Install CocoaPods-binary: sudo gem install cocoapods-binary If you still face issues after following the above steps, it\u0026rsquo;s recommended to uninstall all gems and then reinstall them: sudo gem list --no-version | xargs -L 1 sudo gem uninstall -n /usr/local/bin -ax Note: The command above will uninstall all gems from the default gem path. If you have gems installed in other locations, you may need to handle those manually. After performing the steps above, you should be able to install and use CocoaPods without encountering the \u0026ldquo;ffi\u0026rdquo; error on macOS Catalina. ","date":"07-04-2020","objectID":"/posts/development/fixing-ffi-error-when-installing-cocoapods-on-macos-catalina/:0:0","tags":["mac"],"title":"Fixing ffi error when installing CocoaPods on macOS Catalina","uri":"/posts/development/fixing-ffi-error-when-installing-cocoapods-on-macos-catalina/#"},{"categories":["Software"],"collections":null,"content":"If you\u0026rsquo;re facing the issue where Disk Cleanup Pro consistently removes your iCloud profile picture and you want to prevent this from happening, here\u0026rsquo;s a solution that involves excluding specific caches from being deleted. 
","date":"26-03-2020","objectID":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/:0:0","tags":["mac"],"title":"Disk Cleanup Pro Keeps Removing iCloud Profile Picture","uri":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/#"},{"categories":["Software"],"collections":null,"content":"Problem Description Disk Cleanup Pro is a utility program designed to help users free up disk space on their computers by removing unnecessary files and caches. However, it appears to be deleting the iCloud profile picture, causing inconvenience for users who wish to keep their profile picture intact. ","date":"26-03-2020","objectID":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/:1:0","tags":["mac"],"title":"Disk Cleanup Pro Keeps Removing iCloud Profile Picture","uri":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/#problem-description"},{"categories":["Software"],"collections":null,"content":"Solution To prevent Disk Cleanup Pro from deleting the iCloud profile picture, we can add an exception so that the cache associated with the macOS System Preferences is not deleted. ","date":"26-03-2020","objectID":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/:2:0","tags":["mac"],"title":"Disk Cleanup Pro Keeps Removing iCloud Profile Picture","uri":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/#solution"},{"categories":["Software"],"collections":null,"content":"Step-by-Step Guide Identify the Cache Directory First, we need to locate the cache directory that contains the iCloud profile picture cache. In this case, the cache responsible for storing macOS System Preferences data is located at: ~/Library/Caches/com.apple.systempreferences Exclude the Cache Directory Now, we\u0026rsquo;ll configure Disk Cleanup Pro to exclude the specified cache directory so that it won\u0026rsquo;t be deleted during the cleanup process. 
Launch Disk Cleanup Pro on your Mac. Navigate to the settings or preferences section of the application. Look for an option that allows you to exclude specific folders or directories from being cleaned up. Add the Exclusion Click on the \u0026ldquo;Add\u0026rdquo; or \u0026ldquo;Exclude\u0026rdquo; button (the wording might vary depending on the app\u0026rsquo;s interface). Browse to the following location: ~/Library/Caches/com.apple.systempreferences Select the cache directory mentioned above and add it to the exclusion list. Save Settings and Confirm Save the settings to apply the exclusion. Confirm that the specified cache directory is now excluded from the cleanup process. ","date":"26-03-2020","objectID":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/:3:0","tags":["mac"],"title":"Disk Cleanup Pro Keeps Removing iCloud Profile Picture","uri":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/#step-by-step-guide"},{"categories":["Software"],"collections":null,"content":"Conclusion By adding the iCloud profile picture cache directory to the exclusion list in Disk Cleanup Pro, you should be able to retain your profile picture without it being removed during cleanup operations. Please note that the steps mentioned here are based on the assumption that Disk Cleanup Pro provides an option to exclude specific directories from cleanup. If the application does not offer this functionality, you may want to consider an alternative cleanup utility or contact the developer\u0026rsquo;s support for assistance. Always exercise caution when configuring cleanup tools to avoid unintended data loss or system issues. Make sure to back up important files before making any changes to the system. 
","date":"26-03-2020","objectID":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/:4:0","tags":["mac"],"title":"Disk Cleanup Pro Keeps Removing iCloud Profile Picture","uri":"/posts/software/disk-cleanup-pro-keeps-removing-icloud-profile-picture/#conclusion"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re encountering an issue with an incorrect comma on the decimal in GNUCash for Mac Big Sur, don\u0026rsquo;t worry! Here\u0026rsquo;s a step-by-step guide to help you address this problem. ","date":"18-03-2020","objectID":"/posts/software/incorrect-comma-on-decimal-in-gnucash-for-mac-big-sur/:0:0","tags":["mac","gnucash"],"title":"Incorrect comma on decimal in GNUCash for Mac Big Sur","uri":"/posts/software/incorrect-comma-on-decimal-in-gnucash-for-mac-big-sur/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Method 1: Terminal Commands Open the Terminal application on your Mac. You can find it in the \u0026ldquo;Utilities\u0026rdquo; folder within the \u0026ldquo;Applications\u0026rdquo; folder. Run the following command in the Terminal: defaults write -app Gnucash AppleLocale \u0026#39;en_DE@currency=IDR\u0026#39; This command sets the AppleLocale for GNUCash to \u0026rsquo;en_DE\u0026rsquo; with the currency as \u0026lsquo;IDR\u0026rsquo;. To delete the AppleLocale setting, use the following command: defaults delete -app Gnucash AppleLocale Please note that the above steps might not work on Big Sur. 
In that case, you can try the following alternative method: ","date":"18-03-2020","objectID":"/posts/software/incorrect-comma-on-decimal-in-gnucash-for-mac-big-sur/:1:0","tags":["mac","gnucash"],"title":"Incorrect comma on decimal in GNUCash for Mac Big Sur","uri":"/posts/software/incorrect-comma-on-decimal-in-gnucash-for-mac-big-sur/#method-1-terminal-commands"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Method 2: Recovery Mode Shutdown your Mac. Hold the CMD+R keys and press the Power button to start your Mac. Continue holding CMD+R until you see the Apple logo. Login using your credentials. Go to \u0026ldquo;Utilities\u0026rdquo; in the top menu bar and select \u0026ldquo;Terminal.\u0026rdquo; In the Terminal, enter the following commands one by one: csrutil disable reboot After rebooting, login to your Mac. Open Terminal again. Enter the following commands (cd is a shell built-in, so it does not take sudo, but the copy and remove commands inside the system directory do need it): sudo mount -uw / cd /usr/share/locale sudo cp -R en_US en_ID cd en_ID sudo rm LC_MONETARY sudo cp ../en_US.ISO8859-1/LC_MONETARY ./ sudo vim LC_MONETARY In the Vim editor, swap commas (,) and periods (.) using find and replace. Note that running :%s/,/\\./g followed by :%s/\\./,/g would turn every separator into a comma, so go through a temporary placeholder character instead: :%s/,/@/g :%s/\\./,/g :%s/@/./g Save the changes and exit Vim by typing: :wq Start GNUCash, and the decimal commas should now be corrected. Feel free to use this information as a guide to fix the incorrect decimal comma issue in GNUCash on Mac Big Sur. 
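As an aside, the same comma/period swap can be done in one pass from the shell, because tr applies both character mappings simultaneously and so needs no placeholder:

```shell
# Swap decimal separators in one pass: ',' -> '.' and '.' -> ',' at once.
echo "1.234.567,89" | tr ',.' '.,'   # prints 1,234,567.89
```

Applied to the locale file, something like `tr ',.' '.,' < LC_MONETARY > LC_MONETARY.new` (then moving the new file into place with sudo) avoids hand-editing in Vim entirely.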
","date":"18-03-2020","objectID":"/posts/software/incorrect-comma-on-decimal-in-gnucash-for-mac-big-sur/:2:0","tags":["mac","gnucash"],"title":"Incorrect comma on decimal in GNUCash for Mac Big Sur","uri":"/posts/software/incorrect-comma-on-decimal-in-gnucash-for-mac-big-sur/#method-2-recovery-mode"},{"categories":["Development"],"collections":null,"content":"Time Machine is a powerful backup tool on macOS that automatically creates local snapshots of your data in addition to regular backups on an external drive. These local snapshots can be helpful, but they can also take up valuable disk space. If you need to delete a local Time Machine snapshot on your Mac, follow these steps. ","date":"04-03-2020","objectID":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/:0:0","tags":null,"title":"How to Delete a Local Time Machine Snapshot on macOS","uri":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Open Terminal To delete a local snapshot, you\u0026rsquo;ll need to use Terminal, which is a command-line interface on macOS. You can find Terminal in the \u0026ldquo;Utilities\u0026rdquo; folder within the \u0026ldquo;Applications\u0026rdquo; folder. ","date":"04-03-2020","objectID":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/:1:0","tags":null,"title":"How to Delete a Local Time Machine Snapshot on macOS","uri":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/#step-1-open-terminal"},{"categories":["Development"],"collections":null,"content":"Step 2: List Local Snapshots Before you delete a local snapshot, it\u0026rsquo;s a good idea to list the available snapshots to ensure you\u0026rsquo;re targeting the correct one. In Terminal, enter the following command: tmutil listlocalsnapshots / This command will display a list of local snapshots along with their dates and times. 
","date":"04-03-2020","objectID":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/:2:0","tags":null,"title":"How to Delete a Local Time Machine Snapshot on macOS","uri":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/#step-2-list-local-snapshots"},{"categories":["Development"],"collections":null,"content":"Step 3: Delete the Local Snapshot Once you\u0026rsquo;ve identified the local snapshot you want to delete, use the following command to delete it (replace YYYY-MM-DD-HHMMSS with the actual date and time of the snapshot you want to remove): sudo tmutil deletelocalsnapshots YYYY-MM-DD-HHMMSS For example, if you want to delete a snapshot created on March 1, 2018, at 00:20:10, you would enter: sudo tmutil deletelocalsnapshots 2018-03-01-002010 You\u0026rsquo;ll be prompted to enter your administrator password because the sudo command requires elevated privileges. ","date":"04-03-2020","objectID":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/:3:0","tags":null,"title":"How to Delete a Local Time Machine Snapshot on macOS","uri":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/#step-3-delete-the-local-snapshot"},{"categories":["Development"],"collections":null,"content":"Step 4: Verify Deletion After entering your password, Terminal will process the deletion of the snapshot. Once it\u0026rsquo;s complete, you can use the tmutil listlocalsnapshots / command again to confirm that the snapshot has been successfully deleted. That\u0026rsquo;s it! You\u0026rsquo;ve successfully deleted a local Time Machine snapshot on your Mac, freeing up disk space for other purposes. Remember that local snapshots are typically automatically managed by macOS, so you may not need to delete them manually unless you have specific reasons to do so. 
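The delete step can also be scripted for every local snapshot at once. A minimal sketch, assuming the usual `com.apple.TimeMachine.YYYY-MM-DD-HHMMSS` naming; snapshot_dates is a hypothetical helper, and the echo keeps it a dry run:

```shell
# Sketch: extract the date stamp from each local snapshot name and print
# the matching delete command (dry run).
snapshot_dates() {
  # reads `tmutil listlocalsnapshots` output on stdin; prints the
  # YYYY-MM-DD-HHMMSS portion of each snapshot name
  grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{6}'
}

if command -v tmutil >/dev/null 2>&1; then
  tmutil listlocalsnapshots / | snapshot_dates | while read -r d; do
    echo "sudo tmutil deletelocalsnapshots $d"   # drop the echo to actually delete
  done
fi
```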
","date":"04-03-2020","objectID":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/:4:0","tags":null,"title":"How to Delete a Local Time Machine Snapshot on macOS","uri":"/posts/development/how-to-delete-a-local-time-machine-snapshot-on-macos/#step-4-verify-deletion"},{"categories":["DevOps"],"collections":null,"content":"When using Ubuntu, it\u0026rsquo;s important to keep your system clean and organized by removing orphaned packages and unwanted dependencies. One useful tool for managing packages in Ubuntu is debfoster. In this article, we\u0026rsquo;ll explore how to use debfoster to keep only essential packages, remove unwanted dependencies, and maintain a tidy Ubuntu installation. ","date":"12-02-2020","objectID":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/:0:0","tags":["linux"],"title":"Managing Essential Packages in Ubuntu with debfoster","uri":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/#"},{"categories":["DevOps"],"collections":null,"content":"Installation By default, debfoster may not be installed on your Ubuntu system. To install it, open a terminal and run the following command: sudo apt-get install debfoster Once installed, you\u0026rsquo;re ready to start using debfoster to manage your packages. ","date":"12-02-2020","objectID":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/:1:0","tags":["linux"],"title":"Managing Essential Packages in Ubuntu with debfoster","uri":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/#installation"},{"categories":["DevOps"],"collections":null,"content":"Creating a List of Essential Packages Before removing any packages, it\u0026rsquo;s a good idea to create a list of essential packages that you want to keep. debfoster provides an easy way to generate this list. 
Open a terminal and run the following command: debfoster -q This command generates a list of packages that are considered essential based on their dependencies and usage on your system. ","date":"12-02-2020","objectID":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/:2:0","tags":["linux"],"title":"Managing Essential Packages in Ubuntu with debfoster","uri":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/#creating-a-list-of-essential-packages"},{"categories":["DevOps"],"collections":null,"content":"Viewing the Essential Packages List To view the list of essential packages generated by debfoster, you have two options: Option 1: Using debfoster\u0026rsquo;s -a flag debfoster -a This command will display the list of essential packages in the terminal. Option 2: Accessing the file directly cat /var/lib/debfoster/keepers This command displays the content of the file that stores the list of essential packages generated by debfoster. ","date":"12-02-2020","objectID":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/:3:0","tags":["linux"],"title":"Managing Essential Packages in Ubuntu with debfoster","uri":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/#viewing-the-essential-packages-list"},{"categories":["DevOps"],"collections":null,"content":"Identifying Orphaned Packages Orphaned packages are packages that are no longer needed by any other package on your system. These packages can take up disk space and clutter your system. To identify orphaned packages, you can use the following command: debfoster -s This command lists all the orphaned packages on your system. 
","date":"12-02-2020","objectID":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/:4:0","tags":["linux"],"title":"Managing Essential Packages in Ubuntu with debfoster","uri":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/#identifying-orphaned-packages"},{"categories":["DevOps"],"collections":null,"content":"Forcefully Removing Unwanted Packages Once you have identified the orphaned packages or unwanted dependencies, you can use debfoster to force-remove them from your system. The -f flag is used for this purpose. Run the following command to remove the unwanted packages: debfoster -f This command will remove the packages that are not on the essential list generated by debfoster. ","date":"12-02-2020","objectID":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/:5:0","tags":["linux"],"title":"Managing Essential Packages in Ubuntu with debfoster","uri":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/#forcefully-removing-unwanted-packages"},{"categories":["DevOps"],"collections":null,"content":"Regular Maintenance with debfoster To ensure your system stays clean and organized, it\u0026rsquo;s a good practice to regularly run debfoster. By running the following command, debfoster will check for any orphaned packages or unwanted dependencies and prompt you for removal: debfoster This command provides a convenient way to keep your system free from unnecessary packages and maintain optimal performance. ","date":"12-02-2020","objectID":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/:6:0","tags":["linux"],"title":"Managing Essential Packages in Ubuntu with debfoster","uri":"/posts/devops/managing-essential-packages-in-ubuntu-with-debfoster/#regular-maintenance-with-debfoster"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"When installing Appium on Ubuntu using the root user, you may encounter a permission error. 
However, this issue can be resolved by following a few steps. This article will guide you through the process of installing Appium successfully on Ubuntu using the root user. ","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:0:0","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Prerequisites Before proceeding with the installation, make sure you have the following prerequisites: Ubuntu operating system. Node.js and npm (Node Package Manager) installed. Root access or sudo privileges. ","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:1:0","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#prerequisites"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Installation Steps To install Appium on Ubuntu using the root user, follow the steps below: ","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:2:0","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#installation-steps"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 1: Update System Packages Start by updating the system packages to ensure you have the latest versions. 
Open a terminal and execute the following command: sudo apt update ","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:2:1","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#step-1-update-system-packages"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 2: Install Appium and Appium Doctor To install Appium and Appium Doctor, you can use npm (Node Package Manager). Run the following command to install both packages globally: sudo npm install -g appium appium-doctor --unsafe-perm=true --allow-root Note: The --unsafe-perm=true and --allow-root flags are necessary to bypass permission issues when installing with the root user. ","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:2:2","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#step-2-install-appium-and-appium-doctor"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 3: Verify Appium Installation After the installation is complete, you can verify if Appium is successfully installed by running the following command: appium-doctor This command will perform a series of checks to ensure that all the required dependencies for Appium are properly installed. If there are any missing dependencies or configuration issues, appium-doctor will provide guidance on how to resolve them. 
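The three steps above can be collected into one place for review. This sketch is a dry run: it only prints each command rather than executing it, so you can inspect the flags (which mirror those discussed above) before running them for real:

```shell
# Dry run: print each install step instead of executing it.
NPM_FLAGS="--unsafe-perm=true --allow-root"   # bypasses permission issues when installing as root

for step in \
  "sudo apt update" \
  "sudo npm install -g appium appium-doctor $NPM_FLAGS" \
  "appium-doctor"
do
  echo "$step"
done
```

Remove the echo indirection (run the commands directly) once you are satisfied with the sequence.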
","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:2:3","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#step-3-verify-appium-installation"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 4: Configure Appium Once Appium is installed, you may need to configure it according to your specific requirements. This involves setting up device emulators, connecting physical devices, or configuring the desired capabilities for automation. Please refer to the Appium documentation for detailed instructions on how to configure and use Appium for your specific use case. ","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:2:4","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#step-4-configure-appium"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Conclusion By following these steps, you should be able to install Appium successfully on Ubuntu using the root user, bypassing any permission errors. Remember to update the system packages, use the appropriate npm flags, and verify the installation using appium-doctor. Appium is a powerful tool for automating mobile application testing, and with the correct installation, you can leverage its features to streamline your testing processes. 
","date":"10-02-2020","objectID":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/:3:0","tags":["linux","appium"],"title":"Installing Appium with Root User on Ubuntu - Permission Error","uri":"/posts/software/installing-appium-with-root-user-on-ubuntu---permission-error/#conclusion"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"When trying to install Xcode, one common issue that users encounter is running out of disk space. This can be frustrating, especially if you need Xcode for development purposes. However, there are steps you can take to free up disk space and resolve this issue. One effective method is to remove the com.apple.appstore folder from the ~/Library/Caches directory. This article will guide you through the process of deleting this folder to make space for Xcode installation. ","date":"14-01-2020","objectID":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/:0:0","tags":["mac","xcode"],"title":"How to Free Up Disk Space When Unable to Install Xcode","uri":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Steps to Remove the com.apple.appstore Folder Please follow the steps below to remove the com.apple.appstore folder and free up disk space on your Mac: Open the Terminal application. You can find it by searching for \u0026ldquo;Terminal\u0026rdquo; in Spotlight or navigating to \u0026ldquo;Applications\u0026rdquo; \u0026gt; \u0026ldquo;Utilities\u0026rdquo; \u0026gt; \u0026ldquo;Terminal\u0026rdquo;. In the Terminal window, type the following command and press Enter: rm -rf ~/Library/Caches/com.apple.appstore This command will remove the com.apple.appstore folder from the ~/Library/Caches directory. The rm -rf command is used to forcefully remove the folder and its contents. Because of the -f flag, the deletion happens immediately, without a confirmation prompt. 
Wait for the command to complete. Depending on the size of the folder, it may take some time to remove all the files. ","date":"14-01-2020","objectID":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/:1:0","tags":["mac","xcode"],"title":"How to Free Up Disk Space When Unable to Install Xcode","uri":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/#steps-to-remove-the-comappleappstore-folder"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Verifying Disk Space Availability After deleting the com.apple.appstore folder, you can verify if enough disk space has been freed up. To check the available disk space on your Mac, follow these steps: Click on the Apple menu in the top-left corner of the screen and select \u0026ldquo;About This Mac.\u0026rdquo; In the window that appears, click on the \u0026ldquo;Storage\u0026rdquo; tab. You will see a visual representation of your disk space usage, along with the amount of space available. Ensure that the available space has increased after deleting the com.apple.appstore folder. ","date":"14-01-2020","objectID":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/:2:0","tags":["mac","xcode"],"title":"How to Free Up Disk Space When Unable to Install Xcode","uri":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/#verifying-disk-space-availability"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Conclusion By following the steps outlined in this article, you should be able to free up disk space on your Mac by removing the com.apple.appstore folder from the ~/Library/Caches directory. This will allow you to proceed with the installation of Xcode successfully. Remember to periodically clear unnecessary files and folders from your system to maintain sufficient disk space for future installations and operations. 
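The remove-and-verify flow above can also be checked entirely from the Terminal. A minimal sketch, demonstrated on a throwaway directory standing in for ~/Library/Caches/com.apple.appstore so it is safe to run anywhere:

```shell
# Throwaway stand-in for ~/Library/Caches/com.apple.appstore
cache=$(mktemp -d)/com.apple.appstore
mkdir -p "$cache"
printf 'dummy cached data' > "$cache/item.tmp"

du -sh "$cache"   # size before removal (on a real Mac, point du at the real cache path)
rm -rf "$cache"   # -r recurses into the folder, -f removes without prompting

# Confirm the folder is gone
[ ! -d "$cache" ] && echo "cache removed"
```

Running du before and after gives you the same before/after comparison as the About This Mac storage view, without leaving the Terminal.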
","date":"14-01-2020","objectID":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/:3:0","tags":["mac","xcode"],"title":"How to Free Up Disk Space When Unable to Install Xcode","uri":"/posts/development/how-to-free-up-disk-space-when-unable-to-install-xcode/#conclusion"},{"categories":["Development"],"collections":null,"content":"Here\u0026rsquo;s a table outlining some of the key differences between programmers, developers, and software engineers:

|  | Programmer | Developer | Software Engineer |
| --- | --- | --- | --- |
| Focus | Writing code | Building software systems | Designing and developing complex software systems |
| Skills | Expertise in one or more programming languages | Proficiency in a wide range of programming languages and technologies | Strong foundation in computer science principles and software engineering methodologies |
| Responsibility | Implementing code based on specifications | Developing software solutions that meet business needs and solve problems | Ensuring software systems meet functional and non-functional requirements and are scalable, reliable, and maintainable |
| Role | Typically works on smaller projects or specific programming tasks | Works on larger software projects, collaborating with a team to create a complete solution | Often takes a leadership role in software development projects and is responsible for the design, architecture, and implementation of complex software systems |
| Education | Can have a degree in computer science, but not always required | Usually has a degree in computer science, software engineering, or a related field | Typically has a degree in software engineering or computer science with a focus on software engineering |
| Job Titles | Programmer Analyst, Application Programmer | Software Developer, Full Stack Developer, Web Developer, Mobile Developer | Software Engineer, Senior Software Engineer, Lead Software Engineer |
","date":"29-12-2019","objectID":"/posts/development/difference-between-programmer-developer-and-software-engineer/:0:0","tags":[],"title":"Difference Between Programmer, Developer and Software Engineer","uri":"/posts/development/difference-between-programmer-developer-and-software-engineer/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"If your iPhone keeps connecting and disconnecting from your computer or charger, it can be a frustrating experience. This issue can be caused by various factors, including software glitches or hardware problems. In this article, we will walk you through some troubleshooting steps to help you resolve the issue. We will also discuss two permanent solutions: resetting the System Management Controller (SMC) and resetting the Non-Volatile Random-Access Memory (NVRAM) on your iPhone. ","date":"27-12-2019","objectID":"/posts/software/iphone-keeps-connecting-and-disconnecting/:0:0","tags":["mac","ios"],"title":"iPhone Keeps Connecting and Disconnecting","uri":"/posts/software/iphone-keeps-connecting-and-disconnecting/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Troubleshooting Steps Before proceeding with the permanent solutions, it\u0026rsquo;s important to go through some basic troubleshooting steps. Follow these steps to troubleshoot the issue: Check the cable and adapter: Ensure that you are using a genuine Apple cable and adapter. Faulty or third-party accessories may cause connectivity problems. Try a different USB port: Connect your iPhone to a different USB port on your computer. Sometimes, a faulty USB port can cause connection issues. Restart your iPhone and computer: Restart both your iPhone and computer to refresh their systems. This can resolve temporary software glitches that may be causing the problem. Update your iPhone and computer: Make sure your iPhone and computer are running the latest software updates. 
Outdated software can sometimes cause compatibility issues. Reset network settings on your iPhone: Go to \u0026ldquo;Settings\u0026rdquo; \u0026gt; \u0026ldquo;General\u0026rdquo; \u0026gt; \u0026ldquo;Reset\u0026rdquo; \u0026gt; \u0026ldquo;Reset Network Settings\u0026rdquo; on your iPhone. This will reset all network-related settings, including Wi-Fi, Bluetooth, and VPN configurations. Note that you will need to reconnect to Wi-Fi networks and reconfigure other network settings after performing this reset. Disable USB selective suspend (Windows): If you are using a Windows computer, follow these steps to disable USB selective suspend: Press Windows + X and select Device Manager from the menu. Expand the Universal Serial Bus controllers section. Right-click on each USB Root Hub and select Properties. Go to the Power Management tab and uncheck the option that says \u0026ldquo;Allow the computer to turn off this device to save power\u0026rdquo;. Repeat this for all USB Root Hubs listed. Try a different computer or charger: Connect your iPhone to a different computer or charger to see if the issue persists. This can help identify if the problem is specific to your current setup. If the issue still persists after trying the troubleshooting steps mentioned above, you can proceed with the following permanent solutions. ","date":"27-12-2019","objectID":"/posts/software/iphone-keeps-connecting-and-disconnecting/:1:0","tags":["mac","ios"],"title":"iPhone Keeps Connecting and Disconnecting","uri":"/posts/software/iphone-keeps-connecting-and-disconnecting/#troubleshooting-steps"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Resetting the System Management Controller (SMC) Resetting the SMC can help resolve various hardware-related issues on your iPhone, including connectivity problems. Follow these steps to reset the SMC: For iPhone models with a Home button: Press and hold the Home button and the Sleep/Wake (Power) button simultaneously. 
Continue holding both buttons until the screen turns off and the Apple logo appears. Release the buttons when the Apple logo appears. Your iPhone will restart. For iPhone models without a Home button: Press and quickly release the Volume Up button. Press and quickly release the Volume Down button. Press and hold the Side button until the Apple logo appears. Release the button when the Apple logo appears. Your iPhone will restart. After performing the SMC reset, check if the connectivity issue is resolved. If not, proceed to the next permanent solution. ","date":"27-12-2019","objectID":"/posts/software/iphone-keeps-connecting-and-disconnecting/:2:0","tags":["mac","ios"],"title":"iPhone Keeps Connecting and Disconnecting","uri":"/posts/software/iphone-keeps-connecting-and-disconnecting/#resetting-the-system-management-controller-smc"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Resetting the Non-Volatile Random-Access Memory (NVRAM) On macOS, NVRAM (also known as PRAM or Parameter RAM) is a small amount of memory that retains settings and data even when your Mac is powered off. Sometimes, resetting NVRAM can help resolve issues like: Forgotten login passwords Display settings not saving Keyboard shortcuts not working System preferences not applying To reset NVRAM on your Mac, follow these steps: ","date":"27-12-2019","objectID":"/posts/software/iphone-keeps-connecting-and-disconnecting/:3:0","tags":["mac","ios"],"title":"iPhone Keeps Connecting and Disconnecting","uri":"/posts/software/iphone-keeps-connecting-and-disconnecting/#resetting-the-non-volatile-random-access-memory-nvram"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Method 1: Using the Power Button Shut down your Mac. Locate the power button (it\u0026rsquo;s usually on the top or side of the laptop). Press and hold the power button for exactly 10 seconds. Release the power button and turn on your Mac as usual. 
","date":"27-12-2019","objectID":"/posts/software/iphone-keeps-connecting-and-disconnecting/:3:1","tags":["mac","ios"],"title":"iPhone Keeps Connecting and Disconnecting","uri":"/posts/software/iphone-keeps-connecting-and-disconnecting/#method-1-using-the-power-button"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Method 2: Using a Keyboard Shortcut Shut down your Mac. Turn it on and immediately press and hold the Option (⌥), Command (⌘), P, and R keys together. Hold the keys for about 20 seconds, then release them and let your Mac finish starting up. ","date":"27-12-2019","objectID":"/posts/software/iphone-keeps-connecting-and-disconnecting/:3:2","tags":["mac","ios"],"title":"iPhone Keeps Connecting and Disconnecting","uri":"/posts/software/iphone-keeps-connecting-and-disconnecting/#method-2-using-a-keyboard-shortcut"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Method 3: Using the Terminal Open Terminal (you can find it in the Utilities folder). Run the following command to clear all NVRAM variables: sudo nvram -c Enter your administrator password when prompted, then restart your Mac. On Macs with System Integrity Protection enabled, some variables may be protected and left in place. After resetting NVRAM, your Mac may take a few seconds longer to boot up due to the process of reinitializing system settings. Once you\u0026rsquo;re back in macOS, you may need to reconfigure settings that were stored in NVRAM, such as sound volume, display resolution, startup disk selection, and time zone. 
","date":"27-12-2019","objectID":"/posts/software/iphone-keeps-connecting-and-disconnecting/:3:3","tags":["mac","ios"],"title":"iPhone Keeps Connecting and Disconnecting","uri":"/posts/software/iphone-keeps-connecting-and-disconnecting/#method-3-using-system-information"},{"categories":["Development"],"collections":null,"content":"If you need to reset your Mac\u0026rsquo;s Dock settings to their default values, you can use the following commands: defaults delete com.apple.dock killall Dock This will remove any customizations you\u0026rsquo;ve made to your Dock and return it to its default configuration. ","date":"28-11-2019","objectID":"/posts/development/reset-mac-dock-settings/:0:0","tags":null,"title":"Reset Mac Dock Settings","uri":"/posts/development/reset-mac-dock-settings/#"},{"categories":["Development"],"collections":null,"content":"Set Mac Dock Icon Size You can adjust the icon size of your Mac\u0026rsquo;s Dock to your preference. Here are commands to set different icon sizes: ","date":"28-11-2019","objectID":"/posts/development/reset-mac-dock-settings/:0:0","tags":null,"title":"Reset Mac Dock Settings","uri":"/posts/development/reset-mac-dock-settings/#set-mac-dock-icon-size"},{"categories":["Development"],"collections":null,"content":"Large defaults write com.apple.dock tilesize -int 64 killall Dock This will make the Dock icons larger. ","date":"28-11-2019","objectID":"/posts/development/reset-mac-dock-settings/:1:0","tags":null,"title":"Reset Mac Dock Settings","uri":"/posts/development/reset-mac-dock-settings/#large"},{"categories":["Development"],"collections":null,"content":"Default defaults write com.apple.dock tilesize -int 48 killall Dock This will set the Dock icons to the default size. 
","date":"28-11-2019","objectID":"/posts/development/reset-mac-dock-settings/:2:0","tags":null,"title":"Reset Mac Dock Settings","uri":"/posts/development/reset-mac-dock-settings/#default"},{"categories":["Development"],"collections":null,"content":"Small defaults write com.apple.dock tilesize -int 32 killall Dock This will make the Dock icons smaller. ","date":"28-11-2019","objectID":"/posts/development/reset-mac-dock-settings/:3:0","tags":null,"title":"Reset Mac Dock Settings","uri":"/posts/development/reset-mac-dock-settings/#small"},{"categories":["Development"],"collections":null,"content":"Tiny defaults write com.apple.dock tilesize -int 8 killall Dock This will make the Dock icons very small. You can choose the size that suits your preference by running the respective command. Make sure to run the corresponding \u0026ldquo;killall Dock\u0026rdquo; command after setting the size to apply the changes immediately. ","date":"28-11-2019","objectID":"/posts/development/reset-mac-dock-settings/:4:0","tags":null,"title":"Reset Mac Dock Settings","uri":"/posts/development/reset-mac-dock-settings/#tiny"},{"categories":["Development"],"collections":null,"content":"To create large files on macOS, you can use the Terminal app. Follow these steps: Open Terminal: You can find Terminal in the Utilities folder within the Applications folder, or simply search for it using Spotlight. Navigate to the Desired Directory: Use the cd command to navigate to the directory where you want to create the large file. For example, to navigate to your home directory, use: cd ~ Create the Large File: To create a large file filled with zeroes, you can use the dd command. The following command creates a file named \u0026ldquo;hugefile\u0026rdquo; with a block size of 100MB: dd if=/dev/zero of=hugefile bs=100m count=1 if: Input file, in this case, /dev/zero provides a continuous stream of null bytes. of: Output file, the name of the file you want to create. 
bs: Block size, determines the size of each block. count: Number of blocks to be copied. In this case, we\u0026rsquo;re copying a single block of 100MB. Removing the Large File: After you\u0026rsquo;re done with the large file, you can remove it using the rm command: rm hugefile ","date":"01-11-2019","objectID":"/posts/development/creating-large-files-using-terminal/:0:0","tags":null,"title":"Creating Large Files Using Terminal","uri":"/posts/development/creating-large-files-using-terminal/#"},{"categories":["Development"],"collections":null,"content":"Important Considerations While creating large files can be useful for testing and experimentation, it\u0026rsquo;s important to keep a few things in mind: Disk Space: Creating large files can quickly fill up your disk space. Ensure you have enough available space before proceeding. Data Loss: Always back up important data before creating or manipulating files. Mistakes can result in data loss. Purpose: Only create large files for legitimate purposes. Creating files to intentionally fill up disk space on someone else\u0026rsquo;s computer is unethical and potentially illegal. Permissions: Ensure you have the necessary permissions to create files in the chosen directory. ","date":"01-11-2019","objectID":"/posts/development/creating-large-files-using-terminal/:1:0","tags":null,"title":"Creating Large Files Using Terminal","uri":"/posts/development/creating-large-files-using-terminal/#important-considerations"},{"categories":["Development"],"collections":null,"content":"Alternative Methods If you\u0026rsquo;re looking to create large files for testing purposes without consuming excessive disk space, consider using tools specifically designed for this purpose. For instance, you can use the mkfile command with its -n flag to create a file of a specific size without actually filling it with data: mkfile -n 1g testfile This command creates an empty 1GB file named \u0026ldquo;testfile\u0026rdquo; whose disk blocks are not allocated until data is written to them. (Without -n, mkfile fills the file with zeroes and allocates the full 1GB up front.) 
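To sanity-check what dd actually produced, you can compare the requested size against the file\u0026rsquo;s byte count. A minimal sketch using a 1MB file (1048576 bytes, given as a plain byte count so bs behaves the same with BSD and GNU dd):

```shell
# Create a 1 MB test file; bs is given in bytes for portability.
dd if=/dev/zero of=testfile bs=1048576 count=1 2>/dev/null

# wc -c reports the byte count; tr strips BSD wc's leading padding.
size=$(wc -c < testfile | tr -d ' ')
echo "testfile is $size bytes"

# Clean up, as with the hugefile example above.
rm testfile
```

The same check scales to any bs/count combination: the byte count should equal bs multiplied by count.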
","date":"01-11-2019","objectID":"/posts/development/creating-large-files-using-terminal/:2:0","tags":null,"title":"Creating Large Files Using Terminal","uri":"/posts/development/creating-large-files-using-terminal/#alternative-methods"},{"categories":["Development"],"collections":null,"content":"Conclusion Creating large files on macOS can be useful for various purposes, including testing and experimentation. By using the Terminal, you can easily create and manage large files. However, be cautious about the amount of disk space you use and always have a legitimate reason for creating large files. If you\u0026rsquo;re simply looking to test storage limits, consider using dedicated tools that create files without consuming real disk space. ","date":"01-11-2019","objectID":"/posts/development/creating-large-files-using-terminal/:3:0","tags":null,"title":"Creating Large Files Using Terminal","uri":"/posts/development/creating-large-files-using-terminal/#conclusion"},{"categories":["Development"],"collections":null,"content":"When you delete files from your Mac\u0026rsquo;s hard drive, they are not immediately erased; instead, the space they occupied is marked as available for new data. Until new data overwrites that space, the old files can potentially be recovered using specialized software. To ensure that your sensitive data is completely irrecoverable, you can use the diskutil secureErase command in the Terminal. Important Note: The following instructions involve using Terminal commands, which can have serious consequences if used incorrectly. Make sure to follow the steps carefully and double-check the commands before executing them. 
","date":"03-10-2019","objectID":"/posts/development/securely-erasing-free-space-on-a-mac-hard-drive/:0:0","tags":null,"title":"Securely Erasing Free Space on a Mac Hard Drive","uri":"/posts/development/securely-erasing-free-space-on-a-mac-hard-drive/#"},{"categories":["Development"],"collections":null,"content":"Steps to Securely Erase Free Space Open Terminal: You can find the Terminal application in the Utilities folder within your Applications folder, or you can use Spotlight to search for it. Identify the Disk Name: Replace [Disk Name] in the command with the actual name of the disk for which you want to securely erase the free space. To find the correct disk name, you can use the command diskutil list. This will display a list of all connected drives, and you can identify your target drive from there. Run the Command: Once you\u0026rsquo;ve identified the disk name, run the following command: sudo diskutil secureErase freespace 0 \u0026#34;/Volumes/[Disk Name]\u0026#34; Here, sudo gives you superuser privileges required for the operation, diskutil is the command-line utility, secureErase initiates the secure erase process, freespace specifies that you want to erase the free space, and 0 indicates the level of security (0 is a single-pass erase). Enter Your Password: When prompted, enter your administrator password. You won\u0026rsquo;t see the password characters as you type – this is normal. Confirm the Action: After entering the password, the Terminal will ask for confirmation. Type y and press Enter to confirm that you want to proceed. Process Completion: The secure erase process will begin. The time it takes depends on the size of the free space you\u0026rsquo;re erasing. Once the process is complete, you\u0026rsquo;ll see a message indicating success. It\u0026rsquo;s important to note that this process only erases the free space on your drive, not the existing files. 
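Because secureErase is irreversible, it can help to assemble the full command in a small wrapper and review the exact invocation before running it. A hedged sketch — the volume name Macintosh HD is a placeholder (find yours with diskutil list), and the wrapper only prints the command; drop the echo to execute it for real:

```shell
# Dry run: print the secureErase invocation instead of executing it.
volume="Macintosh HD"   # placeholder - check the real name with: diskutil list
level=0                 # 0 = single-pass erase, as described above

cmd="sudo diskutil secureErase freespace $level /Volumes/$volume"
echo "$cmd"
```

Reviewing the printed command first is a cheap safeguard against targeting the wrong volume.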
If you want to securely erase specific files or the entire disk, additional steps are required. For more detailed information and to understand the technical aspects of securely erasing a Mac hard drive, you can refer to the Backblaze blog post. Warning: Securely erasing free space permanently deletes any recoverable data in that space. Make sure to back up any important data before proceeding. Remember to use these commands with caution and ensure you\u0026rsquo;re performing the operation on the correct drive. If you\u0026rsquo;re uncomfortable with using Terminal commands, consider seeking assistance from someone with more technical expertise. ","date":"03-10-2019","objectID":"/posts/development/securely-erasing-free-space-on-a-mac-hard-drive/:0:1","tags":null,"title":"Securely Erasing Free Space on a Mac Hard Drive","uri":"/posts/development/securely-erasing-free-space-on-a-mac-hard-drive/#steps-to-securely-erase-free-space"},{"categories":["Development"],"collections":null,"content":"SSH tunneling is a powerful technique for securely forwarding network traffic from one machine to another. It\u0026rsquo;s commonly used to access resources on a remote server as if they were local, especially in scenarios where you need to securely access services behind a firewall or from a different network. To ensure the SSH tunnel remains stable and automatically reconnects when there are interruptions, you can use tools like autossh in combination with server and client-side configurations. Here\u0026rsquo;s a step-by-step guide on setting up SSH tunneling with auto-reconnect on both the server and client sides. 
","date":"01-10-2019","objectID":"/posts/development/ssh-tunneling-auto-reconnect-alive/:0:0","tags":null,"title":"SSH Tunneling Auto Reconnect Alive","uri":"/posts/development/ssh-tunneling-auto-reconnect-alive/#"},{"categories":["Development"],"collections":null,"content":"Server Side Configuration First, let\u0026rsquo;s configure the SSH server to keep the connection alive and handle potential disconnects gracefully. SSH into your server: ssh root@origin.example.net Open the SSH server configuration file using a text editor (e.g., Vim): vim /etc/ssh/sshd_config Add the following lines to the configuration file: ClientAliveInterval 5 ClientAliveCountMax 15 These lines set the server to send a \u0026ldquo;keep-alive\u0026rdquo; message to the client every 5 seconds and disconnect the client if there are 15 consecutive failed responses. Save the changes and exit the text editor. Restart the SSH server to apply the new settings: systemctl restart sshd ","date":"01-10-2019","objectID":"/posts/development/ssh-tunneling-auto-reconnect-alive/:1:0","tags":null,"title":"SSH Tunneling Auto Reconnect Alive","uri":"/posts/development/ssh-tunneling-auto-reconnect-alive/#server-side-configuration"},{"categories":["Development"],"collections":null,"content":"Client Side Configuration Now, let\u0026rsquo;s configure the client-side settings and use autossh to automatically reconnect if the SSH tunnel is interrupted. On the client machine, use the following autossh command to create an SSH tunnel. This command forwards traffic from port 8080 on the client to port 8022 on the server: autossh -M 0 -N -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 -R 8022:localhost:8080 -p 666 root@origin.example.net -M 0: Disables monitoring to prevent conflicts with ServerAliveInterval. -N: Tells SSH not to execute any remote commands. -o ExitOnForwardFailure=yes: Terminates the autossh session if port forwarding fails. 
-o ServerAliveInterval=5: Sends a \u0026ldquo;keep-alive\u0026rdquo; message to the server every 5 seconds. -o ServerAliveCountMax=3: Disconnects if no response is received after three consecutive \u0026ldquo;keep-alive\u0026rdquo; messages. -R 8022:localhost:8080: Sets up a reverse SSH tunnel from the server\u0026rsquo;s port 8022 to the client\u0026rsquo;s port 8080. -p 666: Specifies the SSH server\u0026rsquo;s port (replace with your actual SSH port). root@origin.example.net: Replace with the appropriate SSH username and server address. If you want to run autossh in the background, add the -f option: autossh -M 0 -Nf -o ExitOnForwardFailure=yes -o ServerAliveInterval=5 -o ServerAliveCountMax=3 -R 8022:localhost:8080 -p 666 root@origin.example.net The -f option forks autossh into the background. With these server and client-side configurations, your SSH tunnel should remain stable and automatically reconnect if there are any interruptions in the connection. This setup is particularly useful for maintaining persistent tunnels, such as when accessing web services or databases on a remote server securely. ","date":"01-10-2019","objectID":"/posts/development/ssh-tunneling-auto-reconnect-alive/:2:0","tags":null,"title":"SSH Tunneling Auto Reconnect Alive","uri":"/posts/development/ssh-tunneling-auto-reconnect-alive/#client-side-configuration"},{"categories":["Software"],"collections":null,"content":"Sometimes, when trying to uninstall a program, you might encounter an error message stating that the uninstallation is blocked due to a missing original MSI installer. This issue can be frustrating, but there are steps you can take to resolve it. Microsoft provides a tool to help fix problems that block programs from being installed or removed. 
Here\u0026rsquo;s how to use it: Download the Troubleshooter: Visit the following Microsoft support page to download the \u0026ldquo;Program Install and Uninstall Troubleshooter\u0026rdquo;: Program Install and Uninstall Troubleshooter Run the Troubleshooter: Once downloaded, locate the troubleshooter executable file and run it. You might be prompted to provide administrator privileges. Select the Problem: The troubleshooter will present you with options to select the problem you\u0026rsquo;re facing. In this case, you\u0026rsquo;re trying to uninstall a program that\u0026rsquo;s being blocked due to a missing MSI installer. Choose the appropriate option related to uninstallation. Identify the Program: The troubleshooter will then list the programs installed on your computer. Locate the program you\u0026rsquo;re having trouble uninstalling and select it. Follow the Steps: The troubleshooter will guide you through the steps to resolve the issue. It might try to repair the program\u0026rsquo;s installation or assist in the removal process. Follow the on-screen instructions. Restart if Required: After the troubleshooter completes its process, it might recommend restarting your computer. If prompted, go ahead and restart. Attempt Uninstallation: Once your computer has restarted, try uninstalling the program again using the usual methods (Control Panel, Settings, etc.). The troubleshooter should help resolve the issue of missing original MSI installer and allow you to successfully uninstall the problematic program. Remember, while the troubleshooter is a helpful tool, it might not work in all cases. If you\u0026rsquo;re still facing issues after using the troubleshooter, you might need to consider alternative methods for uninstallation, such as using third-party uninstaller software or seeking help from the program\u0026rsquo;s support resources. 
By following these steps, you should be able to overcome the \u0026ldquo;Error Uninstalling Due to Missing Original MSI Installer\u0026rdquo; and successfully remove the program causing the issue. ","date":"22-09-2019","objectID":"/posts/software/how-to-fix-error-uninstalling-due-to-missing-original-msi-installer/:0:0","tags":["windows"],"title":"How to Fix Error Uninstalling Due to Missing Original MSI Installer","uri":"/posts/software/how-to-fix-error-uninstalling-due-to-missing-original-msi-installer/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"If you are encountering issues while updating Xcode on your Mac, specifically the error message \u0026ldquo;Not Enough Space,\u0026rdquo; despite having ample storage available, there might be a workaround you can try. In some cases, Time Machine local snapshots could be occupying space on your system, preventing the update from proceeding. The following steps outline a potential solution to clear local snapshots and allow the Xcode update to proceed smoothly. ","date":"21-09-2019","objectID":"/posts/development/unable-to-update-xcode-due-to-insufficient-space-even-with-sufficient-storage/:0:0","tags":["mac","xcode"],"title":"Unable to Update Xcode Due to Insufficient Space Even with Sufficient Storage","uri":"/posts/development/unable-to-update-xcode-due-to-insufficient-space-even-with-sufficient-storage/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Clearing Time Machine Local Snapshots Open Terminal on your Mac. You can find it in the Applications \u0026gt; Utilities folder or use Spotlight search to locate it quickly. In the Terminal window, enter the following command to create a large file, which will help clear the local snapshots: dd if=/dev/zero of=./hugefile bs=100m This command continuously writes zero-filled 100MB blocks to a file named \u0026ldquo;hugefile\u0026rdquo;, growing it until you cancel the command or the disk fills. 
The \u0026ldquo;bs=100m\u0026rdquo; parameter sets the block size for each write operation. You can adjust the block size as per your requirements. Monitor the file creation process until it reaches a size of at least 18GB. This size allows sufficient space for Time Machine to clear the local snapshots. You can keep an eye on the file\u0026rsquo;s size by executing the following command in a separate Terminal window: du -h ./hugefile Once the file reaches the desired size, proceed to the next step. To cancel the file creation process, press Ctrl-C in the Terminal window where you executed the dd command. Remove the huge file using the following command: rm ./hugefile This command will delete the file from your system. ","date":"21-09-2019","objectID":"/posts/development/unable-to-update-xcode-due-to-insufficient-space-even-with-sufficient-storage/:1:0","tags":["mac","xcode"],"title":"Unable to Update Xcode Due to Insufficient Space Even with Sufficient Storage","uri":"/posts/development/unable-to-update-xcode-due-to-insufficient-space-even-with-sufficient-storage/#clearing-time-machine-local-snapshots"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Updating Xcode via the App Store After completing the steps mentioned above to clear the Time Machine local snapshots, you can now attempt to update Xcode through the App Store: Launch the App Store on your Mac. You can find it in the Applications folder or use Spotlight search to locate it quickly. In the App Store, navigate to the Updates tab. Look for the Xcode update and click on the Update button next to it. The update process will begin, and the cleared space should now allow the update to proceed without encountering the previous \u0026ldquo;Not Enough Space\u0026rdquo; error. By following these steps, you can address the issue of being unable to update Xcode due to inadequate space, even when you have sufficient storage available on your Mac. 
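The dd-based create-monitor-delete cycle described earlier can be rehearsed safely at a small scale before committing to an 18GB write. A minimal sketch, using a temporary directory and a 10MB file; note that GNU dd spells the block size 1M, while macOS dd uses 1m:

```shell
# Rehearse the create/monitor/delete cycle at a safe, small scale:
# 10 blocks of 1 MB in a temporary directory instead of an unbounded
# write to ./hugefile. (GNU dd spells the block size 1M; macOS dd uses 1m.)
set -eu
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/hugefile" bs=1M count=10 2>/dev/null
du -h "$tmpdir/hugefile"    # monitor the size, as with the real 18 GB file
rm "$tmpdir/hugefile"       # clean up, as with `rm ./hugefile`
rmdir "$tmpdir"
```

Once the dry run behaves as expected, the real procedure is the same sequence with no count limit and a manual Ctrl-C once the file is large enough.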
","date":"21-09-2019","objectID":"/posts/development/unable-to-update-xcode-due-to-insufficient-space-even-with-sufficient-storage/:2:0","tags":["mac","xcode"],"title":"Unable to Update Xcode Due to Insufficient Space Even with Sufficient Storage","uri":"/posts/development/unable-to-update-xcode-due-to-insufficient-space-even-with-sufficient-storage/#updating-xcode-via-the-app-store"},{"categories":["Development"],"collections":null,"content":"You want to change the URL of your WordPress website from \u0026rsquo;example.com\u0026rsquo; to \u0026rsquo;target-example.com\u0026rsquo;. To achieve this, you can follow the provided SQL commands. However, it\u0026rsquo;s essential to be cautious when making direct database changes like this, as it can potentially break your WordPress site if not done correctly. Always back up your database before making any changes. Here\u0026rsquo;s the provided code in Markdown format for better readability: ### Change WordPress URL To change the WordPress URL from \u0026#39;example.com\u0026#39; to \u0026#39;target-example.com\u0026#39;, you can use SQL commands to update the necessary database tables. Before proceeding, make sure to back up your WordPress database for safety. 1. 
Update the `wp_options` table: ```sql UPDATE wp_options SET option_value = replace(option_value, \u0026#39;example.com\u0026#39;, \u0026#39;target-example.com\u0026#39;) WHERE option_name = \u0026#39;home\u0026#39; OR option_name = \u0026#39;siteurl\u0026#39;; Update the wp_posts table for post GUIDs: UPDATE wp_posts SET guid = replace(guid, \u0026#39;example.com\u0026#39;,\u0026#39;target-example.com\u0026#39;); Update the wp_posts table for post content: UPDATE wp_posts SET post_content = replace(post_content, \u0026#39;example.com\u0026#39;, \u0026#39;target-example.com\u0026#39;); Update the wp_postmeta table for meta values: UPDATE wp_postmeta SET meta_value = replace(meta_value,\u0026#39;example.com\u0026#39;,\u0026#39;target-example.com\u0026#39;); After running these SQL commands, your WordPress site\u0026rsquo;s URL references should be updated to \u0026rsquo;target-example.com\u0026rsquo;. Remember to perform this operation carefully, and it\u0026rsquo;s always a good practice to create a database backup before making such changes. Please ensure you have a recent backup of your WordPress database and access to your database management tool before proceeding with these SQL commands. Additionally, make sure to replace \u0026#39;example.com\u0026#39; and \u0026#39;target-example.com\u0026#39; with your actual URLs.","date":"20-09-2019","objectID":"/posts/development/change-wordpress-url/:0:0","tags":null,"title":"Change Wordpress URL","uri":"/posts/development/change-wordpress-url/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re looking to update Chrome Remote Desktop within a Docker container, you can follow these steps: Access Your Docker Container: First, use the docker exec command to access the Docker container where Chrome Remote Desktop is installed. Replace container_name with the actual name of your container. 
docker exec -it container_name /bin/bash Download the Updated Package: Once inside the container, utilize wget to download the latest Chrome Remote Desktop package. wget https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb Install the Package: Install the updated package using the dpkg command. dpkg -i chrome-remote-desktop_current_amd64.deb Resolve Dependencies: Address any unmet dependencies by employing apt-get. For instance: apt-get -f install Restart the Chrome Remote Desktop Service: Restart the Chrome Remote Desktop service to apply the changes (this assumes your container runs systemd; if it does not, restart the service with the container\u0026rsquo;s init mechanism or restart the container itself). systemctl restart chrome-remote-desktop Exit the Container: After updating and restarting the service, exit the Docker container. exit Ensure that you replace container_name with your actual container\u0026rsquo;s name. Remember, it\u0026rsquo;s prudent to have a backup or snapshot of your Docker container before performing updates to mitigate unforeseen issues. ","date":"12-09-2019","objectID":"/posts/development/updating-chrome-remote-desktop-on-docker/:0:0","tags":null,"title":"Updating Chrome Remote Desktop on Docker","uri":"/posts/development/updating-chrome-remote-desktop-on-docker/#"},{"categories":["Development"],"collections":null,"content":"Time Machine is a backup solution developed by Apple for macOS. You can set up a Time Machine backup destination on your Ubuntu Server 18.04 using Samba 4.8 to allow Macs on your network to back up their data. Here\u0026rsquo;s a step-by-step guide to help you achieve this: ","date":"10-09-2019","objectID":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/:0:0","tags":null,"title":"Setting Up Time Machine Backup on Ubuntu Server 18.04 with Samba 4.8","uri":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/#"},{"categories":["Development"],"collections":null,"content":"1. 
Install Samba 4.8 First, install Samba 4.8 by adding the PPA repository and updating your package list: sudo add-apt-repository ppa:linux-schools/samba-latest sudo apt update sudo apt install samba ","date":"10-09-2019","objectID":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/:1:0","tags":null,"title":"Setting Up Time Machine Backup on Ubuntu Server 18.04 with Samba 4.8","uri":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/#1-install-samba-48"},{"categories":["Development"],"collections":null,"content":"2. Configure Samba Remove the old Samba configuration and create a new one: sudo mv /etc/samba/smb.conf /etc/samba/smb.confORG sudo nano /etc/samba/smb.conf Paste the following configuration into the file: [global] server role = standalone server passdb backend = tdbsam obey pam restrictions = yes security = user printcap name = /dev/null load printers = no socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=524288 SO_SNDBUF=524288 server string = Samba Server %v dns proxy = no wide links = yes follow symlinks = yes unix extensions = no acl allow execute always = yes # Special configuration for Apple\u0026#39;s Time Machine fruit:model = MacPro fruit:advertise_fullsync = true fruit:aapl = yes fruit:time machine = yes [backup] path = /space/backups valid users = backups writable = yes durable handles = yes kernel oplocks = no kernel share modes = no posix locking = no vfs objects = catia fruit streams_xattr ea support = yes browseable = yes read only = No inherit acls = yes ","date":"10-09-2019","objectID":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/:2:0","tags":null,"title":"Setting Up Time Machine Backup on Ubuntu Server 18.04 with Samba 4.8","uri":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/#2-configure-samba"},{"categories":["Development"],"collections":null,"content":"3. 
Create Avahi Service for Time Machine Create and edit the Avahi service file for Time Machine: sudo nano /etc/avahi/services/timemachine.service Paste the following content into the file: \u0026lt;?xml version=\u0026#34;1.0\u0026#34; standalone=\u0026#39;no\u0026#39;?\u0026gt; \u0026lt;!DOCTYPE service-group SYSTEM \u0026#34;avahi-service.dtd\u0026#34;\u0026gt; \u0026lt;service-group\u0026gt; \u0026lt;name replace-wildcards=\u0026#34;yes\u0026#34;\u0026gt;%h\u0026lt;/name\u0026gt; \u0026lt;service\u0026gt; \u0026lt;type\u0026gt;_smb._tcp\u0026lt;/type\u0026gt; \u0026lt;port\u0026gt;445\u0026lt;/port\u0026gt; \u0026lt;/service\u0026gt; \u0026lt;service\u0026gt; \u0026lt;type\u0026gt;_device-info._tcp\u0026lt;/type\u0026gt; \u0026lt;port\u0026gt;0\u0026lt;/port\u0026gt; \u0026lt;txt-record\u0026gt;model=RackMac\u0026lt;/txt-record\u0026gt; \u0026lt;/service\u0026gt; \u0026lt;service\u0026gt; \u0026lt;type\u0026gt;_adisk._tcp\u0026lt;/type\u0026gt; \u0026lt;txt-record\u0026gt;sys=waMa=0,adVF=0x100\u0026lt;/txt-record\u0026gt; \u0026lt;txt-record\u0026gt;dk0=adVN=backup,adVF=0x82\u0026lt;/txt-record\u0026gt; \u0026lt;/service\u0026gt; \u0026lt;/service-group\u0026gt; ","date":"10-09-2019","objectID":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/:3:0","tags":null,"title":"Setting Up Time Machine Backup on Ubuntu Server 18.04 with Samba 4.8","uri":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/#3-create-avahi-service-for-time-machine"},{"categories":["Development"],"collections":null,"content":"4. 
Start Samba and Avahi Create and edit the /etc/rc.local file to ensure Samba starts at boot: sudo nano /etc/rc.local Add the following lines: #!/bin/bash echo \u0026#34;Starting Samba from rc.local\u0026#34; smbd exit 0 ","date":"10-09-2019","objectID":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/:4:0","tags":null,"title":"Setting Up Time Machine Backup on Ubuntu Server 18.04 with Samba 4.8","uri":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/#4-start-samba-and-avahi"},{"categories":["Development"],"collections":null,"content":"5. Create Backup User and Directory Create the backup user, set the SMB password, and create the backup directory with appropriate permissions: sudo useradd -m backups sudo smbpasswd -a backups sudo mkdir -p /space/backups sudo chown backups /space/backups sudo chmod 700 /space/backups ","date":"10-09-2019","objectID":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/:5:0","tags":null,"title":"Setting Up Time Machine Backup on Ubuntu Server 18.04 with Samba 4.8","uri":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/#5-create-backup-user-and-directory"},{"categories":["Development"],"collections":null,"content":"6. Restart Services Restart Avahi and Samba to apply the changes: sudo /etc/init.d/avahi-daemon restart sudo smbd Now, your Ubuntu Server 18.04 is set up as a Time Machine backup destination for your Macs on the network. Remember to enable encrypted backups for data security. To view the progress of your Mac backups, you can use the following command: log stream --style syslog --predicate \u0026#39;senderImagePath contains[cd] \u0026#34;TimeMachine\u0026#34;\u0026#39; --info Please make sure your server and network are properly configured for this setup to work smoothly. 
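As a final check before restarting the services, you can confirm that the share configuration still contains the options Time Machine depends on. A minimal sketch; with Samba installed, testparm -s /etc/samba/smb.conf performs a real syntax check, while the grep below only verifies that the key lines are present:

```shell
# Confirm the Time Machine-specific options survived editing.
# With Samba installed, `testparm -s /etc/samba/smb.conf` does a full parse;
# this grep only checks that the key lines are present.
conf=/etc/samba/smb.conf
for opt in 'fruit:time machine = yes' \
           'vfs objects = catia fruit streams_xattr' \
           'path = /space/backups'; do
  if grep -qF "$opt" "$conf"; then
    echo "ok: $opt"
  else
    echo "MISSING: $opt"
  fi
done
```

Any MISSING line means the corresponding option was lost while editing smb.conf and the share will not advertise correctly to Time Machine.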
","date":"10-09-2019","objectID":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/:6:0","tags":null,"title":"Setting Up Time Machine Backup on Ubuntu Server 18.04 with Samba 4.8","uri":"/posts/development/setting-up-time-machine-backup-on-ubuntu-server-1804-with-samba-48/#6-restart-services"},{"categories":["Software"],"collections":null,"content":"When using Elementor, you may encounter a problem where the column width appears incorrect when applying a negative margin value to a section. This issue typically occurs when you set a negative margin for the section and expect the column width to adjust accordingly. However, Elementor doesn\u0026rsquo;t automatically recalculate the column width in this scenario. To resolve this problem, you can follow these steps: Edit the page with Elementor. Locate the section where you have applied the negative margin. Within that section, find the column widget that is affected by the incorrect width. Click on the column widget to select it. In the Elementor left panel, navigate to the Advanced tab. Scroll down to the Custom CSS section. Add the following CSS code to adjust the column width: selector { width: auto !important; padding-left: 10px; padding-right: 10px; } Replace selector with the appropriate CSS selector for your column. It may be something like .elementor-column-wrap. Adjust the padding-left and padding-right values according to your design requirements. This CSS code will override the column width calculation and set it to \u0026ldquo;auto.\u0026rdquo; Additionally, it adds padding to the left and right sides of the column to create a visually pleasing layout. Remember to replace selector with the correct CSS selector for your column. If you\u0026rsquo;re not sure about the selector, you can inspect the column using your browser\u0026rsquo;s developer tools to find the appropriate class or ID. 
By following these steps and applying the CSS code, you should be able to fix the column width issue that occurs when using negative section margins in Elementor. ","date":"30-08-2019","objectID":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/:0:0","tags":["wordpress"],"title":"Column Width Issue with Negative Section Margin in Elementor","uri":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/#"},{"categories":["Software"],"collections":null,"content":"Waiting for Element to Render Centered Before It Becomes Visible To achieve the effect of waiting for an element to render in the center before making it visible using animations, you can utilize the following steps in Elementor: Edit the page with Elementor. Select the element you want to apply the animation to. In the Elementor left panel, navigate to the Advanced tab. Locate the Motion Effects section and click on it. Enable the \u0026ldquo;Entrance Animation\u0026rdquo; option. Choose the desired animation from the \u0026ldquo;Animation\u0026rdquo; dropdown menu. Set the \u0026ldquo;Delay\u0026rdquo; value to a sufficient duration to allow the element to render and center. Adjust other animation settings as needed, such as duration, intensity, and so on. By adding a delay to the animation, you can ensure that the element remains hidden until it has rendered in the center of the page, creating a smooth and visually appealing effect. 
","date":"30-08-2019","objectID":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/:1:0","tags":["wordpress"],"title":"Column Width Issue with Negative Section Margin in Elementor","uri":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/#waiting-for-element-to-render-centered-before-it-becomes-visible"},{"categories":["Software"],"collections":null,"content":"Converting to Markdown as an Article To convert the provided information into Markdown format for an article, here\u0026rsquo;s an example: ","date":"30-08-2019","objectID":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/:2:0","tags":["wordpress"],"title":"Column Width Issue with Negative Section Margin in Elementor","uri":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/#converting-to-markdown-as-an-article"},{"categories":["Software"],"collections":null,"content":"Column Width Issue with Negative Section Margin in Elementor When using Elementor, you might encounter a problem where the column width appears incorrect when applying a negative margin value to a section. This issue typically occurs when you set a negative margin for the section and expect the column width to adjust accordingly. However, Elementor doesn\u0026rsquo;t automatically recalculate the column width in this scenario. To resolve this problem, you can follow these steps: Edit the page with Elementor. Locate the section where you have applied the negative margin. Within that section, find the column widget that is affected by the incorrect width. Click on the column widget to select it. In the Elementor left panel, navigate to the Advanced tab. Scroll down to the Custom CSS section. Add the following CSS code to adjust the column width: selector { width: auto !important; padding-left: 10px; padding-right: 10px; } Adjust the padding-left and padding-right values according to your design requirements. 
This CSS code will override the column width calculation and set it to \u0026ldquo;auto.\u0026rdquo; Additionally, it adds padding to the left and right sides of the column to create a visually pleasing layout. Remember to replace selector with the correct CSS selector for your column. If you\u0026rsquo;re not sure about the selector, you can inspect the column using your browser\u0026rsquo;s developer tools to find the appropriate class or ID. By following these steps and applying the CSS code, you should be able to fix the column width issue that occurs when using negative section margins in Elementor. ","date":"30-08-2019","objectID":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/:2:1","tags":["wordpress"],"title":"Column Width Issue with Negative Section Margin in Elementor","uri":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/#column-width-issue-with-negative-section-margin-in-elementor"},{"categories":["Software"],"collections":null,"content":"Waiting for Element to Render Centered Before It Becomes Visible To achieve the effect of waiting for an element to render in the center before making it visible using animations, you can utilize the following steps in Elementor: Edit the page with Elementor. Select the element you want to apply the animation to. In the Elementor left panel, navigate to the Advanced tab. Locate the Motion Effects section and click on it. Enable the \u0026ldquo;Entrance Animation\u0026rdquo; option. Choose the desired animation from the \u0026ldquo;Animation\u0026rdquo; dropdown menu. Set the \u0026ldquo;Delay\u0026rdquo; value to a sufficient duration to allow the element to render and center. Adjust other animation settings as needed, such as duration, intensity, and so on. By adding a delay to the animation, you can ensure that the element remains hidden until it has rendered in the center of the page, creating a smooth and visually appealing effect. 
","date":"30-08-2019","objectID":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/:2:2","tags":["wordpress"],"title":"Column Width Issue with Negative Section Margin in Elementor","uri":"/posts/software/column-width-issue-with-negative-section-margin-in-elementor/#waiting-for-element-to-render-centered-before-it-becomes-visible-1"},{"categories":["Development"],"collections":null,"content":"In Linux, Logical Volume Management (LVM) allows you to dynamically resize logical volumes to efficiently manage your storage. If you have free space available in your Volume Group and want to expand a logical volume to use this space, follow these steps: ","date":"26-08-2019","objectID":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/:0:0","tags":null,"title":"How to Resize an LVM Logical Volume to Use Maximum Free Space","uri":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/#"},{"categories":["Development"],"collections":null,"content":"Checking Disk Space Before resizing the logical volume, it\u0026rsquo;s essential to check the available free space and the current usage using the df -h command: df -h This command will display information about your filesystems, including their sizes, used space, and available space. ","date":"26-08-2019","objectID":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/:1:0","tags":null,"title":"How to Resize an LVM Logical Volume to Use Maximum Free Space","uri":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/#checking-disk-space"},{"categories":["Development"],"collections":null,"content":"Extending the Logical Volume Assuming you have identified the logical volume you want to resize, you can extend it to use all available free space. 
Replace /dev/mapper/ubuntu--vg-ubuntu--lv with the path to your logical volume in the following command: lvextend -l+100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv lvextend is the command used to extend logical volumes. -l+100%FREE specifies that you want to use 100% of the available free space. /dev/mapper/ubuntu--vg-ubuntu--lv should be replaced with the path to your logical volume. ","date":"26-08-2019","objectID":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/:2:0","tags":null,"title":"How to Resize an LVM Logical Volume to Use Maximum Free Space","uri":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/#extending-the-logical-volume"},{"categories":["Development"],"collections":null,"content":"Resizing the Filesystem After extending the logical volume, you need to resize the filesystem to make use of the additional space. Use the resize2fs command for this purpose: resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv resize2fs is the command to resize the ext2, ext3, or ext4 filesystem. /dev/mapper/ubuntu--vg-ubuntu--lv should be replaced with the path to your logical volume. ","date":"26-08-2019","objectID":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/:3:0","tags":null,"title":"How to Resize an LVM Logical Volume to Use Maximum Free Space","uri":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/#resizing-the-filesystem"},{"categories":["Development"],"collections":null,"content":"Verifying the Changes To ensure that the resizing was successful and that the filesystem now uses the maximum available space, you can once again use the df -h command: df -h This will display the updated information about your filesystems, including the increased size and available space. By following these steps, you can easily resize an LVM logical volume to make use of the maximum free space available in your Volume Group. 
Remember to replace /dev/mapper/ubuntu--vg-ubuntu--lv with the appropriate path to your logical volume. ","date":"26-08-2019","objectID":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/:4:0","tags":null,"title":"How to Resize an LVM Logical Volume to Use Maximum Free Space","uri":"/posts/development/how-to-resize-an-lvm-logical-volume-to-use-maximum-free-space/#verifying-the-changes"},{"categories":["DevOps"],"collections":null,"content":"In some cases, users have reported issues where Docker containers continue to have open ports even after configuring the Uncomplicated Firewall (UFW) to block them. This can be a frustrating experience, but there are a few potential solutions to investigate. ","date":"25-08-2019","objectID":"/posts/devops/docker-port-still-open-despite-ufw-blocking/:0:0","tags":["linux"],"title":"Docker Port Still Open Despite UFW Blocking","uri":"/posts/devops/docker-port-still-open-despite-ufw-blocking/#"},{"categories":["DevOps"],"collections":null,"content":"1. Using 127.0.0.1 as the Host IP The root cause is that Docker publishes container ports by inserting its own iptables rules (the DOCKER chain), which are evaluated before UFW\u0026rsquo;s rules, so UFW cannot block a port that Docker has published on all interfaces. When you specify a port in the format 127.0.0.1:12100 in your Docker Compose configuration, the published port is bound to the host machine\u0026rsquo;s loopback interface, making it reachable only from the host itself. If a container port should not be exposed to the network, bind it to 127.0.0.1 in this way rather than relying on UFW to block a port that Docker has already opened. 
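For reference, the two ways of publishing the port look like this in a Docker Compose file (a configuration sketch, not from the original article; the service name, image, and the 192.168.1.10 address are placeholders for your own values):

```yaml
services:
  web:
    image: nginx:alpine            # placeholder image
    ports:
      - "127.0.0.1:12100:80"       # publish the port on the host's loopback address only
      # - "192.168.1.10:12100:80"  # or publish on a specific host interface instead
```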
","date":"25-08-2019","objectID":"/posts/devops/docker-port-still-open-despite-ufw-blocking/:1:0","tags":["linux"],"title":"Docker Port Still Open Despite UFW Blocking","uri":"/posts/devops/docker-port-still-open-despite-ufw-blocking/#1-using-127001-as-the-host-ip"},{"categories":["DevOps"],"collections":null,"content":"2. Installing UFW-Docker Another option to manage Docker container ports with UFW is by utilizing the ufw-docker package. This package integrates UFW and Docker, enabling better control over the firewall rules for Docker containers. To install ufw-docker, you can follow these steps: Ensure that you have UFW and Docker installed on your system. Clone the ufw-docker repository from GitHub by running the following command: git clone https://github.com/chaifeng/ufw-docker.git Change into the cloned directory: cd ufw-docker Install ufw-docker using the provided installation script: sudo ./install.sh Once the installation is complete, you can manage Docker container ports using UFW as expected. The ufw-docker package takes care of associating Docker containers with specific firewall rules, allowing you to block or allow ports as needed. It\u0026rsquo;s worth noting that these solutions assume that UFW is properly configured and functioning correctly on your system. Ensure that UFW is enabled and that its rules are properly set up to block or allow desired ports. Hopefully, one of these approaches will help you address the issue of Docker containers having open ports despite UFW blocking them. 
","date":"25-08-2019","objectID":"/posts/devops/docker-port-still-open-despite-ufw-blocking/:2:0","tags":["linux"],"title":"Docker Port Still Open Despite UFW Blocking","uri":"/posts/devops/docker-port-still-open-despite-ufw-blocking/#2-installing-ufw-docker"},{"categories":["Development"],"collections":null,"content":"When dealing with a directory full of subdirectories, there might be instances where you want to package each subdirectory\u0026rsquo;s contents into separate compressed files. This can be particularly useful for organization, sharing, or backup purposes. One way to achieve this is by utilizing the power of shell commands, and specifically, the tar command along with some other utilities. ","date":"24-08-2019","objectID":"/posts/development/creating-compressed-archives-for-each-folder-with-title/:0:0","tags":null,"title":"Creating Compressed Archives for Each Folder with Title","uri":"/posts/development/creating-compressed-archives-for-each-folder-with-title/#"},{"categories":["Development"],"collections":null,"content":"The Command Breakdown The command in question accomplishes this task through a combination of the find and tar commands. Here\u0026rsquo;s a breakdown of each component and its role: find . \\( ! -regex \u0026#39;./..\u0026#39; \\) -type d -maxdepth 1 -mindepth 1 -exec tar czpvf {}.tar.gz {} ; find .: The command starts with find, a versatile utility that searches for files and directories within a specified path. \\( ! -regex './..' \\): This part ensures that the search includes only subdirectories and not the parent directory (..). It\u0026rsquo;s achieved through a regular expression exclusion. -type d: This option instructs find to search only for directories. -maxdepth 1: Limits the depth of the search to the current directory and its immediate subdirectories. -mindepth 1: Excludes the starting directory itself from the search. 
-exec tar czpvf {}.tar.gz {} \\;: For each selected subdirectory, this portion executes the tar command to create a compressed archive. The terminating semicolon must be escaped as \\; so the shell passes it through to find instead of treating it as a command separator. The {} placeholders are replaced with the subdirectory\u0026rsquo;s name. c: Create a new archive. z: Compress the archive using gzip. p: Preserve file permissions. v: Verbosely list the files processed. f: Specifies the filename of the archive. ","date":"24-08-2019","objectID":"/posts/development/creating-compressed-archives-for-each-folder-with-title/:0:1","tags":null,"title":"Creating Compressed Archives for Each Folder with Title","uri":"/posts/development/creating-compressed-archives-for-each-folder-with-title/#the-command-breakdown"},{"categories":["Development"],"collections":null,"content":"Practical Application Imagine you have a directory called projects containing subdirectories like project1, project2, and so on. Running the provided command within the projects directory would result in the creation of compressed archives named project1.tar.gz, project2.tar.gz, and so forth. Each archive would contain the contents of its respective subdirectory. ","date":"24-08-2019","objectID":"/posts/development/creating-compressed-archives-for-each-folder-with-title/:0:2","tags":null,"title":"Creating Compressed Archives for Each Folder with Title","uri":"/posts/development/creating-compressed-archives-for-each-folder-with-title/#practical-application"},{"categories":["Development"],"collections":null,"content":"Caution and Considerations Before applying such a command in a live environment, it\u0026rsquo;s recommended to perform a trial run on a smaller scale. This prevents unintended consequences and ensures that the command behaves as expected. Additionally, be mindful of the potentially large number of compressed files that could be generated, especially if you have numerous subdirectories. 
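The suggested trial run is easy to act on: the whole workflow can be rehearsed in a throwaway directory before touching real data. A minimal sketch (assumes GNU find and tar; the project1/project2 names are placeholders for this demo):

```shell
# Rehearse in a scratch directory so nothing in the real tree is touched
cd "$(mktemp -d)"
mkdir -p project1 project2
echo sample > project1/a.txt
echo sample > project2/b.txt

# Archive each immediate subdirectory; note the escaped \; terminating -exec
find . -mindepth 1 -maxdepth 1 -type d -exec tar czpvf {}.tar.gz {} \;

ls
```

After the run, project1.tar.gz and project2.tar.gz sit next to their source directories.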
In conclusion, the command serves as a powerful tool to efficiently organize and compress directory contents into separate archives, facilitating better data management and distribution. ","date":"24-08-2019","objectID":"/posts/development/creating-compressed-archives-for-each-folder-with-title/:0:3","tags":null,"title":"Creating Compressed Archives for Each Folder with Title","uri":"/posts/development/creating-compressed-archives-for-each-folder-with-title/#caution-and-considerations"},{"categories":["Development"],"collections":null,"content":"To create a modal with Elementor in the OceanWP theme using the \u0026ldquo;Anywhere Elementor\u0026rdquo; plugin, you can follow these steps: Install and Activate Plugins: First, make sure you have the OceanWP theme and Elementor plugin installed and activated on your WordPress website. Install and activate the \u0026ldquo;Anywhere Elementor\u0026rdquo; plugin if you haven\u0026rsquo;t already. You can find and install it from the WordPress Plugin Repository. Create a Modal: In your WordPress dashboard, go to the \u0026ldquo;Templates\u0026rdquo; section under \u0026ldquo;Elementor.\u0026rdquo; Click on \u0026ldquo;Add New\u0026rdquo; to create a new template for your modal. Give your template a name and click \u0026ldquo;Create Template.\u0026rdquo; Design Your Modal: You\u0026rsquo;ll be taken to the Elementor editor. Design your modal content as you desire using Elementor\u0026rsquo;s drag-and-drop interface. Add the content you want inside the modal, and customize it to your liking. You can add text, images, buttons, or any other Elementor widgets. Add Shortcode: To add a shortcode inside your modal, you can use the \u0026ldquo;Shortcode\u0026rdquo; widget provided by Elementor. Drag and drop the \u0026ldquo;Shortcode\u0026rdquo; widget to the location where you want to insert your shortcode. Paste your shortcode inside the \u0026ldquo;Shortcode\u0026rdquo; widget and customize its settings if needed. 
Set Modal Trigger URL: Save your modal template. Now, you need to specify where and how this modal should appear. To open the modal, you\u0026rsquo;ll set a trigger URL. Edit the page or post where you want the modal to be triggered. Add a link or a button that will trigger the modal. For the URL, use #omw-355 (replace 355 with your modal\u0026rsquo;s numeric ID). In the link settings, add the class omw-open-modal to the \u0026ldquo;CSS Classes\u0026rdquo; field. This class is used to trigger the modal. Publish Your Page/Post: Update or publish your page or post. Now, when you click the link or button on the page/post with the class omw-open-modal and URL #omw-355, it should trigger the modal you created with Elementor, displaying the content you designed inside it. Remember to clear your browser cache or test in an incognito window if you encounter any issues to ensure you\u0026rsquo;re seeing the most up-to-date version of your site. ","date":"23-08-2019","objectID":"/posts/development/add-elementor-on-wordpress-oceanwp-modal/:0:0","tags":null,"title":"Add Elementor on WordPress OceanWP Modal","uri":"/posts/development/add-elementor-on-wordpress-oceanwp-modal/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re using Elementor and want to create an infinite animation loop for an element using the Elementor Motion Effects feature, you can achieve this by applying some custom CSS classes. In this tutorial, we\u0026rsquo;ll guide you through the process step by step. 
","date":"23-08-2019","objectID":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/:0:0","tags":null,"title":"How to Create an Infinite Animation Loop in Elementor Motion","uri":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Add the Custom CSS To create an infinite animation loop, you\u0026rsquo;ll need to add the following custom CSS code to your WordPress website. This code sets the animation-iteration-count property to infinite, which means the animation will loop indefinitely: .animate-infinity { animation-iteration-count: infinite; } You can add this CSS code to your WordPress theme\u0026rsquo;s style.css file or use a custom CSS plugin if you prefer. Make sure to save your changes. ","date":"23-08-2019","objectID":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/:1:0","tags":null,"title":"How to Create an Infinite Animation Loop in Elementor Motion","uri":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/#step-1-add-the-custom-css"},{"categories":["Development"],"collections":null,"content":"Step 2: Create or Edit an Elementor Section Now, go to the page where you want to add the infinite animation loop using Elementor. Edit the page with Elementor and locate the section or widget where you want to apply the animation. ","date":"23-08-2019","objectID":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/:2:0","tags":null,"title":"How to Create an Infinite Animation Loop in Elementor Motion","uri":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/#step-2-create-or-edit-an-elementor-section"},{"categories":["Development"],"collections":null,"content":"Step 3: Add the Animation Classes Select the Element: Click on the element (widget or section) to which you want to apply the infinite animation loop. 
Open the Elementor Motion Effects Panel: In the Elementor editor, on the left-hand side, you should see the \u0026ldquo;Advanced\u0026rdquo; tab. Click on it to open the advanced settings. Add the CSS Classes: In the advanced settings panel, you\u0026rsquo;ll see an option labeled \u0026ldquo;CSS Classes.\u0026rdquo; Enter the animate-infinity class into the input field. This class is what we defined in the custom CSS earlier. Update/Publish: After adding the CSS class, make sure to save your changes by clicking the \u0026ldquo;Update\u0026rdquo; or \u0026ldquo;Publish\u0026rdquo; button, depending on your editing mode. ","date":"23-08-2019","objectID":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/:3:0","tags":null,"title":"How to Create an Infinite Animation Loop in Elementor Motion","uri":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/#step-3-add-the-animation-classes"},{"categories":["Development"],"collections":null,"content":"Step 4: Preview Your Animation Now, when you view your page, the selected element should have an infinite animation loop applied to it. That\u0026rsquo;s it! You\u0026rsquo;ve successfully created an infinite animation loop on an Elementor element using custom CSS classes and the Elementor Motion Effects feature. You can further customize the animation properties by adjusting the animation settings within Elementor\u0026rsquo;s Motion Effects panel. Remember to clear your browser cache or use an incognito window if you don\u0026rsquo;t immediately see the changes take effect. 
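For context, the animate-infinity class only overrides the iteration count; the animation itself comes from Elementor's Motion Effects settings. A self-contained sketch with hypothetical pulse keyframes (not something Elementor defines) shows the same mechanism outside Elementor:

```css
/* Hypothetical keyframes, for illustration only; inside Elementor the
   animation is supplied by the Motion Effects settings */
@keyframes pulse {
  0%, 100% { transform: scale(1); }
  50%      { transform: scale(1.1); }
}

/* Same class as in Step 1: without the infinite count the animation runs once */
.animate-infinity {
  animation: pulse 2s ease-in-out;
  animation-iteration-count: infinite;
}
```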
","date":"23-08-2019","objectID":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/:4:0","tags":null,"title":"How to Create an Infinite Animation Loop in Elementor Motion","uri":"/posts/development/how-to-create-an-infinite-animation-loop-in-elementor-motion/#step-4-preview-your-animation"},{"categories":["Development"],"collections":null,"content":"To modify the URL on the WordPress OceanWP logo banner, you can follow these steps: ","date":"23-08-2019","objectID":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/:0:0","tags":null,"title":"Modify URL on Wordpress OceanWP Logo Banner","uri":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Create a Child Theme Creating a child theme is essential to ensure that your customizations are not lost when the parent theme is updated. Here\u0026rsquo;s how to create a child theme: Create a New Folder: In your WordPress themes directory (wp-content/themes/), create a new folder for your child theme. Give it a unique name, like \u0026ldquo;oceanwp-child.\u0026rdquo; Create a style.css File: Inside the child theme folder, create a style.css file. Add the following header information to it: /* Theme Name: OceanWP Child Description: Child theme for OceanWP Author: Your Name Template: oceanwp Version: 1.0 */ Create a functions.php File: In the child theme folder, create a functions.php file. This file will be used to enqueue your custom CSS and make other modifications. 
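The functions.php created above typically starts by enqueuing the parent and child stylesheets. A minimal sketch (the handle names oceanwp-parent-style and oceanwp-child-style are arbitrary labels chosen for this example, not names OceanWP itself defines):

```php
<?php
// functions.php of the child theme: load the parent theme's stylesheet first,
// then the child theme's own style.css so its rules can override the parent.
add_action( 'wp_enqueue_scripts', function () {
    wp_enqueue_style( 'oceanwp-parent-style', get_template_directory_uri() . '/style.css' );
    wp_enqueue_style( 'oceanwp-child-style', get_stylesheet_uri(), array( 'oceanwp-parent-style' ) );
} );
```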
","date":"23-08-2019","objectID":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/:0:1","tags":null,"title":"Modify URL on Wordpress OceanWP Logo Banner","uri":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/#step-1-create-a-child-theme"},{"categories":["Development"],"collections":null,"content":"Step 2: Export Settings from the Parent OceanWP Theme In your WordPress dashboard, go to \u0026ldquo;Theme Panel\u0026rdquo; under \u0026ldquo;OceanWP.\u0026rdquo; Navigate to the \u0026ldquo;Import/Export\u0026rdquo; tab. Click on the \u0026ldquo;Export\u0026rdquo; button to save your parent theme\u0026rsquo;s settings to a JSON file. ","date":"23-08-2019","objectID":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/:0:2","tags":null,"title":"Modify URL on Wordpress OceanWP Logo Banner","uri":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/#step-2-export-settings-from-the-parent-oceanwp-theme"},{"categories":["Development"],"collections":null,"content":"Step 3: Import Settings to OceanWP Child Theme In your WordPress dashboard, go to \u0026ldquo;Appearance\u0026rdquo; \u0026gt; \u0026ldquo;Theme Panel\u0026rdquo; under your OceanWP Child Theme. Navigate to the \u0026ldquo;Import/Export\u0026rdquo; tab. Click on the \u0026ldquo;Choose File\u0026rdquo; button to upload the JSON file you exported in the previous step. Click on the \u0026ldquo;Import\u0026rdquo; button to import the settings into your child theme. 
","date":"23-08-2019","objectID":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/:0:3","tags":null,"title":"Modify URL on Wordpress OceanWP Logo Banner","uri":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/#step-3-import-settings-to-oceanwp-child-theme"},{"categories":["Development"],"collections":null,"content":"Step 4: Edit functions.php and Add Logo URL Modification Function In your child theme\u0026rsquo;s functions.php file, you can add a function to modify the logo URL. Here\u0026rsquo;s an example function to change the logo URL: function modify_logo_url($html) { $custom_url = \u0026#39;https://your-custom-url.com\u0026#39;; // Replace with your desired URL $html = preg_replace(\u0026#39;/href=[\u0026#34;\\\u0026#39;]([^\u0026#34;\\\u0026#39;]+)[\u0026#34;\\\u0026#39;]/i\u0026#39;, \u0026#39;href=\u0026#34;\u0026#39; . $custom_url . \u0026#39;\u0026#34;\u0026#39;, $html); return $html; } add_filter(\u0026#39;ocean_logo_url\u0026#39;, \u0026#39;modify_logo_url\u0026#39;); Replace 'https://your-custom-url.com' with the URL you want the logo to link to. This function hooks into the ocean_logo_url filter provided by OceanWP and replaces the logo\u0026rsquo;s URL with your custom URL. Make sure to save the changes to your functions.php file. ","date":"23-08-2019","objectID":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/:0:4","tags":null,"title":"Modify URL on Wordpress OceanWP Logo Banner","uri":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/#step-4-edit-functionsphp-and-add-logo-url-modification-function"},{"categories":["Development"],"collections":null,"content":"Activate the Child Theme After creating the child theme and making the necessary modifications, you can activate the child theme from your WordPress dashboard. Your custom logo URL should now be active on your OceanWP website\u0026rsquo;s logo banner. 
Remember to back up your website and test the changes in a staging environment before making them on a live site to avoid any potential issues. ","date":"23-08-2019","objectID":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/:0:5","tags":null,"title":"Modify URL on Wordpress OceanWP Logo Banner","uri":"/posts/development/modify-url-on-wordpress-oceanwp-logo-banner/#activate-the-child-theme"},{"categories":["Development"],"collections":null,"content":"When setting up an Apache web server, it\u0026rsquo;s important to handle requests that don\u0026rsquo;t match any specific hostname. This is achieved through a default VirtualHost configuration. Learn how to create a default VirtualHost to efficiently manage such requests. \u0026lt;VirtualHost *:80\u0026gt; ServerName default DocumentRoot /var/www/default \u0026lt;Directory /var/www/default\u0026gt; Options FollowSymLinks AllowOverride None Require all granted \u0026lt;/Directory\u0026gt; \u0026lt;/VirtualHost\u0026gt; In this guide, you\u0026rsquo;ll discover how to set up the default VirtualHost configuration, enabling your server to handle unmatched requests effectively. This approach ensures a seamless user experience and proper organization of your web server. ","date":"22-08-2019","objectID":"/posts/development/creating-a-default-virtualhost-configuration-in-apache/:0:0","tags":null,"title":"Creating a Default VirtualHost Configuration in Apache","uri":"/posts/development/creating-a-default-virtualhost-configuration-in-apache/#"},{"categories":["Development"],"collections":null,"content":"The provided command is almost correct for sending an email using the sendEmail utility in the terminal on Ubuntu 18.04. However, there are a couple of issues with the formatting. 
Here\u0026rsquo;s the corrected version of the command. First, install the necessary packages: sudo apt install sendemail libio-socket-ssl-perl libnet-ssleay-perl Then, you can use the sendEmail command to send an email as follows: sendEmail -f no.reply@example.com -o tls=yes -t john@example.com -s smtp.gmail.com:587 -xu \u0026#34;no.reply@example.com\u0026#34; -xp \u0026#34;yourpassword\u0026#34; -u \u0026#34;[example-server] Server has been started\u0026#34; -m \u0026#34;example Server has just been started.\u0026#34; Make sure to replace \u0026quot;no.reply@example.com\u0026quot; and \u0026quot;yourpassword\u0026quot; with your actual Gmail email address and password. Additionally, ensure that you have allowed less secure apps in your Gmail settings for this to work. Remember that using your Gmail password in this way can be a security risk. Consider using an App Password or other secure authentication methods if available to you.","date":"21-08-2019","objectID":"/posts/development/send-email-using-teminal-bash-on-ubuntu-18-04/:0:0","tags":null,"title":"Send Email Using Terminal Bash on Ubuntu 18.04","uri":"/posts/development/send-email-using-teminal-bash-on-ubuntu-18-04/#"},{"categories":["Development"],"collections":null,"content":"In this tutorial, we\u0026rsquo;ll walk you through the process of muting a video using FFmpeg, a powerful command-line tool for video and audio processing. 
There are several ways to mute a video with FFmpeg, depending on your specific needs. We\u0026rsquo;ll cover muting a single video file and muting multiple video files in a folder while preserving metadata, including GPS information. ","date":"20-08-2019","objectID":"/posts/development/how-to-mute-a-video-using-ffmpeg/:0:0","tags":null,"title":"How to Mute a Video Using FFmpeg","uri":"/posts/development/how-to-mute-a-video-using-ffmpeg/#"},{"categories":["Development"],"collections":null,"content":"Mute a Single Video File To mute a single video file, you can use the following FFmpeg command: ffmpeg -i input_video.mp4 -c:v copy -an output_video.mp4 -i input_video.mp4: Specifies the input video file. -c:v copy: Copies the video codec without re-encoding, ensuring no loss in video quality. -an: Disables audio stream in the output, effectively muting the video. output_video.mp4: Specifies the name of the output video file. Replace input_video.mp4 with the name of your input video file and output_video.mp4 with the desired name for the muted video. This command will create a muted version of the video without altering the video quality. ","date":"20-08-2019","objectID":"/posts/development/how-to-mute-a-video-using-ffmpeg/:1:0","tags":null,"title":"How to Mute a Video Using FFmpeg","uri":"/posts/development/how-to-mute-a-video-using-ffmpeg/#mute-a-single-video-file"},{"categories":["Development"],"collections":null,"content":"Mute Multiple Video Files in a Folder If you have multiple video files in a folder that you want to mute while preserving metadata, you can use a shell script along with FFmpeg. 
Here\u0026rsquo;s an example script: #!/bin/bash # Loop through all video files in the current directory for file in *.MOV; do if [ -f \u0026#34;$file\u0026#34; ]; then # Rename the original file mv \u0026#34;$file\u0026#34; \u0026#34;${file}_original\u0026#34; # Mute the video using FFmpeg and preserve metadata ffmpeg -i \u0026#34;${file}_original\u0026#34; -c:v copy -an \u0026#34;$file\u0026#34; fi done Save this script in a file, e.g., mute_videos.sh, and make it executable with the following command: chmod +x mute_videos.sh Now, you can run the script in the folder containing your video files, and it will mute each video while keeping the metadata intact. ./mute_videos.sh Please note that this script assumes your video files have the .MOV extension. Adjust the script accordingly if your files have a different extension. ","date":"20-08-2019","objectID":"/posts/development/how-to-mute-a-video-using-ffmpeg/:2:0","tags":null,"title":"How to Mute a Video Using FFmpeg","uri":"/posts/development/how-to-mute-a-video-using-ffmpeg/#mute-multiple-video-files-in-a-folder"},{"categories":["Development"],"collections":null,"content":"Additional Note: Preserving Metadata In your original command, you used the -movflags use_metadata_tags and -map_metadata 0 options to preserve metadata. This is important if your videos contain metadata like GPS information. The examples provided in this tutorial also preserve metadata while muting the videos. ","date":"20-08-2019","objectID":"/posts/development/how-to-mute-a-video-using-ffmpeg/:3:0","tags":null,"title":"How to Mute a Video Using FFmpeg","uri":"/posts/development/how-to-mute-a-video-using-ffmpeg/#additional-note-preserving-metadata"},{"categories":["Development"],"collections":null,"content":"Alternative: Muting Videos Using iMovie on iOS If you prefer a graphical user interface, you can mute videos on iOS using iMovie: Open the Photos app on your iOS device. Select the video you want to mute. 
Tap the \u0026ldquo;Edit\u0026rdquo; button. Choose \u0026ldquo;Edit with iMovie.\u0026rdquo; In iMovie, tap the sound icon (usually located in the upper-left corner) until it becomes a muted sound icon. Tap \u0026ldquo;Done\u0026rdquo; and wait for the export process to complete. The video will be muted. This method is user-friendly and suitable for those who prefer not to use command-line tools. Now you have multiple options for muting videos, whether you prefer using FFmpeg on your computer or iMovie on your iOS device. Choose the method that best suits your needs and workflow. ","date":"20-08-2019","objectID":"/posts/development/how-to-mute-a-video-using-ffmpeg/:4:0","tags":null,"title":"How to Mute a Video Using FFmpeg","uri":"/posts/development/how-to-mute-a-video-using-ffmpeg/#alternative-muting-videos-using-imovie-on-ios"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Here\u0026rsquo;s an example of how you can configure your Apache web server to return a \u0026ldquo;403 Forbidden\u0026rdquo; error if the HTTPS is accessed directly using the IP address. This configuration assumes that you have the mod_ssl module installed and enabled in your Apache server. ","date":"19-08-2019","objectID":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/:0:0","tags":["linux","apache"],"title":"How to Return Forbidden if Apache HTTPS is Accessed Directly Using IP","uri":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 1: Create or Edit the Apache Configuration File Open the Apache configuration file in a text editor. The location of the configuration file may vary depending on your operating system and Apache installation. 
Common locations include: Ubuntu/Debian: /etc/apache2/sites-available/default-ssl.conf CentOS/RHEL: /etc/httpd/conf.d/ssl.conf ","date":"19-08-2019","objectID":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/:1:0","tags":["linux","apache"],"title":"How to Return Forbidden if Apache HTTPS is Accessed Directly Using IP","uri":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/#step-1-create-or-edit-the-apache-configuration-file"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 2: Add the Configuration Block Add a catch-all \u0026lt;VirtualHost\u0026gt; block for port 443 with the following configuration directives (note that the \u0026lt;IfModule\u0026gt; and \u0026lt;VirtualHost\u0026gt; tags open and close in matching pairs): \u0026lt;IfModule mod_ssl.c\u0026gt; \u0026lt;VirtualHost _default_:443\u0026gt; # Catch-all server name for requests that match no other virtual host ServerName default DocumentRoot /var/www/html \u0026lt;Directory /var/www/html\u0026gt; Options -Indexes Require all denied \u0026lt;/Directory\u0026gt; SSLEngine on SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined \u0026lt;/VirtualHost\u0026gt; \u0026lt;/IfModule\u0026gt; The Require all denied directive ensures that all requests to the document root (/var/www/html in this example) are denied. This means that if someone tries to access the website directly using the IP address, they will receive a \u0026ldquo;403 Forbidden\u0026rdquo; error. Make sure to replace /var/www/html with the actual path to your web root directory. 
","date":"19-08-2019","objectID":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/:2:0","tags":["linux","apache"],"title":"How to Return Forbidden if Apache HTTPS is Accessed Directly Using IP","uri":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/#step-2-add-the-configuration-block"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 3: Save and Exit Save the changes to the configuration file and exit the text editor. ","date":"19-08-2019","objectID":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/:3:0","tags":["linux","apache"],"title":"How to Return Forbidden if Apache HTTPS is Accessed Directly Using IP","uri":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/#step-3-save-and-exit"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 4: Restart Apache Restart the Apache web server to apply the new configuration. The command may vary depending on your operating system. Here are some common commands: Ubuntu/Debian: sudo service apache2 restart CentOS/RHEL: sudo systemctl restart httpd ","date":"19-08-2019","objectID":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/:4:0","tags":["linux","apache"],"title":"How to Return Forbidden if Apache HTTPS is Accessed Directly Using IP","uri":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/#step-4-restart-apache"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Conclusion By following the steps above, you can configure your Apache web server to return a \u0026ldquo;403 Forbidden\u0026rdquo; error when someone tries to access your website directly using the IP address over HTTPS. 
This adds an extra layer of security and ensures that your website is accessed through the intended domain name or hostname. ","date":"19-08-2019","objectID":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/:5:0","tags":["linux","apache"],"title":"How to Return Forbidden if Apache HTTPS is Accessed Directly Using IP","uri":"/posts/development/how-to-return-forbidden-if-apache-https-is-accessed-directly-using-ip/#conclusion"},{"categories":["Development"],"collections":null,"content":"Tmux is a versatile terminal multiplexer that allows you to manage multiple terminal sessions within a single window. Additionally, you can customize Tmux behavior to suit your preferences and automate tasks. This article presents a collection of Bash scripts designed to customize Tmux behavior and display an SSH banner when starting Tmux. Let\u0026rsquo;s explore each script\u0026rsquo;s purpose and functionality. ","date":"19-08-2019","objectID":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/:0:0","tags":null,"title":"Customizing Tmux Behavior and SSH Banner Display","uri":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/#"},{"categories":["Development"],"collections":null,"content":"Script 1: Show SSH Banner on Start Tmux if [[ -z \u0026#34;$TMUX\u0026#34; ]]; then tmux has-session \u0026amp;\u0026gt; /dev/null if [ $? -eq 1 ]; then shopt -q login_shell \u0026amp;\u0026amp; exec tmux new exit else shopt -q login_shell \u0026amp;\u0026amp; exec tmux attach exit fi else TMUX_WINDOW=$(tmux display-message -p \u0026#39;#{session_windows}\u0026#39;) if [ $TMUX_WINDOW == \u0026#34;1\u0026#34; ]; then shopt -q login_shell \u0026amp;\u0026amp; run-parts /etc/update-motd.d/ fi fi This script is designed to display an SSH banner when starting Tmux. It checks whether the TMUX environment variable is empty, which indicates that Tmux is not currently running. 
If Tmux is not running and the current shell is a login shell, it starts a new Tmux session. If Tmux is already running, it attaches to the existing session. If there\u0026rsquo;s only one window in the Tmux session, the script runs the /etc/update-motd.d/ script to show a message of the day for SSH sessions. ","date":"19-08-2019","objectID":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/:1:0","tags":null,"title":"Customizing Tmux Behavior and SSH Banner Display","uri":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/#script-1-show-ssh-banner-on-start-tmux"},{"categories":["Development"],"collections":null,"content":"Script 2: Show Only First Window on Tmux Session if [[ -z \u0026#34;$TMUX\u0026#34; ]]; then tmux has-session \u0026amp;\u0026gt; /dev/null if [ $? -eq 1 ]; then exec tmux new exit else exec tmux attach exit fi else TMUX_WINDOW=$(tmux display-message -p \u0026#39;#{session_windows}\u0026#39;) if [ $TMUX_WINDOW == \u0026#34;1\u0026#34; ]; then run-parts /etc/update-motd.d/ fi fi Similar to the first script, this one focuses on Tmux behavior. It checks whether Tmux is running. If it\u0026rsquo;s not, a new Tmux session is started. If Tmux is already running, the script attaches to the existing session. If the Tmux session contains only one window, it executes the /etc/update-motd.d/ script. ","date":"19-08-2019","objectID":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/:2:0","tags":null,"title":"Customizing Tmux Behavior and SSH Banner Display","uri":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/#script-2-show-only-first-window-on-tmux-session"},{"categories":["Development"],"collections":null,"content":"Script 3: Show Every Time Start New Tmux Session if [[ -z \u0026#34;$TMUX\u0026#34; ]]; then tmux has-session \u0026amp;\u0026gt;/dev/null if [ $? 
-eq 1 ]; then exec tmux new exit else exec tmux attach exit fi else run-parts /etc/update-motd.d/ # Show every time new Tmux session fi This script focuses on executing the /etc/update-motd.d/ script every time a new Tmux session is started, regardless of the number of windows. If Tmux is not running, a new session is started. If Tmux is already running, the script attaches to the existing session and executes the update script. ","date":"19-08-2019","objectID":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/:3:0","tags":null,"title":"Customizing Tmux Behavior and SSH Banner Display","uri":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/#script-3-show-every-time-start-new-tmux-session"},{"categories":["Development"],"collections":null,"content":"Script 4: Pause Before Executing Tmux if [[ -z \u0026#34;$TMUX\u0026#34; ]]; then read -n 1 -s -r -p \u0026#34;Press any key or wait 10 seconds to continue...\u0026#34; -t 10 tmux has-session \u0026amp;\u0026gt;/dev/null if [ $? -eq 1 ]; then exec tmux new exit else exec tmux attach exit fi fi This script adds an interactive feature by pausing before executing Tmux. It prompts the user to press any key or wait for 10 seconds. After the pause, the script checks if Tmux is running. If not, a new Tmux session is started. If Tmux is already running, the script attaches to the existing session. ","date":"19-08-2019","objectID":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/:4:0","tags":null,"title":"Customizing Tmux Behavior and SSH Banner Display","uri":"/posts/development/customizing-tmux-behavior-and-ssh-banner-display/#script-4-pause-before-executing-tmux"},{"categories":["DevOps"],"collections":null,"content":"If you\u0026rsquo;re experiencing issues with the Uncomplicated Firewall (UFW) not starting on boot in Ubuntu 18.04, there are a few steps you can take to troubleshoot and resolve the problem. Here\u0026rsquo;s a step-by-step guide to fix the issue. 
Please note: Modifying system files requires administrative privileges. Make sure you have the necessary permissions before proceeding. ","date":"18-08-2019","objectID":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/:0:0","tags":["linux"],"title":"UFW Not Starting on Boot in Ubuntu 18.04","uri":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/#"},{"categories":["DevOps"],"collections":null,"content":"Editing the UFW Service File Open a terminal or SSH session and log in to your Ubuntu 18.04 system. Use the following command to open the UFW service file in the Vim text editor: sudo vim /lib/systemd/system/ufw.service Inside the Vim editor, locate the line that reads Before=network.target in the [Unit] section. Replace it with After=network.target: move the cursor to that line, press dd to delete it, press O to open a new line in its place, and type After=network.target. Note that ordering directives such as After= belong in the [Unit] section, not in [Service]. Save the file and exit Vim by typing :wq. ","date":"18-08-2019","objectID":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/:1:0","tags":["linux"],"title":"UFW Not Starting on Boot in Ubuntu 18.04","uri":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/#editing-the-ufw-service-file"},{"categories":["DevOps"],"collections":null,"content":"Enabling the UFW Service After modifying the UFW service file, reload the systemd daemon configuration so the changes take effect: sudo systemctl daemon-reload Now, enable the UFW service to start on boot: sudo systemctl enable ufw Finally, reboot your system: sudo reboot After the system restarts, UFW should be started automatically during the boot process. 
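For reference, after the edit the [Unit] section of /lib/systemd/system/ufw.service should look roughly like the following sketch (based on the stock Ubuntu 18.04 unit; surrounding lines are left as shipped and may differ slightly on your system):

```ini
[Unit]
Description=Uncomplicated firewall
Documentation=man:ufw(8)
DefaultDependencies=no
After=network.target
```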
","date":"18-08-2019","objectID":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/:2:0","tags":["linux"],"title":"UFW Not Starting on Boot in Ubuntu 18.04","uri":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/#enabling-the-ufw-service"},{"categories":["DevOps"],"collections":null,"content":"Conclusion By removing the Before=network.target line and adding the After=network.target line in the UFW service file, you can ensure that UFW starts on boot in Ubuntu 18.04. Remember to reload the systemd daemon configuration and enable the UFW service before rebooting the system. If you encounter any issues or the problem persists, please consult the official Ubuntu documentation or seek assistance from the Ubuntu community forums for further support. ","date":"18-08-2019","objectID":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/:3:0","tags":["linux"],"title":"UFW Not Starting on Boot in Ubuntu 18.04","uri":"/posts/devops/ufw-not-starting-on-boot-in-ubuntu-18-04/#conclusion"},{"categories":["Development"],"collections":null,"content":"In Windows, you can filter events in the event logs based on specific criteria, such as the user account associated with an event. This can be particularly useful for security and auditing purposes. Below are examples of XML queries that filter Windows events by user account using XPath expressions. 
","date":"04-08-2019","objectID":"/posts/development/filtering-windows-events-by-user-account/:0:0","tags":null,"title":"Filtering Windows Events by User Account","uri":"/posts/development/filtering-windows-events-by-user-account/#"},{"categories":["Development"],"collections":null,"content":"Example 1: Filter Successful Logon Events (Event ID 4624) by User Account This example demonstrates how to filter successful logon events (Event ID 4624) in the Security event log for a specific user account, in this case, \u0026ldquo;john.doe.\u0026rdquo; \u0026lt;QueryList\u0026gt; \u0026lt;Query Id=\u0026#34;0\u0026#34; Path=\u0026#34;Security\u0026#34;\u0026gt; \u0026lt;Select Path=\u0026#34;Security\u0026#34;\u0026gt; *[ EventData[Data[@Name=\u0026#39;LogonType\u0026#39;]=\u0026#39;2\u0026#39;] and EventData[Data[@Name=\u0026#39;TargetUserName\u0026#39;]=\u0026#39;john.doe\u0026#39;] and System[(EventID=\u0026#39;4624\u0026#39;)] ] \u0026lt;/Select\u0026gt; \u0026lt;/Query\u0026gt; \u0026lt;/QueryList\u0026gt; In this query: Path=\u0026quot;Security\u0026quot; specifies that we are searching in the Security event log. EventData[Data[@Name='LogonType']='2'] filters events where the Logon Type is \u0026lsquo;2\u0026rsquo;, which represents an interactive logon at the local console (Remote Desktop sessions use logon type 10 instead). EventData[Data[@Name='TargetUserName']='john.doe'] filters events where the TargetUserName is \u0026lsquo;john.doe.\u0026rsquo; System[(EventID='4624')] further filters events to include only those with Event ID 4624, which corresponds to a successful logon event. 
","date":"04-08-2019","objectID":"/posts/development/filtering-windows-events-by-user-account/:1:0","tags":null,"title":"Filtering Windows Events by User Account","uri":"/posts/development/filtering-windows-events-by-user-account/#example-1-filter-successful-logon-events-event-id-4624-by-user-account"},{"categories":["Development"],"collections":null,"content":"Example 2: Filter Events by User Account This example demonstrates how to filter events in the Security event log for any occurrence of a specific user account, \u0026ldquo;john.doe.\u0026rdquo; \u0026lt;QueryList\u0026gt; \u0026lt;Query Id=\u0026#34;0\u0026#34; Path=\u0026#34;Security\u0026#34;\u0026gt; \u0026lt;Select Path=\u0026#34;Security\u0026#34;\u0026gt; *[ EventData[Data[@Name=\u0026#39;TargetUserName\u0026#39;]=\u0026#39;john.doe\u0026#39;] ] \u0026lt;/Select\u0026gt; \u0026lt;/Query\u0026gt; \u0026lt;/QueryList\u0026gt; In this query: Path=\u0026quot;Security\u0026quot; specifies that we are searching in the Security event log. EventData[Data[@Name='TargetUserName']='john.doe'] filters events where the TargetUserName is \u0026lsquo;john.doe,\u0026rsquo; irrespective of the event type. You can modify these queries by replacing \u0026ldquo;john.doe\u0026rdquo; with the specific username you want to filter events for. These queries can be used with tools like Windows Event Viewer or PowerShell to search and analyze Windows event logs based on user account criteria. ","date":"04-08-2019","objectID":"/posts/development/filtering-windows-events-by-user-account/:2:0","tags":null,"title":"Filtering Windows Events by User Account","uri":"/posts/development/filtering-windows-events-by-user-account/#example-2-filter-events-by-user-account"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re experiencing issues with SSHD on Windows 10 not accepting your public key, there are a few steps you can take to troubleshoot the problem. 
Follow the steps below to resolve the issue. ","date":"17-03-2019","objectID":"/posts/devops/sshd-windows-10-not-accepting-public-key-/:0:0","tags":["windows","ssh"],"title":"SSHD Windows 10 Not Accepting Public Key","uri":"/posts/devops/sshd-windows-10-not-accepting-public-key-/#"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 1: Open the SSHD Configuration File To begin, open the SSHD configuration file using a text editor. In this example, we\u0026rsquo;ll use the Bash shell and the vim editor. Open a command prompt and navigate to the following directory: C:\\ProgramData\\ssh Then, run the following command to open the SSHD configuration file: bash -c \u0026#39;vim sshd_config\u0026#39; ","date":"17-03-2019","objectID":"/posts/devops/sshd-windows-10-not-accepting-public-key-/:1:0","tags":["windows","ssh"],"title":"SSHD Windows 10 Not Accepting Public Key","uri":"/posts/devops/sshd-windows-10-not-accepting-public-key-/#step-1-open-the-sshd-configuration-file"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 2: Verify the AuthorizedKeysFile Path In the SSHD configuration file, look for the section that starts with Match Group administrators. Within this section, there should be a line specifying the path to the authorized keys file. By default, it should be set to: AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys Ensure that this line is not commented out (i.e., not preceded by a #), as it needs to be active for SSHD to recognize the authorized keys. 
","date":"17-03-2019","objectID":"/posts/devops/sshd-windows-10-not-accepting-public-key-/:2:0","tags":["windows","ssh"],"title":"SSHD Windows 10 Not Accepting Public Key","uri":"/posts/devops/sshd-windows-10-not-accepting-public-key-/#step-2-verify-the-authorizedkeysfile-path"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 3: Check File Permissions Next, verify the file permissions of the authorized keys file. By default, the file should be located at: C:\\ProgramData\\ssh\\administrators_authorized_keys Right-click on the file, select \u0026ldquo;Properties,\u0026rdquo; and navigate to the \u0026ldquo;Security\u0026rdquo; tab. Make sure that the user account you are connecting with has appropriate permissions to read the file. ","date":"17-03-2019","objectID":"/posts/devops/sshd-windows-10-not-accepting-public-key-/:3:0","tags":["windows","ssh"],"title":"SSHD Windows 10 Not Accepting Public Key","uri":"/posts/devops/sshd-windows-10-not-accepting-public-key-/#step-3-check-file-permissions"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 4: Restart the SSHD Service After making any changes to the SSHD configuration file or file permissions, you need to restart the SSHD service for the changes to take effect. Open a command prompt with administrative privileges and run the following command: net stop sshd net start sshd This will stop and start the SSHD service, allowing the changes to be applied. ","date":"17-03-2019","objectID":"/posts/devops/sshd-windows-10-not-accepting-public-key-/:4:0","tags":["windows","ssh"],"title":"SSHD Windows 10 Not Accepting Public Key","uri":"/posts/devops/sshd-windows-10-not-accepting-public-key-/#step-4-restart-the-sshd-service"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 5: Attempt SSH Connection Once the SSHD service is restarted, try connecting to your Windows 10 machine using the SSH key. 
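Beyond the checkbox permissions, the Windows OpenSSH server is strict about this file's ACL: it generally refuses the key if administrators_authorized_keys is readable by anyone other than Administrators and SYSTEM. A commonly used way to reset the ACL from an elevated prompt is sketched below (this is standard Win32-OpenSSH practice, not a command taken from this article):

```bat
icacls "C:\ProgramData\ssh\administrators_authorized_keys" /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"
```

After resetting the ACL, restart the SSHD service as described in Step 4.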
If the authorized keys file and permissions are correctly set, the public key should be accepted, and you should be able to log in successfully. If you\u0026rsquo;re still experiencing issues, double-check the SSHD configuration file for any typos or mistakes. Additionally, ensure that the public key you\u0026rsquo;re using is correctly formatted and matches the corresponding private key. By following these steps, you should be able to resolve the issue of SSHD on Windows 10 not accepting your public key. ","date":"17-03-2019","objectID":"/posts/devops/sshd-windows-10-not-accepting-public-key-/:5:0","tags":["windows","ssh"],"title":"SSHD Windows 10 Not Accepting Public Key","uri":"/posts/devops/sshd-windows-10-not-accepting-public-key-/#step-5-attempt-ssh-connection"},{"categories":["Development"],"collections":null,"content":"To share the Docker Daemon running on a Windows machine with other devices on your local network, you can use the netsh command to perform port forwarding. This will allow you to access the Docker Daemon remotely using the machine\u0026rsquo;s IP address and the forwarded port. Here\u0026rsquo;s how you can do it: Open Command Prompt as Administrator: To execute the netsh command, you need administrative privileges. Right-click on the Command Prompt and choose \u0026ldquo;Run as administrator.\u0026rdquo; Enable Docker Daemon Port: By default, the Docker Daemon listens on localhost (127.0.0.1) for security reasons. To enable it to listen on all interfaces, including your local network, you\u0026rsquo;ll need to modify the Docker Daemon configuration. Locate the daemon.json file located at C:\\ProgramData\\Docker\\config\\daemon.json and add the following configuration: Save the file and restart the Docker service for the changes to take effect. Add Port Proxy Rule: Use the netsh command to set up port forwarding from a specific IP address and port to the Docker Daemon\u0026rsquo;s IP address and port. 
In this example, we\u0026rsquo;ll forward port 2375 from the machine\u0026rsquo;s IP address 192.168.100.7 to localhost:2375 where the Docker Daemon is running. netsh interface portproxy add v4tov4 listenport=2375 connectaddress=127.0.0.1 connectport=2375 listenaddress=192.168.100.7 protocol=tcp This command forwards incoming traffic on port 2375 of the specified IP address (192.168.100.7) to the Docker Daemon running at 127.0.0.1:2375. Access Docker Daemon Remotely: Now, you should be able to access the Docker Daemon from other devices on your local network by using the IP address of the Windows machine (192.168.100.7 in this case) and port 2375. For example, you can use the following command to check if the Docker Daemon is reachable: docker -H tcp://192.168.100.7:2375 info Replace 192.168.100.7 with the actual IP address of your Windows machine. Remember that exposing Docker Daemon to the network without proper security measures can pose a security risk. Ensure that your network and Docker Daemon are properly secured and consider using TLS for encrypted communication if you plan to access the Docker Daemon over the network. ","date":"17-03-2019","objectID":"/posts/development/sharing-docker-daemon-from-windows-to-local-network/:0:0","tags":null,"title":"Sharing Docker Daemon from Windows to Local Network","uri":"/posts/development/sharing-docker-daemon-from-windows-to-local-network/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re experiencing difficulties connecting to an L2TP VPN (Layer 2 Tunneling Protocol) on Windows 10, there are a few troubleshooting steps you can follow. This guide will walk you through the process of adding a registry key and ensuring the necessary protocols are allowed to establish a successful L2TP VPN connection. 
","date":"13-03-2019","objectID":"/posts/software/troubleshooting-l2tp-vpn-connection-issue-on-windows-10/:0:0","tags":["windows","vpn"],"title":"Troubleshooting L2TP VPN Connection Issue on Windows 10","uri":"/posts/software/troubleshooting-l2tp-vpn-connection-issue-on-windows-10/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Adding the Registry Key To begin, you\u0026rsquo;ll need to add a registry key that can help resolve L2TP VPN connectivity issues. Follow the steps below: Press Windows + R on your keyboard to open the Run dialog box. Type regedit and hit Enter to open the Registry Editor. In the left-hand pane of the Registry Editor, navigate to the following key: HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\services\\PolicyAgent Right-click on the PolicyAgent folder, select New, and then choose DWORD (32-bit) Value. Rename the newly created DWORD value to AssumeUDPEncapsulationContextOnSendRule. Double-click on the AssumeUDPEncapsulationContextOnSendRule value and set its Value data to 2. Click OK to save the changes. ","date":"13-03-2019","objectID":"/posts/software/troubleshooting-l2tp-vpn-connection-issue-on-windows-10/:1:0","tags":["windows","vpn"],"title":"Troubleshooting L2TP VPN Connection Issue on Windows 10","uri":"/posts/software/troubleshooting-l2tp-vpn-connection-issue-on-windows-10/#adding-the-registry-key"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Allowing the Necessary Protocols In addition to the registry key, it\u0026rsquo;s important to ensure that the required protocols are allowed in order to establish an L2TP VPN connection on Windows 10. Here\u0026rsquo;s how you can check and enable the necessary protocols: Press Windows + X on your keyboard to open the Power User menu, then select Network Connections from the list. In the Network Connections window, locate the VPN connection you\u0026rsquo;re trying to establish. 
Right-click on the VPN connection and choose Properties. In the Properties window, select the Security tab. Under the Security tab, click on the Allow these protocols checkbox. Make sure the following protocols are selected: Challenge Handshake Authentication Protocol (CHAP) Microsoft CHAP version 2 (MS-CHAP v2) Click OK to save the changes. Once you have completed these steps, try connecting to your L2TP VPN again. The changes you made to the registry key and the enabled protocols should help resolve the connectivity issues on Windows 10. Remember to restart your computer after making any registry modifications to ensure the changes take effect properly. Hopefully, this guide has helped you troubleshoot and resolve your L2TP VPN connection issue on Windows 10. If the problem persists, you may want to contact your VPN provider for further assistance or consult Windows support resources for additional troubleshooting steps. ","date":"13-03-2019","objectID":"/posts/software/troubleshooting-l2tp-vpn-connection-issue-on-windows-10/:2:0","tags":["windows","vpn"],"title":"Troubleshooting L2TP VPN Connection Issue on Windows 10","uri":"/posts/software/troubleshooting-l2tp-vpn-connection-issue-on-windows-10/#allowing-the-necessary-protocols"},{"categories":["Development"],"collections":null,"content":"In Ubuntu 18.04, the root account is typically disabled for security reasons. However, there might be situations where you need to enable root login temporarily. Please note that enabling root login should be done with caution, and it\u0026rsquo;s recommended to only do so when absolutely necessary. Here\u0026rsquo;s a step-by-step guide on how to allow root login in Ubuntu 18.04: Step 1: Set the Root Password Before enabling root login, you need to set a password for the root account. Open a terminal and run the following command: sudo passwd You\u0026rsquo;ll be prompted to enter a new password for the root account. 
After entering and confirming the password, the root account will have a password associated with it. Step 2: Edit the SSH Configuration File Now, you need to edit the SSH configuration file to allow root login. You can use a text editor like nano or vim for this purpose. Here, we\u0026rsquo;ll use nano: sudo nano /etc/ssh/sshd_config Look for the line that says PermitRootLogin prohibit-password within the sshd_config file. It might be commented out with a \u0026lsquo;#\u0026rsquo; at the beginning of the line. Change it to: PermitRootLogin yes Save your changes and exit the text editor. Step 3: Restart the SSH Service To apply the changes, you\u0026rsquo;ll need to restart the SSH service: sudo service ssh restart That\u0026rsquo;s it! You\u0026rsquo;ve allowed root login in Ubuntu 18.04. However, remember that enabling root login can pose security risks, so it\u0026rsquo;s essential to use it judiciously and disable it once you\u0026rsquo;ve finished the task that required it. Important Security Note: After you\u0026rsquo;ve completed your tasks that required root access, it\u0026rsquo;s highly recommended that you disable root login again for security reasons. You can do this by reversing the changes you made in sshd_config, setting PermitRootLogin back to prohibit-password, and then restarting the SSH service. Additionally, it\u0026rsquo;s a good practice to use the sudo command with your regular user account for administrative tasks rather than relying on the root account. 
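The sshd_config change from Step 2 can also be scripted. The sketch below performs the substitution on a throwaway copy so the real /etc/ssh/sshd_config is untouched (the file contents here are a stand-in):

```shell
# Make a scratch copy standing in for /etc/ssh/sshd_config.
cfg=$(mktemp)
printf '%s\n' '#PermitRootLogin prohibit-password' > "$cfg"

# Uncomment the directive if necessary and force it to "yes".
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' "$cfg"

grep '^PermitRootLogin' "$cfg"
```

On a real system you would run the same sed expression with sudo against /etc/ssh/sshd_config and then restart the ssh service; note that \? in the pattern is a GNU sed extension.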
","date":"10-03-2019","objectID":"/posts/development/how-to-allow-root-login-in-ubuntu-1804/:0:0","tags":null,"title":"How to Allow Root Login in Ubuntu 18.04","uri":"/posts/development/how-to-allow-root-login-in-ubuntu-1804/#"},{"categories":["Development"],"collections":null,"content":"rsync is a powerful and widely used command-line utility in Unix-like operating systems that facilitates efficient and reliable file synchronization and data transfer between directories or across different machines. It is particularly useful for remote backups, mirroring, and incremental transfers. The name \u0026ldquo;rsync\u0026rdquo; stands for \u0026ldquo;remote synchronization.\u0026rdquo; ","date":"05-03-2019","objectID":"/posts/development/introduction-to-rsync/:0:0","tags":null,"title":"Introduction to rsync","uri":"/posts/development/introduction-to-rsync/#"},{"categories":["Development"],"collections":null,"content":"Basic Syncing ","date":"05-03-2019","objectID":"/posts/development/introduction-to-rsync/:1:0","tags":null,"title":"Introduction to rsync","uri":"/posts/development/introduction-to-rsync/#basic-syncing"},{"categories":["Development"],"collections":null,"content":"Syncing Folder src into dest To synchronize the contents of a local folder src into another local folder dest, you can use the following command: rsync --progress -avz ./src /dest --progress: Displays the progress of the synchronization. -a: Archive mode, which preserves various file attributes and ensures recursive copying. -v: Verbose mode, providing more detailed output. -z: Enables compression during data transfer to reduce bandwidth usage. 
","date":"05-03-2019","objectID":"/posts/development/introduction-to-rsync/:1:1","tags":null,"title":"Introduction to rsync","uri":"/posts/development/introduction-to-rsync/#syncing-folder-src-into-dest"},{"categories":["Development"],"collections":null,"content":"Syncing with Custom Port You can specify a custom SSH port while using rsync to sync files over SSH. For example: rsync -e \u0026#34;ssh -p $portNumber\u0026#34; --progress -avz ./src user@remoteHost:/dest Replace $portNumber with the actual port number and user@remoteHost:/dest with your remote destination; the -e option only takes effect when one side of the transfer is remote. This command establishes an SSH connection on the specified port for secure data transfer. ","date":"05-03-2019","objectID":"/posts/development/introduction-to-rsync/:1:2","tags":null,"title":"Introduction to rsync","uri":"/posts/development/introduction-to-rsync/#syncing-with-custom-port"},{"categories":["Development"],"collections":null,"content":"Syncing the Content of src into dest To synchronize the content of the source directory src into the destination directory dest (so that the contents of src are directly placed inside dest), you can use the following command: rsync --progress -avz ./src/ /dest ","date":"05-03-2019","objectID":"/posts/development/introduction-to-rsync/:1:3","tags":null,"title":"Introduction to rsync","uri":"/posts/development/introduction-to-rsync/#syncing-the-content-of-src-into-dest"},{"categories":["Development"],"collections":null,"content":"Incremental Sync Incremental syncing is useful for transferring only the changed parts of files or new files, minimizing the amount of data transferred. Here\u0026rsquo;s a command for incremental syncing: rsync -abPv --backup-dir=old_`date +%F-%T` --delete --exclude=old_* ./source ./destination Explanation of options used: -a: Archive mode, includes recursive copying and preserves attributes. -b: Creates backup copies of files that are being replaced or deleted. -P: Combines --progress and --partial for better handling of interrupted transfers. -v: Verbose mode for detailed output. 
--backup-dir: Specifies a directory where backup copies of replaced/deleted files are stored. --delete: Deletes extraneous files from the destination that don\u0026rsquo;t exist in the source. --exclude: Excludes files or patterns from the sync process. The date +%F-%T command generates a timestamp in the format YYYY-MM-DD-HH:MM:SS, which is used to create a timestamped backup directory. Overall, rsync is a versatile tool for efficient and reliable file synchronization, making it an essential utility for managing and transferring data across systems. ","date":"05-03-2019","objectID":"/posts/development/introduction-to-rsync/:2:0","tags":null,"title":"Introduction to rsync","uri":"/posts/development/introduction-to-rsync/#incremental-sync"},{"categories":["Development"],"collections":null,"content":"In Linux, the tar command is commonly used for archiving and extracting files and directories. Tar archives can be compressed to save space using various compression algorithms like gzip (*.tar.gz) or bzip2 (*.tar.bz2). This guide will cover how to create, extract, and work with tar compressed archives in Linux. ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:0:0","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#"},{"categories":["Development"],"collections":null,"content":"Compressing Files and Directories with Tar ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:1:0","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#compressing-files-and-directories-with-tar"},{"categories":["Development"],"collections":null,"content":"1. 
Create a Tar Archive To create a tar archive of a folder, use the tar -cvpf command: tar -cvpf file.tar folderToCompress ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:1:1","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#1-create-a-tar-archive"},{"categories":["Development"],"collections":null,"content":"2. Create a Gzipped Tar Archive To create a gzipped tar archive (commonly referred to as a .tar.gz file), use the -z option along with tar -czvpf: tar -czvpf file.tar.gz folderToCompress ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:1:2","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#2-create-a-gzipped-tar-archive"},{"categories":["Development"],"collections":null,"content":"3. Create a Gzipped Tar Archive Without Parent Directory To create a gzipped tar archive without including the parent directory in the archive, use the -C option: tar -C folderPath -czvpf file.tar.gz selectedDir ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:1:3","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#3-create-a-gzipped-tar-archive-without-parent-directory"},{"categories":["Development"],"collections":null,"content":"Extracting Tar Archives ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:2:0","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#extracting-tar-archives"},{"categories":["Development"],"collections":null,"content":"1. 
Extract a Tar Archive To extract the contents of a tar archive, use the tar -xvpf command: tar -xvpf file.tar ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:2:1","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#1-extract-a-tar-archive"},{"categories":["Development"],"collections":null,"content":"2. Extract a Gzipped Tar Archive To extract the contents of a gzipped tar archive, use tar -xzvpf: tar -xzvpf file.tar.gz ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:2:2","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#2-extract-a-gzipped-tar-archive"},{"categories":["Development"],"collections":null,"content":"3. Extract with Specific Folder To extract the contents of a gzipped tar archive into a specific folder, use the -C option: tar -C ~/example.com -xzvpf file.tgz docker-composes/example.com ","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:2:3","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#3-extract-with-specific-folder"},{"categories":["Development"],"collections":null,"content":"Listing Contents of a Tar Archive To list the contents of a tar archive, use the tar -tvf command: tar -tvf file.tar.gz These commands should cover most of your needs for creating, extracting, and listing the contents of tar compressed archives in Linux. Make sure to replace file.tar, folderToCompress, folderPath, selectedDir, and other placeholders with your actual file and directory names as needed. 
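The create, list, and extract commands above compose into a round trip, sketched here in a temporary directory (folder and file names are placeholders):

```shell
# Scratch directory with something to archive.
work=$(mktemp -d)
cd "$work"
mkdir -p folderToCompress
echo data > folderToCompress/notes.txt

# Create a gzipped archive, list its contents, then extract into ./restore.
tar -czvpf file.tar.gz folderToCompress
tar -tvf file.tar.gz
mkdir restore
tar -C restore -xzvpf file.tar.gz

cat restore/folderToCompress/notes.txt   # prints: data
```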
","date":"05-03-2019","objectID":"/posts/development/working-with-tar-compressed-archives-in-linux/:3:0","tags":null,"title":"Working with Tar Compressed Archives in Linux","uri":"/posts/development/working-with-tar-compressed-archives-in-linux/#listing-contents-of-a-tar-archive"},{"categories":["Development"],"collections":null,"content":"The provided command is a shell script written in Bash that iterates through directories in the current directory, checks if an archive file (.tar.gz) already exists for each directory, and if not, it creates a compressed archive of the directory using the tar command. Here\u0026rsquo;s an explanation of what the script does: for dir in `find . -maxdepth 1 -type d | grep -v \u0026#34;^\\.$\u0026#34; `; do if ! [ -e ${dir}.tar.gz ] ; then tar -cvzf ${dir}.tar.gz ${dir} fi done for dir in find . -maxdepth 1 -type d | grep -v \u0026ldquo;^.$\u0026rdquo; ; do: This line starts a loop that iterates through all subdirectories (excluding the current directory) in the current directory. if ! [ -e ${dir}.tar.gz ] ; then: This line checks if an archive file with the name ${dir}.tar.gz does not exist in the current directory. Inside the if block, tar -cvzf ${dir}.tar.gz ${dir} creates a compressed archive of that subdirectory (${dir}) with the name ${dir}.tar.gz. Here\u0026rsquo;s what the options used with the tar command do: -c: Create a new archive. -v: Verbosely list the files processed. -z: Compress the archive using gzip. -f: Specifies the archive file\u0026rsquo;s name. The done keyword marks the end of the loop. So, this script essentially compresses each subdirectory in the current directory into a .tar.gz archive file if such an archive does not already exist for that subdirectory. 
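Reassembled with its line breaks restored, the loop reads as below; the demo sets up hypothetical directories in a scratch location so it can be run as-is (-v is dropped only to keep output quiet):

```shell
# Scratch setup: two subdirectories, one of which already "has" an archive.
work=$(mktemp -d)
cd "$work"
mkdir -p alpha beta
touch beta.tar.gz   # placeholder: pretend beta was archived earlier

# Archive each subdirectory unless a matching .tar.gz already exists.
# Note: this word-splitting approach assumes directory names without spaces.
for dir in $(find . -maxdepth 1 -type d | grep -v '^\.$'); do
    if ! [ -e "${dir}.tar.gz" ]; then
        tar -czf "${dir}.tar.gz" "${dir}"
    fi
done

ls ./*.tar.gz   # alpha.tar.gz was created; beta.tar.gz left untouched
```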
","date":"03-03-2019","objectID":"/posts/development/compress-each-folder-with-one-archive-if-archive-is-not-exist/:0:0","tags":null,"title":"Compress Each Folder with one archive if archive is not exist","uri":"/posts/development/compress-each-folder-with-one-archive-if-archive-is-not-exist/#"},{"categories":["Development"],"collections":null,"content":"The provided Bash commands are useful for listing and deleting files, including dotfiles, except for the newest ones. Here\u0026rsquo;s a breakdown of what each command does: ","date":"03-03-2019","objectID":"/posts/development/delete-except-newest-files-bash/:0:0","tags":null,"title":"Delete Except Newest Files Bash","uri":"/posts/development/delete-except-newest-files-bash/#"},{"categories":["Development"],"collections":null,"content":"List all files except the newest three: ls -t | tail -n +4 ls -t: Lists all files in the current directory sorted by modification time in descending order (newest first). tail -n +4: Displays all lines starting from the fourth line. In this context, it shows all files except the newest three. ","date":"03-03-2019","objectID":"/posts/development/delete-except-newest-files-bash/:0:1","tags":null,"title":"Delete Except Newest Files Bash","uri":"/posts/development/delete-except-newest-files-bash/#list-all-files-except-the-newest-three"},{"categories":["Development"],"collections":null,"content":"Delete those files: ls -t | tail -n +4 | xargs rm -- xargs: Takes the output from the previous command (the list of files to delete) and passes them as arguments to the rm (remove) command. rm --: Deletes the specified files. 
","date":"03-03-2019","objectID":"/posts/development/delete-except-newest-files-bash/:0:2","tags":null,"title":"Delete Except Newest Files Bash","uri":"/posts/development/delete-except-newest-files-bash/#delete-those-files"},{"categories":["Development"],"collections":null,"content":"List all dotfiles except the newest three: ls -At | tail -n +4 ls -At: Lists all files, including dotfiles, sorted by modification time in descending order. tail -n +4: Displays all lines starting from the fourth line. It shows all dotfiles except the newest three. ","date":"03-03-2019","objectID":"/posts/development/delete-except-newest-files-bash/:0:3","tags":null,"title":"Delete Except Newest Files Bash","uri":"/posts/development/delete-except-newest-files-bash/#list-all-dotfiles-except-the-newest-three"},{"categories":["Development"],"collections":null,"content":"Delete dotfiles: ls -At | tail -n +4 | xargs rm -- xargs: Takes the output from the previous command (the list of dotfiles to delete) and passes them as arguments to the rm (remove) command. rm --: Deletes the specified dotfiles. It\u0026rsquo;s important to use these commands with caution, especially the ones that involve deletion (xargs rm --). Deleting files is irreversible, and there\u0026rsquo;s no easy way to recover them if you make a mistake. Make sure you\u0026rsquo;re in the correct directory and that you are certain about the files you want to delete before running these commands. Additionally, always have backups of important data before performing mass file deletions to avoid accidental data loss. 
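A further caution: parsing ls output breaks on filenames containing spaces or newlines. A safer sketch of the same \u0026ldquo;keep the newest three\u0026rdquo; logic, using null-delimited records throughout (GNU findutils and coreutils assumed, since it relies on -printf and the -z flags):

```shell
# Deletes files in the CURRENT directory: newest-first by modification time,
# null-delimited; skip the first three records, delete the rest
find . -maxdepth 1 -type f -printf '%T@ %p\0' |
  sort -z -rn |
  tail -z -n +4 |
  cut -z -d' ' -f2- |
  xargs -0 --no-run-if-empty rm --
```

The %T@ timestamp prefix is only used for sorting; cut strips it before the paths reach rm.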
","date":"03-03-2019","objectID":"/posts/development/delete-except-newest-files-bash/:0:4","tags":null,"title":"Delete Except Newest Files Bash","uri":"/posts/development/delete-except-newest-files-bash/#delete-dotfiles"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"If you are unable to find the \u0026ldquo;sudo tmutil disablelocal\u0026rdquo; command to remove all local copies of Time Machine snapshots, you can manually delete them using the following steps. ","date":"02-02-2019","objectID":"/posts/software/removing-local-time-machine-snapshots/:0:0","tags":["mac"],"title":"Removing Local Time Machine Snapshots","uri":"/posts/software/removing-local-time-machine-snapshots/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 1: List Local Snapshot Dates To begin, open the Terminal application on your Mac and enter the following command: tmutil listlocalsnapshotdates / | grep 2019 | while read f; do tmutil deletelocalsnapshots $f; done In this command, \u0026ldquo;grep 2019\u0026rdquo; can be replaced with any text you want to use as a filter to limit the snapshots to be removed. This helps ensure that only specific snapshots are targeted for deletion. ","date":"02-02-2019","objectID":"/posts/software/removing-local-time-machine-snapshots/:1:0","tags":["mac"],"title":"Removing Local Time Machine Snapshots","uri":"/posts/software/removing-local-time-machine-snapshots/#step-1-list-local-snapshot-dates"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Explanation of the Command Let\u0026rsquo;s break down the command to understand what each part does: tmutil listlocalsnapshotdates /: This lists all the local snapshot dates on your system. grep 2019: This filters the snapshot dates and only selects the ones that contain the text \u0026ldquo;2019\u0026rdquo; (or any text you specify). You can modify this part based on your requirements. 
while read f; do tmutil deletelocalsnapshots $f; done: This iterates over each snapshot date and deletes the corresponding local snapshot using the tmutil deletelocalsnapshots command. ","date":"02-02-2019","objectID":"/posts/software/removing-local-time-machine-snapshots/:2:0","tags":["mac"],"title":"Removing Local Time Machine Snapshots","uri":"/posts/software/removing-local-time-machine-snapshots/#explanation-of-the-command"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Important Notes This process only removes local copies of Time Machine snapshots and does not affect any backups stored on external drives or network locations. Please exercise caution when using Terminal commands. Make sure you enter the commands correctly to avoid any unintended consequences. Before proceeding with any modifications, it is advisable to have a backup of your important data. I hope this guide helps you remove the local Time Machine snapshots successfully. If you have any further questions, feel free to ask. ","date":"02-02-2019","objectID":"/posts/software/removing-local-time-machine-snapshots/:3:0","tags":["mac"],"title":"Removing Local Time Machine Snapshots","uri":"/posts/software/removing-local-time-machine-snapshots/#important-notes"},{"categories":["Development"],"collections":null,"content":"Avahi is an open-source implementation of zero-configuration networking, also known as Bonjour or mDNS, which allows devices to automatically discover and communicate with each other on a local network without requiring any manual configuration. This guide will walk you through the process of setting up avahi-daemon on Ubuntu, enabling you to reach the hostname ubuntu.local from the host OS, Samba, and through network discovery. 
Step 1: Install Avahi Packages Open a terminal and execute the following command to install the required Avahi packages: sudo apt-get install avahi-daemon avahi-discover avahi-utils libnss-mdns mdns-scan Step 2: Configure Avahi Daemon Open the Avahi daemon configuration file using a text editor of your choice. In this example, we\u0026rsquo;ll use the Nano editor: sudo nano /etc/avahi/avahi-daemon.conf Locate the line that begins with #domain-name and uncomment it by removing the # at the beginning of the line. Set the domain name to .local: domain-name=.local Save the changes and exit the text editor (in Nano, press Ctrl + O to save and Ctrl + X to exit). Step 3: Restart Avahi Daemon After making the configuration changes, you\u0026rsquo;ll need to restart the Avahi daemon for the changes to take effect: sudo service avahi-daemon restart Step 4: Verify Avahi Setup You can now verify that Avahi is working by using mdns-scan or avahi-discover: To scan for services announced via mDNS, run mdns-scan (it takes no hostname argument): mdns-scan To confirm that the hostname itself resolves, run: avahi-resolve -n ubuntu.local To use avahi-discover, simply open it from the applications menu or run: avahi-discover You should see your ubuntu.local hostname listed, along with its IP address, indicating that Avahi is successfully resolving the hostname on the local network. Step 5: Accessing via Samba If you\u0026rsquo;re using Samba for file sharing, you should now be able to access your Ubuntu machine using the ubuntu.local hostname from other devices on the network. Remember that Avahi only works within the local network. Devices outside your local network won\u0026rsquo;t be able to resolve the .local hostname. Conclusion By setting up avahi-daemon on Ubuntu, you\u0026rsquo;ve enabled hostname resolution via mDNS. This allows you to conveniently access your Ubuntu machine using the ubuntu.local hostname from the host OS, Samba, and through network discovery. This can be especially useful in local network environments where manual IP configuration might be cumbersome. 
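For reference, the relevant part of /etc/avahi/avahi-daemon.conf ends up looking roughly like this (a minimal sketch; the stock file ships with most entries commented out, and everything here other than domain-name is a default you do not need to change):

```ini
[server]
# Uncommented per Step 2; Avahi registers the host under this mDNS domain
domain-name=.local
# Leave host-name commented out to advertise the system hostname (e.g. ubuntu.local)
#host-name=
use-ipv4=yes
```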
","date":"31-01-2019","objectID":"/posts/development/setting-up-avahi-daemon-on-ubuntu-for-hostname-resolution/:0:0","tags":null,"title":"Setting up avahi-daemon on Ubuntu for hostname resolution","uri":"/posts/development/setting-up-avahi-daemon-on-ubuntu-for-hostname-resolution/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"The ATIV Smart PC Pro is a versatile device that combines the functionality of a tablet and a laptop. However, like any electronic device, it can experience issues from time to time. In this article, we will discuss three common problems that users may encounter with the ATIV Smart PC Pro and provide step-by-step instructions to resolve them. We will cover the following issues: mouse double tap not working, brightness control not functioning properly, and difficulties in uninstalling the Samsung Update software. ","date":"29-08-2018","objectID":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/:0:0","tags":["windows"],"title":"Troubleshooting Common Issues with the ATIV Smart PC Pro","uri":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Mouse Double Tap Not Working If you are experiencing issues with the double tap functionality of your ATIV Smart PC Pro\u0026rsquo;s mouse, you can try reinstalling the mouse drivers. Here\u0026rsquo;s how: Open the Start menu and type \u0026ldquo;Device Manager\u0026rdquo; to launch the Device Manager. Locate the \u0026ldquo;Mice and other pointing devices\u0026rdquo; category and expand it. Right-click on the mouse driver and select \u0026ldquo;Uninstall device.\u0026rdquo; Restart your computer. Windows will automatically reinstall the mouse drivers. 
","date":"29-08-2018","objectID":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/:1:0","tags":["windows"],"title":"Troubleshooting Common Issues with the ATIV Smart PC Pro","uri":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/#mouse-double-tap-not-working"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Brightness Control Not Working If you find that the brightness control on your ATIV Smart PC Pro is not functioning properly, follow these steps to resolve the issue: Open the Start menu and type \u0026ldquo;Device Manager\u0026rdquo; to open the Device Manager. Expand the \u0026ldquo;Monitors\u0026rdquo; category. Right-click on the monitor driver and select \u0026ldquo;Uninstall device.\u0026rdquo; Restart your computer. Windows will automatically detect and install the appropriate Plug and Play (PnP) monitor driver. Additionally, ensure that you have the correct Intel 4000 graphics driver provided by Samsung installed. You can download it from the official Samsung support website. ","date":"29-08-2018","objectID":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/:2:0","tags":["windows"],"title":"Troubleshooting Common Issues with the ATIV Smart PC Pro","uri":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/#brightness-control-not-working"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Uninstalling Samsung Update Software If you\u0026rsquo;re having trouble uninstalling the Samsung Update software from your ATIV Smart PC Pro, you can use the following steps to remove it manually: Open a PowerShell window with administrator privileges. To do this, right-click on the Start menu, select \u0026ldquo;Windows PowerShell (Admin)\u0026rdquo;. 
In the PowerShell window, type the following command and press Enter: Get-WmiObject -Class Win32_Product | Where-Object {$_.Name -like '*Samsung Update*'} | Select-Object -Property IdentifyingNumber. Note down the displayed \u0026ldquo;IdentifyingNumber\u0026rdquo; of the Samsung Update software. Open the Registry Editor by pressing Windows key + R, typing \u0026ldquo;regedit,\u0026rdquo; and hitting Enter. In the Registry Editor, navigate to HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall. Search for the registry key that matches the IdentifyingNumber you noted down earlier. Right-click on the matching registry key and select \u0026ldquo;Delete\u0026rdquo; to remove it. Close the Registry Editor and restart your computer. ","date":"29-08-2018","objectID":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/:3:0","tags":["windows"],"title":"Troubleshooting Common Issues with the ATIV Smart PC Pro","uri":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/#uninstalling-samsung-update-software"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Conclusion By following the troubleshooting steps provided above, you should be able to resolve common issues with your ATIV Smart PC Pro. If you encounter any further problems, it is recommended to consult the Samsung support website or contact their customer support for further assistance. ","date":"29-08-2018","objectID":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/:4:0","tags":["windows"],"title":"Troubleshooting Common Issues with the ATIV Smart PC Pro","uri":"/posts/software/troubleshooting-common-issues-with-the-ativ-smart-pc-pro/#conclusion"},{"categories":["Productivity"],"collections":null,"content":"In Software engineering, requirements gathering has multiple techniques. User stories are one of the most popular techniques in agile development. 
It\u0026rsquo;s a way to document stakeholders\u0026rsquo; requirements in an informal manner. The primary focus of a user story is the value a requirement delivers rather than a detailed specification of each functionality. Basically, a user story is a short statement of the potential value that a specific stakeholder believes he/she would achieve from the solution/system. In addition, user stories are always complemented with Acceptance Criteria. Those criteria verify that the proposed solution meets the stakeholders\u0026rsquo; objectives. This article describes the components of a user story and gives tips for writing a good one. ","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:0:0","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#"},{"categories":["Productivity"],"collections":null,"content":"What is a User Story? In a nutshell, a user story is a statement describing a feature from the perspective of a specific user of the system. User stories follow this template: As a \u0026lt;system user – persona\u0026gt;, I want to \u0026lt;goal – feature\u0026gt;, so that I \u0026lt;reason – business value\u0026gt; Example: As an anonymous user, I want to create an account on xyz platform, so that I can register as a contributor or subscriber. Handing a user story like the above to the dev team is not explanatory enough. It\u0026rsquo;s missing the most important part of a user story, the Acceptance Criteria, in addition to the Edge Cases. ","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:1:0","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#what-is-a-user-story"},{"categories":["Productivity"],"collections":null,"content":"What are User Acceptance Criteria? 
Acceptance Criteria is a set of conditions that need to be met in order to fulfill the user goal. This is where the product owners collaborate with the developers and the QA team to brainstorm on the different conditions. Acceptance Criteria should cover both perspectives: business and technical. Example: As an anonymous user, I want to learn about xyz platform. ","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:2:0","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#what-are-user-acceptance-criteria"},{"categories":["Productivity"],"collections":null,"content":"Tips for Writing Good User Stories ","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:3:0","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#tips-for-writing-good-user-stories"},{"categories":["Productivity"],"collections":null,"content":"Epics are good for a start An epic is a large user story that can be broken into smaller user stories using the checklist defined above. Epics are high-level features or headlines in the system; they can be easily defined in the early stage of product backlog creation. Then, they can be broken down over sprints. Example: As an anonymous user, I want to learn about xyz platform. ","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:3:1","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#epics-are-good-for-a-start"},{"categories":["Productivity"],"collections":null,"content":"Acceptance Criteria key success factor to done definition Providing clear conditions that describe how the user story will be done from the perspective of the customer or the product owner is essential for the team to develop it. 
The QA team uses these criteria to generate test cases, so they need to be detailed and concise. ","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:3:2","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#acceptance-criteria-key-success-factor-to-done-definition"},{"categories":["Productivity"],"collections":null,"content":"Business Value should be there Having the business value in the user story body is debatable in some cases. However, it plays an important role in explaining to the team why this user story is a feature for the customer and what he wants to achieve with it. Adding the purpose of the story brings up a healthy discussion of \u0026ldquo;I don\u0026rsquo;t believe that the \u0026lt;want to…\u0026gt; part fulfills the \u0026lt;so that I…\u0026gt; part.\u0026rdquo; This is where we face the challenge of implementing what the customer wants, not what he needs. Example: As a subscriber user, I want to view product trade information so that I can decide which product to invest in. ","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:3:3","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#business-value-should-be-there"},{"categories":["Productivity"],"collections":null,"content":"Acceptance Criteria ONLY are not enough We need to enrich our user stories\u0026rsquo; acceptance criteria with workflow diagrams, storyboards, wireframes, mockups, or other techniques in order to visualize the product functionality. To be able to create a great user experience (UX), product owners need to support user stories with visual design techniques as part of the user story acceptance criteria. Those techniques play an important role in clarifying the written criteria, especially when the end product is a portal or website, where they ensure consistency of the design across the pages. 
","date":"05-06-2018","objectID":"/posts/productivity/how-to-write-awesome-user-stories/:3:4","tags":null,"title":"How To Write Awesome User Stories","uri":"/posts/productivity/how-to-write-awesome-user-stories/#acceptance-criteria-only-are-not-enough"},{"categories":["Development"],"collections":null,"content":"WordPress is a powerful content management system (CMS) that enables developers to create dynamic websites. To become proficient in WordPress development, one must follow a structured roadmap covering both front-end and back-end technologies. Below is a detailed roadmap to mastering WordPress development. flowchart TB Start((Start)) --\u003e HTMLCSS(HTML / CSS) HTMLCSS --\u003e FE(Front End) FE --- FELEVEL{Level} FE --\u003e BackEnd subgraph FE[Front End] direction TB FELEVEL --\u003e Javascript Javascript --\u003e jQuery Javascript --\u003e AJAX FELEVEL --\u003e React React --\u003e REST(REST API) end subgraph BackEnd[Back End] PHP --\u003e WPCore(WP Core) WPCore --\u003e MySQL end BackEnd --\u003e Finish((Finish)) ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:0:0","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#"},{"categories":["Development"],"collections":null,"content":"1. Getting Started Before diving into WordPress, it is essential to have a strong foundation in basic web technologies: HTML \u0026amp; CSS: These are fundamental for structuring and styling web pages. ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:1:0","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#1-getting-started"},{"categories":["Development"],"collections":null,"content":"2. Front-End Development Front-end development focuses on designing and structuring the user interface. Key technologies include: ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:2:0","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#2-front-end-development"},{"categories":["Development"],"collections":null,"content":"Core Technologies JavaScript: The backbone of interactive web development. jQuery: A popular JavaScript library that simplifies DOM manipulation. AJAX (Asynchronous JavaScript and XML): Used for updating web pages dynamically without refreshing. ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:2:1","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#core-technologies"},{"categories":["Development"],"collections":null,"content":"Advanced Front-End Development React: A powerful JavaScript library for building dynamic user interfaces. 
REST API: Enables interaction with WordPress back-end data. ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:2:2","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#advanced-front-end-development"},{"categories":["Development"],"collections":null,"content":"3. Back-End Development Back-end development involves working with the WordPress core, databases, and server-side technologies. ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:3:0","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#3-back-end-development"},{"categories":["Development"],"collections":null,"content":"Core Technologies PHP: The primary scripting language for WordPress development. WordPress Core: Understanding the core structure of WordPress and its functionality. MySQL: The database management system used by WordPress for data storage and retrieval. ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:3:1","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#core-technologies-1"},{"categories":["Development"],"collections":null,"content":"4. Becoming a Full-Stack WordPress Developer Once proficient in both front-end and back-end technologies, you can start working on complete WordPress projects. This includes: Custom theme and plugin development. Performance optimization. Security best practices. Advanced integrations with third-party APIs. ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:4:0","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#4-becoming-a-full-stack-wordpress-developer"},{"categories":["Development"],"collections":null,"content":"Conclusion Following this roadmap will help developers build expertise in WordPress development, from basic front-end skills to advanced back-end functionalities. 
Mastering these technologies will enable the creation of high-performing and scalable WordPress websites. ","date":"22-04-2018","objectID":"/posts/development/wordpress-roadmap/:5:0","tags":["wordpress"],"title":"Wordpress Roadmap","uri":"/posts/development/wordpress-roadmap/#conclusion"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"When working with Docker containers and the Apache HTTP Server (httpd) in SSL mode, it is important to monitor and analyze the server logs for debugging and security purposes. However, in some cases, the SSL log might not show up as expected. This article aims to provide a solution to this problem by adding a custom log variable inside the virtual host configuration. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:0:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Problem The SSL log is not being generated or displayed when running the Apache HTTP Server within a Docker container. This can make it difficult to track and analyze server activity and troubleshoot potential issues. 
","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:1:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#problem"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Solution To enable the SSL log to appear, follow these steps: ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:2:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#solution"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 1: Access the Docker Container Start by accessing the Docker container running the Apache HTTP Server. You can do this using the following command: docker exec -it \u0026lt;container_name\u0026gt; bash","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:3:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#step-1-access-the-docker-container"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 2: Locate the Apache Virtual Host Configuration Next, locate and open the Apache virtual host configuration file specific to your SSL-enabled website. Typically, this file is located at /etc/httpd/conf.d/ssl.conf or /etc/apache2/sites-available/default-ssl.conf, depending on the distribution and version of Apache you are using. 
Use the appropriate command for your setup: vi /etc/httpd/conf.d/ssl.conf or vi /etc/apache2/sites-available/default-ssl.conf","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:4:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#step-2-locate-the-apache-virtual-host-configuration"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 3: Add Custom Log Variable Inside the virtual host configuration file, add the following line: CustomLog /proc/self/fd/1 common This line specifies the location of the log file, /proc/self/fd/1, which is a special file that represents the standard output (stdout) of the Docker container. The common format is a commonly used log format, but you can modify it according to your specific requirements. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:5:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#step-3-add-custom-log-variable"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 4: Save and Exit Save the changes to the virtual host configuration file and exit the text editor. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:6:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#step-4-save-and-exit"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 5: Restart Apache HTTP Server Restart the Apache HTTP Server within the Docker container to apply the changes. 
Depending on your distribution and setup, you can use one of the following commands: service httpd restart or service apache2 restart Alternatively, you can restart the entire Docker container if that\u0026rsquo;s more convenient: docker restart \u0026lt;container_name\u0026gt;","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:7:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#step-5-restart-apache-http-server"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Step 6: Verify Log Output Once the Apache HTTP Server has restarted, the SSL log should start appearing on the console or terminal where you are running the Docker container. You should now see the server logs, including SSL-related information. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:8:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#step-6-verify-log-output"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Conclusion By adding the custom log variable /proc/self/fd/1 to the virtual host configuration file, you can redirect the SSL log output to the standard output of the Docker container. This allows you to monitor and analyze the SSL log easily, helping with troubleshooting and ensuring the security and performance of your SSL-enabled website running within a Docker container. 
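Putting the steps together, the logging portion of the SSL virtual host ends up looking roughly like this (a sketch, not a complete vhost; the ErrorLog line to /proc/self/fd/2 is an optional companion that routes error output to the container\u0026rsquo;s stderr in the same way):

```apache
<VirtualHost *:443>
    # ... SSL directives omitted ...
    # Access log to the container's stdout, error log to its stderr
    CustomLog /proc/self/fd/1 common
    ErrorLog /proc/self/fd/2
</VirtualHost>
```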
","date":"27-03-2018","objectID":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/:9:0","tags":["docker"],"title":"Troubleshooting Docker HTTPD SSL Log Not Appearing","uri":"/posts/devops/troubleshooting-docker-httpd-ssl-log-not-appearing/#conclusion"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"When working with Docker Compose, you may encounter an \u0026ldquo;Invalid IP Host\u0026rdquo; error when configuring the HTTPD (Apache) container. This error often occurs when the container\u0026rsquo;s network configuration conflicts with the port mappings specified in the docker-compose.yml file. In this article, we will explore a common cause of this error and provide a solution by adjusting the network configuration using the net parameter. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/:0:0","tags":["docker"],"title":"Troubleshooting Invalid IP Host Error in Docker Compose with HTTPD Container","uri":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/#"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Understanding the \u0026ldquo;Invalid IP Host\u0026rdquo; Error The \u0026ldquo;Invalid IP Host\u0026rdquo; error message typically appears when running an HTTPD container in Docker Compose and indicates that the IP address provided for the container\u0026rsquo;s host is invalid or conflicting. 
","date":"27-03-2018","objectID":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/:1:0","tags":["docker"],"title":"Troubleshooting Invalid IP Host Error in Docker Compose with HTTPD Container","uri":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/#understanding-the-invalid-ip-host-error"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Cause of the Error The error often occurs when the container\u0026rsquo;s network configuration conflicts with the port mappings defined in the docker-compose.yml file. Specifically, when using the net: \u0026quot;host\u0026quot; parameter, it overwrites the port mappings, rendering them useless. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/:2:0","tags":["docker"],"title":"Troubleshooting Invalid IP Host Error in Docker Compose with HTTPD Container","uri":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/#cause-of-the-error"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Identifying the Issue If you encounter the \u0026ldquo;Invalid IP Host\u0026rdquo; error, double-check your docker-compose.yml file for any conflicts between the net parameter and the port mappings. Ensure that you haven\u0026rsquo;t inadvertently used both simultaneously. 
","date":"27-03-2018","objectID":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/:3:0","tags":["docker"],"title":"Troubleshooting Invalid IP Host Error in Docker Compose with HTTPD Container","uri":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/#identifying-the-issue"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Solution: Adjusting Network Configuration To resolve the \u0026ldquo;Invalid IP Host\u0026rdquo; error, you can modify the network configuration of the HTTPD container in the docker-compose.yml file. Follow these steps: Remove the net: \u0026quot;host\u0026quot; parameter from the HTTPD service section to prevent conflicts between the network configuration and port mappings. Instead, rely on Docker Compose\u0026rsquo;s default network configuration, which enables communication between containers and the host using the container\u0026rsquo;s IP address and the exposed ports. 
Here\u0026rsquo;s an example docker-compose.yml file after the modification: version: \u0026#34;3\u0026#34; services: httpd: image: httpd:latest ports: - \u0026#34;80:80\u0026#34; ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/:4:0","tags":["docker"],"title":"Troubleshooting Invalid IP Host Error in Docker Compose with HTTPD Container","uri":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/#solution-adjusting-network-configuration"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Applying the Modified Configuration After updating the docker-compose.yml file, save the changes and rebuild the Docker Compose environment using the following command: docker-compose up --build Docker Compose will now create the HTTPD container with the adjusted network configuration, allowing it to communicate with the host through the specified port mappings. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/:5:0","tags":["docker"],"title":"Troubleshooting Invalid IP Host Error in Docker Compose with HTTPD Container","uri":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/#applying-the-modified-configuration"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Conclusion The \u0026ldquo;Invalid IP Host\u0026rdquo; error in Docker Compose with the HTTPD container often occurs when conflicting network configurations are present, particularly when using the net: \u0026quot;host\u0026quot; parameter alongside port mappings. By adjusting the network configuration and removing the net parameter, you can resolve this error and successfully deploy the HTTPD container with the desired communication setup. 
Remember to refer to the official Docker and Docker Compose documentation for further details and to explore additional configuration options. ","date":"27-03-2018","objectID":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/:6:0","tags":["docker"],"title":"Troubleshooting Invalid IP Host Error in Docker Compose with HTTPD Container","uri":"/posts/devops/troubleshooting--invalid-ip-host--error-in-docker-compose-with-httpd-container/#conclusion"},{"categories":["Development"],"collections":null,"content":"When you\u0026rsquo;re working on a local development environment and need to accept self-signed HTTPS certificates through Apache2, you can use the following configuration in your virtual host file. This setup allows you to bypass certificate verification for local testing purposes. Assuming you already have a virtual host set up in your Apache2 configuration, here are the steps to configure it to accept self-signed HTTPS certificates: Enable the proxy and proxy_http modules: Before configuring the SSL proxy, make sure the required modules are enabled. You can do this using the a2enmod command. sudo a2enmod proxy sudo a2enmod proxy_http Edit your Virtual Host Configuration: Open your virtual host configuration file. This is typically located in the /etc/apache2/sites-available/ directory and has a .conf extension, e.g., your-site.conf. sudo nano /etc/apache2/sites-available/your-site.conf Inside your virtual host configuration, add or modify the following lines to enable the SSL proxy settings: \u0026lt;VirtualHost *:80\u0026gt; # ... Other Virtual Host Settings ... 
# Enable SSL Proxy SSLProxyEngine on SSLProxyVerify none SSLProxyCheckPeerCN off SSLProxyCheckPeerName off SSLProxyCheckPeerExpire off ProxyPass / https://localhost:443/ ProxyPassReverse / https://localhost:443/ \u0026lt;/VirtualHost\u0026gt; Ensure that you replace your-site.conf with the actual filename of your virtual host configuration and adjust the ProxyPass and ProxyPassReverse directives to match your specific setup. Save and Exit: Save the changes to your configuration file and exit the text editor. Enable the Virtual Host: Enable your virtual host configuration if it\u0026rsquo;s not already enabled: sudo a2ensite your-site.conf Restart Apache: Restart Apache to apply the changes: sudo systemctl restart apache2 Now, your Apache2 server should accept self-signed HTTPS certificates for the specified virtual host. Be cautious when using these settings in a production environment, as they disable important security checks. These settings are primarily for local development and debugging purposes. ","date":"25-03-2018","objectID":"/posts/development/accept-https-self-certificate-on-local-via-apache2/:0:0","tags":null,"title":"Accept HTTPS Self Certificate on Local via Apache2","uri":"/posts/development/accept-https-self-certificate-on-local-via-apache2/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Syncing your iPhone photos with your Mac is a convenient way to transfer and organize your images. However, you may encounter issues where photos get stuck during the synchronization process. This guide will provide step-by-step instructions on how to effectively resolve this problem. We\u0026rsquo;ll explore using iFunBox to access the RAW folder on your iPhone, removing specific folders, and then resyncing with iTunes to ensure your photos sync smoothly again. 
Note: It\u0026rsquo;s important to have a recent backup of your iPhone data before proceeding, as the following steps involve manipulating system files and require basic technical knowledge. ","date":"23-03-2018","objectID":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/:0:0","tags":["mac","photos"],"title":"Fixing iPhone Photos Stuck When Syncing with Mac","uri":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/#"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 1: Install iFunBox and Connect Your iPhone Begin by downloading and installing iFunBox, a file and app management tool for iOS devices, on your Mac. Launch iFunBox and connect your iPhone to your computer using a USB cable. ","date":"23-03-2018","objectID":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/:1:0","tags":["mac","photos"],"title":"Fixing iPhone Photos Stuck When Syncing with Mac","uri":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/#step-1-install-ifunbox-and-connect-your-iphone"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 2: Access the RAW Folder Once your iPhone is connected and recognized by iFunBox, locate the \u0026ldquo;Raw File System\u0026rdquo; option in the iFunBox sidebar and click on it. Within the \u0026ldquo;Raw File System,\u0026rdquo; navigate to the \u0026ldquo;DCIM\u0026rdquo; folder. This folder contains your iPhone\u0026rsquo;s photos and videos. 
","date":"23-03-2018","objectID":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/:2:0","tags":["mac","photos"],"title":"Fixing iPhone Photos Stuck When Syncing with Mac","uri":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/#step-2-access-the-raw-folder"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 3: Remove Problematic Folders Within the \u0026ldquo;DCIM\u0026rdquo; folder, locate and delete the following two folders: \u0026ldquo;PhotoData\u0026rdquo; and \u0026ldquo;Photos.\u0026rdquo; Deleting these folders will not permanently erase your photos; they will be recreated when you resync with iTunes. ","date":"23-03-2018","objectID":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/:3:0","tags":["mac","photos"],"title":"Fixing iPhone Photos Stuck When Syncing with Mac","uri":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/#step-3-remove-problematic-folders"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Step 4: Resync with iTunes Disconnect your iPhone from the computer and launch iTunes. Reconnect your iPhone to the computer using the USB cable. In iTunes, select your iPhone from the device list. Go to the \u0026ldquo;Photos\u0026rdquo; tab and ensure that the \u0026ldquo;Sync Photos\u0026rdquo; option is checked. Choose the desired photo albums or folders you want to sync with your iPhone. Click the \u0026ldquo;Apply\u0026rdquo; or \u0026ldquo;Sync\u0026rdquo; button to initiate the synchronization process. 
","date":"23-03-2018","objectID":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/:4:0","tags":["mac","photos"],"title":"Fixing iPhone Photos Stuck When Syncing with Mac","uri":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/#step-4-resync-with-itunes"},{"categories":["Software","Troubleshooting"],"collections":null,"content":"Conclusion By following these steps, you can resolve the issue of iPhone photos getting stuck during the sync process with your Mac. Remember to exercise caution when manipulating system files and always maintain a backup of your data. ","date":"23-03-2018","objectID":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/:5:0","tags":["mac","photos"],"title":"Fixing iPhone Photos Stuck When Syncing with Mac","uri":"/posts/software/fixing-iphone-photos-stuck-when-syncing-with-mac/#conclusion"},{"categories":["Development"],"collections":null,"content":"You want to run Docker commands to request a wildcard SSL certificate from Let\u0026rsquo;s Encrypt using Certbot and to create or renew a certificate for a specific domain. Here\u0026rsquo;s a breakdown of the commands and what they do: ","date":"14-03-2018","objectID":"/posts/development/docker-run-it-rm-name-certbot-v-root-docker-composes-server/:0:0","tags":null,"title":"Docker Run -It --Rm --Name Certbot -v --Root-Docker-Composes-Server","uri":"/posts/development/docker-run-it-rm-name-certbot-v-root-docker-composes-server/#"},{"categories":["Development"],"collections":null,"content":"Requesting a Wildcard Certificate with Certbot docker run -it --rm --name certbot -v \u0026#34;/root/docker-composes/server/apache/letsencrypt:/etc/letsencrypt\u0026#34; -v \u0026#34;/var/lib/letsencrypt:/var/lib/letsencrypt\u0026#34; -v \u0026#34;/var/www/html:/var/www/html\u0026#34; certbot/certbot --manual --preferred-challenges dns certonly --server https://acme-v02.api.letsencrypt.org/directory docker run: This command starts a new Docker container. 
-it: This flag enables an interactive terminal session. --rm: This flag removes the container when it exits. --name certbot: Specifies the name of the container as \u0026ldquo;certbot.\u0026rdquo; -v: This flag mounts volumes from the host to the container. You are mounting the following directories: \u0026quot;/root/docker-composes/server/apache/letsencrypt\u0026quot; to \u0026quot;/etc/letsencrypt\u0026quot; inside the container. \u0026quot;/var/lib/letsencrypt\u0026quot; to \u0026quot;/var/lib/letsencrypt\u0026quot; inside the container. \u0026quot;/var/www/html\u0026quot; to \u0026quot;/var/www/html\u0026quot; inside the container. certbot/certbot: Specifies the Docker image to use, which is the Certbot image. --manual: Indicates that you want to perform manual DNS challenges to prove domain ownership. --preferred-challenges dns: Specifies that you prefer DNS challenges for domain verification. certonly: Instructs Certbot to obtain certificates without installing them. --server https://acme-v02.api.letsencrypt.org/directory: Sets the Let\u0026rsquo;s Encrypt ACME server\u0026rsquo;s URL for certificate issuance. ","date":"14-03-2018","objectID":"/posts/development/docker-run-it-rm-name-certbot-v-root-docker-composes-server/:0:1","tags":null,"title":"Docker Run -It --Rm --Name Certbot -v --Root-Docker-Composes-Server","uri":"/posts/development/docker-run-it-rm-name-certbot-v-root-docker-composes-server/#requesting-a-wildcard-certificate-with-certbot"},{"categories":["Development"],"collections":null,"content":"Creating/Renewing a Certonly Certificate certbot certonly --webroot -w /var/www/html/ -d example.com certbot certonly: This command tells Certbot to obtain a certificate without installing it. --webroot: Specifies the webroot plugin for authentication and authorization. -w /var/www/html/: Specifies the webroot path where Certbot should place challenge files. 
-d example.com: Specifies the domain (example.com in this case) for which you want to obtain or renew the certificate. These commands allow you to request a wildcard SSL certificate and create or renew a standard SSL certificate using Certbot in a Docker container. Make sure to replace \u0026ldquo;example.com\u0026rdquo; with your actual domain name when running the second command. Remember to properly configure your web server to use the obtained SSL certificates for secure communication. ","date":"14-03-2018","objectID":"/posts/development/docker-run-it-rm-name-certbot-v-root-docker-composes-server/:0:2","tags":null,"title":"Docker Run -It --Rm --Name Certbot -v --Root-Docker-Composes-Server","uri":"/posts/development/docker-run-it-rm-name-certbot-v-root-docker-composes-server/#creatingrenewing-a-certonly-certificate"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"When encountering Cloudflare Error 522 or ERR_NETWORK while using the internet, one possible cause could be the configuration of an Asus router with the Firewall DDOS feature enabled. This article aims to shed light on this issue and provide a solution for resolving it. However, it\u0026rsquo;s important to note that disabling the DDOS feature on the router may increase the risk of potential DDOS attacks from external sources. 
","date":"21-02-2018","objectID":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/:0:0","tags":["cloudflare"],"title":"Troubleshooting Cloudflare Error 522 or ERR_NETWORK Router Asus and Firewall DDOS Configuration","uri":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/#"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Understanding Cloudflare Error 522 or ERR_NETWORK Cloudflare Error 522 or ERR_NETWORK is an HTTP error code that occurs when a TCP connection between the origin server and Cloudflare is unable to establish within a specific timeframe. This error often manifests as a prolonged loading time or complete unavailability of a website. ","date":"21-02-2018","objectID":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/:1:0","tags":["cloudflare"],"title":"Troubleshooting Cloudflare Error 522 or ERR_NETWORK Router Asus and Firewall DDOS Configuration","uri":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/#understanding-cloudflare-error-522-or-err_network"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Identifying the Asus Router Configuration Issue If you encounter Error 522 or ERR_NETWORK and have an Asus router with the Firewall DDOS feature enabled, it\u0026rsquo;s worth investigating this configuration as a potential cause. The Firewall DDOS feature is designed to mitigate distributed denial-of-service (DDOS) attacks by blocking suspicious or excessive incoming network traffic. 
","date":"21-02-2018","objectID":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/:2:0","tags":["cloudflare"],"title":"Troubleshooting Cloudflare Error 522 or ERR_NETWORK Router Asus and Firewall DDOS Configuration","uri":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/#identifying-the-asus-router-configuration-issue"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Solution: Disabling DDOS on the Asus Router To resolve the Cloudflare Error 522 or ERR_NETWORK associated with the Asus router\u0026rsquo;s Firewall DDOS configuration, follow these steps to disable the feature: Open a web browser and enter the default gateway IP address of your Asus router in the address bar. Typically, this is \u0026ldquo;192.168.1.1\u0026rdquo; or \u0026ldquo;192.168.0.1\u0026rdquo;. Press Enter. Enter your router\u0026rsquo;s username and password to access the admin interface. If you haven\u0026rsquo;t changed these credentials, consult your router\u0026rsquo;s documentation or try the default values (e.g., \u0026ldquo;admin\u0026rdquo; for both username and password). Once logged in, navigate to the firewall settings section of the router configuration. Locate the DDOS or Anti-DDOS settings and disable the feature. The specific steps may vary depending on the router\u0026rsquo;s firmware version and interface design. Save the changes and exit the router configuration interface. Restart your router to apply the new settings. 
","date":"21-02-2018","objectID":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/:3:0","tags":["cloudflare"],"title":"Troubleshooting Cloudflare Error 522 or ERR_NETWORK Router Asus and Firewall DDOS Configuration","uri":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/#solution-disabling-ddos-on-the-asus-router"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Important Considerations Disabling the DDOS feature on your Asus router can potentially increase the risk of a DDOS attack from external sources. It is crucial to assess the security implications and consider alternative security measures to safeguard your network and devices. Some alternatives to mitigate DDOS attacks include employing a dedicated DDOS protection service or consulting with a network security professional. ","date":"21-02-2018","objectID":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/:4:0","tags":["cloudflare"],"title":"Troubleshooting Cloudflare Error 522 or ERR_NETWORK Router Asus and Firewall DDOS Configuration","uri":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/#important-considerations"},{"categories":["DevOps","Troubleshooting"],"collections":null,"content":"Conclusion Cloudflare Error 522 or ERR_NETWORK can be caused by the Firewall DDOS configuration on an Asus router. By following the steps mentioned above, you can disable the DDOS feature to resolve the issue. However, it\u0026rsquo;s essential to understand the potential security risks associated with disabling this feature and take appropriate measures to ensure the protection of your network. 
","date":"21-02-2018","objectID":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/:5:0","tags":["cloudflare"],"title":"Troubleshooting Cloudflare Error 522 or ERR_NETWORK Router Asus and Firewall DDOS Configuration","uri":"/posts/devops/troubleshooting-cloudflare-error-522-or-err-network-router-asus-and-firewall-ddos-configuration/#conclusion"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Tmux is a popular terminal multiplexer that allows users to manage multiple terminal sessions within a single window. However, some users may encounter an issue where Tmux generates random characters when clicking or scrolling within the terminal. This can be frustrating and disrupt the user\u0026rsquo;s workflow. In this article, we will explore a solution to fix this problem. ","date":"18-02-2018","objectID":"/posts/development/fixing-tmux-generating-random-characters-on-click-or-scroll/:0:0","tags":["mac","tmux"],"title":"Fixing Tmux Generating Random Characters on Click or Scroll","uri":"/posts/development/fixing-tmux-generating-random-characters-on-click-or-scroll/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Solution To resolve the issue of Tmux generating random characters on click or scroll, you can follow the steps outlined below: Resetting the mouse mode: Tmux uses different modes for handling mouse input. By resetting the mouse mode, you can often fix the issue. There are a few methods you can try to reset the mouse mode: a. Using the Tmux command prompt: Start by entering the Tmux command prompt. You can do this by pressing the prefix key (usually Ctrl+b or Ctrl+a) followed by a colon (:). Once in the command prompt, type the following command and press Enter: set -g mouse offb. 
Using the shell command: Alternatively, you can run the following shell command directly from your terminal to turn off the mouse mode: tmux set -g mouse off Reloading the Tmux configuration: If resetting the mouse mode doesn\u0026rsquo;t resolve the issue, you can try reloading the Tmux configuration. This can be done by executing the following command in your terminal: tmux source-file ~/.tmux.conf This command reloads the Tmux configuration from the specified file location. Make sure to replace ~/.tmux.conf with the actual path to your Tmux configuration file if it is located elsewhere. Resetting the terminal: If the above steps don\u0026rsquo;t solve the problem, you can try resetting the terminal itself. There are a couple of methods you can use to reset the terminal: a. Using the \u0026ldquo;reset\u0026rdquo; command: Type the following command in your terminal and press Enter: reset This command resets the terminal, clearing any unusual behavior or settings that may be causing the issue. b. Using Vim: Another option is to open Vim and exit immediately. This can be done by running the following command: vim +q Opening Vim and exiting will often reset the terminal to its default state, potentially resolving the problem. ","date":"18-02-2018","objectID":"/posts/development/fixing-tmux-generating-random-characters-on-click-or-scroll/:1:0","tags":["mac","tmux"],"title":"Fixing Tmux Generating Random Characters on Click or Scroll","uri":"/posts/development/fixing-tmux-generating-random-characters-on-click-or-scroll/#solution"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Conclusion If you encounter the issue of Tmux generating random characters on click or scroll, it can be frustrating. However, by following the steps outlined in this article, you should be able to resolve the problem. 
Start by resetting the mouse mode in Tmux, reload the Tmux configuration if necessary, and consider resetting the terminal using the \u0026ldquo;reset\u0026rdquo; command or by briefly opening Vim. These steps should help restore normal functionality to your Tmux session. ","date":"18-02-2018","objectID":"/posts/development/fixing-tmux-generating-random-characters-on-click-or-scroll/:2:0","tags":["mac","tmux"],"title":"Fixing Tmux Generating Random Characters on Click or Scroll","uri":"/posts/development/fixing-tmux-generating-random-characters-on-click-or-scroll/#conclusion"},{"categories":["Development"],"collections":null,"content":"When working with images, especially those captured by digital cameras or smartphones, you might encounter a lot of metadata embedded within the image files. This metadata often includes valuable information such as camera settings, date and time of capture, and even geolocation. This information is stored in a format known as Exchangeable Image File Format (EXIF). While EXIF data can be useful, there are situations where you might want to remove it for privacy, security, or optimization reasons. ImageMagick, a popular software suite for image manipulation, provides an easy way to strip EXIF data from images using the command line. Here are two methods to achieve this: ","date":"22-11-2017","objectID":"/posts/development/stripping-exif-data-from-images-using-imagemagick/:0:0","tags":null,"title":"Stripping EXIF Data from Images using ImageMagick","uri":"/posts/development/stripping-exif-data-from-images-using-imagemagick/#"},{"categories":["Development"],"collections":null,"content":"Method 1: Using the convert Command The convert command in ImageMagick can be used to convert and manipulate images. By combining it with the -strip option, you can remove all profile and comment information, including EXIF data, from an image. 
The following command demonstrates how to use convert to achieve this: convert orig.jpg -strip result.jpg In this command, replace orig.jpg with the path to your input image and result.jpg with the desired name for the output image. ","date":"22-11-2017","objectID":"/posts/development/stripping-exif-data-from-images-using-imagemagick/:0:1","tags":null,"title":"Stripping EXIF Data from Images using ImageMagick","uri":"/posts/development/stripping-exif-data-from-images-using-imagemagick/#method-1-using-the-convert-command"},{"categories":["Development"],"collections":null,"content":"Method 2: Using the mogrify Command The mogrify command is another powerful tool in the ImageMagick toolkit. It allows you to perform in-place image modifications, including stripping EXIF data. To use mogrify to remove EXIF data from an image, run the following command: mogrify -strip orig.jpg Replace orig.jpg with the actual name of the image you want to process. ","date":"22-11-2017","objectID":"/posts/development/stripping-exif-data-from-images-using-imagemagick/:0:2","tags":null,"title":"Stripping EXIF Data from Images using ImageMagick","uri":"/posts/development/stripping-exif-data-from-images-using-imagemagick/#method-2-using-the-mogrify-command"},{"categories":["Development"],"collections":null,"content":"Viewing EXIF Information using the identify Command On the flip side, if you\u0026rsquo;re interested in exploring the EXIF data stored in an image, the identify command in ImageMagick can provide you with detailed information. By using the -verbose option, you can get a comprehensive overview of the image\u0026rsquo;s properties, including EXIF data. To view EXIF information using the identify command, use the following syntax: identify -verbose /usr/share/backgrounds/WildWheat_by_Brian_Burt.jpg Remember to replace /usr/share/backgrounds/WildWheat_by_Brian_Burt.jpg with the actual path to the image you want to inspect. 
By employing these ImageMagick commands, you can seamlessly manipulate and explore EXIF data in your images, tailoring them to your specific needs. Whether you\u0026rsquo;re stripping EXIF data for privacy or extracting it for analysis, ImageMagick provides the tools to get the job done efficiently. ","date":"22-11-2017","objectID":"/posts/development/stripping-exif-data-from-images-using-imagemagick/:1:0","tags":null,"title":"Stripping EXIF Data from Images using ImageMagick","uri":"/posts/development/stripping-exif-data-from-images-using-imagemagick/#viewing-exif-information-using-the-identify-command"},{"categories":["Development"],"collections":null,"content":"We have provided a set of command-line instructions for managing backup files. These commands are designed to remove older backup files based on certain criteria. Let me break down each of the commands for you: Removing Old Backup Files Using ls, tail, and xargs: ls -t1 | tail -n +11 | xargs -d \u0026#39;\\n\u0026#39; rm This command is used to list files in a directory by their modification time in descending order (-t1). Then, it uses tail to exclude the top 10 files (keeping the 11th and onwards), and finally, it uses xargs to remove these older files. Removing Old Backup Files Using rm and ls: cd /DataVolume/shares/john/Backups/DockerBackup/example-server rm -f `ls -t ??-??-??.tgz.gpg | sed 1,5d` cd ~ This set of commands does the following: It navigates to the specified directory. It lists files matching the pattern ??-??-??.tgz.gpg in descending order of modification time using ls -t. It uses sed 1,5d to remove the first 5 lines (files) from the list. Finally, it removes the remaining files using rm. 
Using the \u0026lsquo;drive\u0026rsquo; CLI to Push and Delete Backup Files: drive push --no-prompt -destination backups/example-server/Dockers/ *.gpg drive delete --quiet `drive list --files backups/example-server/Dockers/ | grep gpg | cut -c 2- | sed 1,5d` These commands involve the \u0026ldquo;drive\u0026rdquo; CLI for interacting with Google Drive. Here\u0026rsquo;s what they do: The first command pushes all *.gpg files from the current directory to the specified destination in Google Drive, without prompting for confirmation. The second command deletes the oldest 5 *.gpg files in the \u0026ldquo;backups/example-server/Dockers/\u0026rdquo; directory on Google Drive. It lists the files, filters for those with \u0026ldquo;gpg\u0026rdquo; in their names, removes the first 5 files using sed, and then deletes them using drive delete. Please note that these commands should be used with caution, especially the ones involving file deletion, as there is no confirmation prompt, and data loss can occur if used incorrectly. Always make sure you have a backup or a way to recover files before running such commands in a production environment. ","date":"12-10-2017","objectID":"/posts/development/cli-remove-old-backup-file-max-5-older/:0:0","tags":null,"title":"CLI Remove old backup file max 5 older","uri":"/posts/development/cli-remove-old-backup-file-max-5-older/#"},{"categories":["Development"],"collections":null,"content":"In this article, we\u0026rsquo;ll explore how to update the metadata of video files (in this case, MP4 files) based on their modified date using the powerful Exiftool command-line utility. Metadata can include various information about the video, such as creation date, author, and more. Sometimes, it\u0026rsquo;s necessary to adjust or correct this metadata. Here, we\u0026rsquo;ll focus on modifying the date-related metadata using Exiftool. 
","date":"09-10-2017","objectID":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/:0:0","tags":null,"title":"Updating Video Metadata Based on Modified Date Using Exiftool","uri":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have Exiftool installed on your system. You can download it from the official website: Exiftool ","date":"09-10-2017","objectID":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/:1:0","tags":null,"title":"Updating Video Metadata Based on Modified Date Using Exiftool","uri":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Updating Video Metadata We\u0026rsquo;ll use a simple Bash loop to process all the MP4 files in a directory and update their metadata based on the modified date of the file. Here\u0026rsquo;s the script: for i in *.mp4; do echo \u0026#34;Processing $i\u0026#34; exiftool \u0026#34;-*date\u0026lt;filename\u0026#34; -wm w \u0026#34;$i\u0026#34; done Let\u0026rsquo;s break down what this script does: The for loop iterates through all files in the current directory with the .mp4 extension. Inside the loop, exiftool is used to update the metadata of each video file. The argument \u0026quot;-*date\u0026lt;filename\u0026quot; copies the timestamp embedded in the file name (for example, video-2010-11-13-16-20-31.mp4) into the various date-related metadata tags of the video file. -wm w sets the write mode to \u0026ldquo;write existing tags only\u0026rdquo;, so exiftool updates tags that are already present instead of creating new ones. The echo statement provides a simple progress update, displaying the name of the file being processed. 
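To see why \u0026quot;-*date\u0026lt;filename\u0026quot; works here, it helps to know that exiftool parses the timestamp embedded in names like video-2010-11-13-16-20-31.mp4. That parsing step can be sketched in plain shell; the function name and the video-*.mp4 naming scheme are assumptions taken from the example file:

```shell
# Sketch: extract the timestamp from names like video-YYYY-MM-DD-hh-mm-ss.mp4
# and print it in the "YYYY:MM:DD hh:mm:ss" form used by EXIF date tags.
filename_to_exif_date() {
  base="${1##*/}"          # strip any directory prefix
  stamp="${base#video-}"   # drop the "video-" prefix (assumed naming scheme)
  stamp="${stamp%.mp4}"    # drop the extension
  # The first three dash-separated fields are the date, the last three the time.
  IFS=- read -r y m d hh mm ss <<EOF
$stamp
EOF
  printf '%s:%s:%s %s:%s:%s\n' "$y" "$m" "$d" "$hh" "$mm" "$ss"
}
```

This mirrors what exiftool does internally when it copies a date out of the FileName pseudo-tag.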
","date":"09-10-2017","objectID":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/:2:0","tags":null,"title":"Updating Video Metadata Based on Modified Date Using Exiftool","uri":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/#updating-video-metadata"},{"categories":["Development"],"collections":null,"content":"Viewing Modified Metadata If you want to check the modified metadata of a specific video file (e.g., video-2010-11-13-16-20-31.mp4), you can use the following command: exiftool -a -s -G1 -time:all video-2010-11-13-16-20-31.mp4 This command uses Exiftool to display all date-related metadata for the specified video file. Here\u0026rsquo;s what each option does: -a: Allows duplicate tags to be extracted, so no matching tag is hidden. -s: Shows short tag names instead of their longer descriptions. -G1: Prefixes each tag with its family 1 group name (such as QuickTime), making the output easier to scan. -time:all: Restricts the output to all date- and time-related tags. ","date":"09-10-2017","objectID":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/:3:0","tags":null,"title":"Updating Video Metadata Based on Modified Date Using Exiftool","uri":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/#viewing-modified-metadata"},{"categories":["Development"],"collections":null,"content":"Conclusion Exiftool is a versatile tool for managing metadata in various file types, including video files like MP4. By using the provided script, you can easily update the metadata of multiple video files based on their modified date. Additionally, the second command allows you to inspect the modified metadata for a specific video file, ensuring your changes were applied correctly. 
","date":"09-10-2017","objectID":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/:4:0","tags":null,"title":"Updating Video Metadata Based on Modified Date Using Exiftool","uri":"/posts/development/updating-video-metadata-based-on-modified-date-using-exiftool/#conclusion"},{"categories":["Development"],"collections":null,"content":"In Docker, it\u0026rsquo;s essential to keep your images up to date, especially when using various containers and applications. One way to do this efficiently is by pulling the latest versions of all Docker images with a single command. In this article, we\u0026rsquo;ll walk you through the process using a command-line interface (CLI). ","date":"17-09-2017","objectID":"/posts/development/pulling-the-latest-versions-of-all-docker-images/:0:0","tags":null,"title":"Pulling the Latest Versions of All Docker Images","uri":"/posts/development/pulling-the-latest-versions-of-all-docker-images/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before proceeding, make sure you have Docker installed and configured on your system. You can download and install Docker from the official website: Docker Download. ","date":"17-09-2017","objectID":"/posts/development/pulling-the-latest-versions-of-all-docker-images/:1:0","tags":null,"title":"Pulling the Latest Versions of All Docker Images","uri":"/posts/development/pulling-the-latest-versions-of-all-docker-images/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Pulling the Latest Versions of All Docker Images To pull the latest versions of all Docker images, you can use a combination of Docker commands, docker images, grep, and xargs. 
Here\u0026rsquo;s a breakdown of the command: docker images --format \u0026#34;{{.Repository}}:{{.Tag}}\u0026#34; | grep :latest | xargs -L1 docker pull Let\u0026rsquo;s break down this command step by step: docker images --format \u0026quot;{{.Repository}}:{{.Tag}}\u0026quot;: This command lists all Docker images on your system in the format repository:tag. For example, an image might be listed as nginx:latest. grep :latest: This part of the command filters the image list to only include images with the :latest tag. This is important because we want to pull the latest versions of these images. xargs -L1 docker pull: Finally, xargs reads each line of the filtered image list and runs docker pull for each image. This effectively pulls the latest version of each image. ","date":"17-09-2017","objectID":"/posts/development/pulling-the-latest-versions-of-all-docker-images/:2:0","tags":null,"title":"Pulling the Latest Versions of All Docker Images","uri":"/posts/development/pulling-the-latest-versions-of-all-docker-images/#pulling-the-latest-versions-of-all-docker-images"},{"categories":["Development"],"collections":null,"content":"Running the Command Open a terminal window and run the command: docker images --format \u0026#34;{{.Repository}}:{{.Tag}}\u0026#34; | grep :latest | xargs -L1 docker pull Docker will start pulling the latest versions of all images with the :latest tag. This may take some time, depending on the number and size of the images. ","date":"17-09-2017","objectID":"/posts/development/pulling-the-latest-versions-of-all-docker-images/:3:0","tags":null,"title":"Pulling the Latest Versions of All Docker Images","uri":"/posts/development/pulling-the-latest-versions-of-all-docker-images/#running-the-command"},{"categories":["Development"],"collections":null,"content":"Conclusion Regularly updating your Docker images to their latest versions is essential for security, bug fixes, and new features. 
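Before adopting this as a routine, the pipeline can be previewed as a dry run by swapping docker pull for echo docker pull. A minimal sketch, demonstrated on canned docker images output so no Docker daemon is needed; the anchored grep ':latest$' is a slightly stricter variant of the article's filter, and preview_pulls is an illustrative name:

```shell
# Dry-run sketch of the pipeline's filtering and dispatch stages.
preview_pulls() {
  # Keep only repository:tag pairs tagged :latest, then echo the
  # pull command for each instead of actually running it.
  grep ':latest$' | xargs -L1 echo docker pull
}
```

Piping real docker images --format "{{.Repository}}:{{.Tag}}" output through preview_pulls shows exactly which images would be refreshed.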
The command provided allows you to automate this process, making it more convenient to keep your Docker environment up to date. Remember to perform this operation periodically to ensure that your Docker containers are running the latest and most secure versions of the images they depend on. ","date":"17-09-2017","objectID":"/posts/development/pulling-the-latest-versions-of-all-docker-images/:4:0","tags":null,"title":"Pulling the Latest Versions of All Docker Images","uri":"/posts/development/pulling-the-latest-versions-of-all-docker-images/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re a macOS user who relies on Homebrew and Brew Cask for managing software packages and applications, keeping everything up to date is crucial. This guide outlines how to update and upgrade Brew Cask apps through the command line interface (CLI). Follow the steps below to ensure that your software is always current. ","date":"05-08-2017","objectID":"/posts/development/updating-and-upgrading-brew-cask-apps-via-command-line-interface/:0:0","tags":null,"title":"Updating and Upgrading Brew Cask Apps via Command Line Interface","uri":"/posts/development/updating-and-upgrading-brew-cask-apps-via-command-line-interface/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you start updating and upgrading your Brew Cask apps, make sure you have already updated Homebrew itself. To do this, open your terminal and run: brew update This command fetches the latest package information from the Homebrew repository. 
","date":"05-08-2017","objectID":"/posts/development/updating-and-upgrading-brew-cask-apps-via-command-line-interface/:1:0","tags":null,"title":"Updating and Upgrading Brew Cask Apps via Command Line Interface","uri":"/posts/development/updating-and-upgrading-brew-cask-apps-via-command-line-interface/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Updating Brew Cask Apps To update your Brew Cask apps, you need to perform two steps: Upgrade Brew: Upgrade your installed formulae by running: brew upgrade This command upgrades every outdated formula installed through Homebrew (Homebrew itself was already refreshed by the earlier brew update). Update Brew Cask Apps: After upgrading Homebrew, you can proceed to update Brew Cask apps. First, list the outdated Cask apps with: brew cask outdated --greedy --verbose | grep -v \u0026#39;(latest)\u0026#39; | awk \u0026#39;{print $1}\u0026#39; This command lists the outdated apps, excluding casks whose version is reported as (latest), since those auto-updating apps cannot be compared against a newer release. Finally, update these outdated apps by using the xargs command in combination with brew cask reinstall: brew cask outdated --greedy --verbose | grep -v \u0026#39;(latest)\u0026#39; | awk \u0026#39;{print $1}\u0026#39; | xargs brew cask reinstall This command will reinstall the outdated apps, effectively updating them to their latest versions. By following these steps, you\u0026rsquo;ll ensure that both Homebrew and your Brew Cask apps are regularly updated, keeping your macOS software ecosystem current and secure. Remember to periodically run these commands to stay up to date with the latest software releases and security patches for the applications installed via Brew Cask. 
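The filtering stage of that pipeline can be checked in isolation on sample output, without touching brew at all. A sketch on canned brew cask outdated --greedy --verbose style lines; the helper name outdated_names is an assumption:

```shell
# Sketch of the filter stage: drop auto-updating "(latest)" casks,
# then keep only the cask name (the first whitespace-separated field).
outdated_names() {
  grep -v '(latest)' | awk '{print $1}'
}
```

Feeding real brew cask outdated output through outdated_names yields the list that xargs hands to brew cask reinstall.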
","date":"05-08-2017","objectID":"/posts/development/updating-and-upgrading-brew-cask-apps-via-command-line-interface/:2:0","tags":null,"title":"Updating and Upgrading Brew Cask Apps via Command Line Interface","uri":"/posts/development/updating-and-upgrading-brew-cask-apps-via-command-line-interface/#updating-brew-cask-apps"},{"categories":["Development"],"collections":null,"content":"When deploying applications to Google Cloud App Engine, you might want to disable the automatic traffic promotion feature, which promotes the latest deployed version to receive 100% of the traffic by default. This can be useful when you want to manually control when a new version of your application should start receiving traffic. To disable auto traffic promotion, you can use the following command in your terminal: $ gcloud config set app/promote_by_default false This command configures your Google Cloud CLI (gcloud) to not automatically promote the latest deployed version. Here\u0026rsquo;s what each part of the command does: gcloud config: This is the command to interact with the configuration settings of your Google Cloud SDK. set app/promote_by_default false: This specific configuration setting tells App Engine not to automatically promote new versions to receive traffic. By setting app/promote_by_default to false, you ensure that newly deployed versions will not automatically receive traffic. Instead, you\u0026rsquo;ll need to manually control when and how much traffic is routed to each version of your application. This can be particularly useful for scenarios where you want to perform extensive testing on a new version before making it the default version or gradually rolling out updates to a subset of your users. Remember that with this setting in place, you will need to manually promote versions to start receiving traffic. 
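For example, once automatic promotion is off, traffic can be shifted manually with gcloud app services set-traffic. A hedged sketch only; the service name default and the version labels v1/v2 are placeholders for your own deployment:

```shell
# Hypothetical example: route 20% of traffic to version v1 and 80% to v2
# of the "default" service. Service and version names are placeholders.
gcloud app services set-traffic default --splits v1=.2,v2=.8
```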
You can do this using the Google Cloud Console or the gcloud command-line tool, specifying the percentage of traffic you want to direct to a specific version. In summary, the gcloud config set app/promote_by_default false command is a handy way to disable automatic traffic promotion for your Google Cloud App Engine deployments, giving you more control over when and how you route traffic to different versions of your application. ","date":"09-06-2017","objectID":"/posts/development/disabling-auto-traffic-promotion-in-google-cloud-app-engine/:0:0","tags":null,"title":"Disabling Auto Traffic Promotion in Google Cloud App Engine","uri":"/posts/development/disabling-auto-traffic-promotion-in-google-cloud-app-engine/#"},{"categories":["Development"],"collections":null,"content":"The updatedb.mlocate service is responsible for updating the mlocate database, which can sometimes consume a significant amount of disk I/O. Here, we\u0026rsquo;ll explain how to disable and enable this service as well as provide a tip for customizing which directories are indexed. ","date":"13-03-2017","objectID":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/:0:0","tags":null,"title":"How to Disable/Enable updatedb.mlocate to Reduce Disk IO","uri":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/#"},{"categories":["Development"],"collections":null,"content":"Disable updatedb.mlocate To disable the updatedb.mlocate service, follow these steps: Kill the Running Process: sudo killall updatedb.mlocate This command will stop any currently running updatedb.mlocate processes. Prevent Automatic Execution: Disable the automatic execution of the updatedb.mlocate service by removing its execute permission: sudo chmod -x /etc/cron.daily/mlocate This step ensures that the updatedb.mlocate script won\u0026rsquo;t run automatically. 
Delete the Existing Database: You can also delete the existing mlocate database to free up disk space: sudo rm /var/lib/mlocate/mlocate.db Deleting the database is optional but can help save disk space. ","date":"13-03-2017","objectID":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/:1:0","tags":null,"title":"How to Disable/Enable updatedb.mlocate to Reduce Disk IO","uri":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/#disable-updatedbmlocate"},{"categories":["Development"],"collections":null,"content":"Enable updatedb.mlocate If you wish to enable updatedb.mlocate again, follow these steps: Grant Execute Permission: Grant execute permission to the mlocate script in the daily cron directory: sudo chmod +x /etc/cron.daily/mlocate This step allows the script to run automatically. ","date":"13-03-2017","objectID":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/:2:0","tags":null,"title":"How to Disable/Enable updatedb.mlocate to Reduce Disk IO","uri":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/#enable-updatedbmlocate"},{"categories":["Development"],"collections":null,"content":"Customize Indexed Directories Additionally, if you want to customize which directories are indexed by updatedb.mlocate, you can edit the configuration file /etc/updatedb.conf: Open the configuration file in a text editor (e.g., nano or vi): sudo nano /etc/updatedb.conf Locate the PRUNEPATHS variable. It contains a list of paths that should be excluded from indexing. Add the directories you want to exclude to the PRUNEPATHS variable. For example: PRUNEPATHS=\u0026#34;/tmp /var/spool /media /home/myuser/private\u0026#34; Replace the paths in the example with the directories you want to exclude. Save the changes and exit the text editor. 
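After editing, it can be handy to verify that a path really landed in PRUNEPATHS before relying on it being skipped. A small sketch assuming the PRUNEPATHS=\u0026#34;path1 path2 ...\u0026#34; format shown above; is_pruned is an illustrative name, and the conf path is passed in explicitly:

```shell
# Sketch: check whether a directory is listed in the PRUNEPATHS line of
# an updatedb.conf-style file (space-separated, double-quoted value).
is_pruned() {
  conf="$1"; dir="$2"
  grep '^PRUNEPATHS=' "$conf" | tr -d '"' | cut -d= -f2- \
    | tr ' ' '\n' | grep -qx "$dir"
}
```

For instance, is_pruned /etc/updatedb.conf /media exits 0 when /media is excluded from indexing.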
These steps allow you to fine-tune which directories are indexed by updatedb.mlocate, helping you further control disk I/O and the scope of your mlocate database. Remember to use these commands with caution, as they can impact the functionality of the mlocate service on your system. ","date":"13-03-2017","objectID":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/:3:0","tags":null,"title":"How to Disable/Enable updatedb.mlocate to Reduce Disk IO","uri":"/posts/development/how-to-disableenable-updatedbmlocate-to-reduce-disk-io/#customize-indexed-directories"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk you through the process of installing PHP on a Windows machine using Chocolatey (choco) package manager. We will also show you how to enable the necessary modules in the php.ini configuration file to run popular PHP applications like the Laravel framework. ","date":"12-03-2017","objectID":"/posts/development/how-to-install-php-on-windows-and-enable-modules/:0:0","tags":null,"title":"How to Install PHP on Windows and Enable Modules","uri":"/posts/development/how-to-install-php-on-windows-and-enable-modules/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following prerequisites in place: Windows Operating System: This guide is specifically for Windows. Chocolatey Installed: If you don\u0026rsquo;t have Chocolatey installed, you can install it by following the instructions on the Chocolatey website. 
","date":"12-03-2017","objectID":"/posts/development/how-to-install-php-on-windows-and-enable-modules/:1:0","tags":null,"title":"How to Install PHP on Windows and Enable Modules","uri":"/posts/development/how-to-install-php-on-windows-and-enable-modules/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Install PHP with Chocolatey Open a command prompt or PowerShell window with administrator privileges and run the following command to install PHP: choco install php Chocolatey will download and install PHP along with its dependencies. ","date":"12-03-2017","objectID":"/posts/development/how-to-install-php-on-windows-and-enable-modules/:2:0","tags":null,"title":"How to Install PHP on Windows and Enable Modules","uri":"/posts/development/how-to-install-php-on-windows-and-enable-modules/#step-1-install-php-with-chocolatey"},{"categories":["Development"],"collections":null,"content":"Step 2: Locate the php.ini Configuration File The php.ini file is the configuration file for PHP. You will need to locate and edit this file to enable the required modules for your application. By default, Chocolatey installs PHP in the C:\\ProgramData\\chocolatey\\lib\\php\\tools directory. The php.ini file is usually found in this path: C:\\ProgramData\\chocolatey\\lib\\php\\tools\\php.iniYou can use a text editor like Notepad or Visual Studio Code to edit this file. ","date":"12-03-2017","objectID":"/posts/development/how-to-install-php-on-windows-and-enable-modules/:3:0","tags":null,"title":"How to Install PHP on Windows and Enable Modules","uri":"/posts/development/how-to-install-php-on-windows-and-enable-modules/#step-2-locate-the-phpini-configuration-file"},{"categories":["Development"],"collections":null,"content":"Step 3: Enable PHP Modules To run a framework like Laravel or any other PHP application, you may need to enable specific PHP modules. 
Open the php.ini file and search for the following lines: ;extension=gd ;extension=mysqli These lines contain a list of PHP extensions that are commented out by default (the ; at the beginning of the line indicates a comment). To enable an extension, remove the semicolon (;) at the beginning of the line. For example, to enable the GD and MySQLi extensions, your php.ini file should look like this: extension=gd extension=mysqli Save the php.ini file after making these changes. ","date":"12-03-2017","objectID":"/posts/development/how-to-install-php-on-windows-and-enable-modules/:4:0","tags":null,"title":"How to Install PHP on Windows and Enable Modules","uri":"/posts/development/how-to-install-php-on-windows-and-enable-modules/#step-3-enable-php-modules"},{"categories":["Development"],"collections":null,"content":"Step 4: Restart the Web Server If you are using a web server like Apache or Nginx, you will need to restart the web server to apply the changes. ","date":"12-03-2017","objectID":"/posts/development/how-to-install-php-on-windows-and-enable-modules/:5:0","tags":null,"title":"How to Install PHP on Windows and Enable Modules","uri":"/posts/development/how-to-install-php-on-windows-and-enable-modules/#step-4-restart-the-web-server"},{"categories":["Development"],"collections":null,"content":"Step 5: Verify PHP Installation To verify that PHP is installed and the modules are enabled, create a PHP file with the following content: \u0026lt;?php phpinfo(); ?\u0026gt; Save this file with a .php extension (e.g., phpinfo.php) in your web server\u0026rsquo;s document root directory. Access this file through your web browser (e.g., http://localhost/phpinfo.php), and you should see a page displaying detailed information about your PHP installation, including the enabled modules. That\u0026rsquo;s it! You have successfully installed PHP on Windows using Chocolatey and enabled the necessary modules to run PHP applications like Laravel. 
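The uncommenting in Step 3 can also be scripted, which is convenient when provisioning several machines. A sketch assuming GNU sed (for in-place -i editing) and the ;extension=name line format shown above; enable_ext is an illustrative helper name:

```shell
# Sketch: uncomment a ";extension=NAME" line in php.ini non-interactively.
enable_ext() {
  ini="$1"; ext="$2"
  # Strip the leading ";" only from the exact ";extension=NAME" line.
  sed -i "s/^;extension=${ext}\$/extension=${ext}/" "$ini"
}
```

For example, enable_ext "C:\ProgramData\chocolatey\lib\php\tools\php.ini" mysqli would activate the MySQLi extension (quote the path as your shell requires).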
You can now proceed to set up and configure your PHP application as needed. ","date":"12-03-2017","objectID":"/posts/development/how-to-install-php-on-windows-and-enable-modules/:6:0","tags":null,"title":"How to Install PHP on Windows and Enable Modules","uri":"/posts/development/how-to-install-php-on-windows-and-enable-modules/#step-5-verify-php-installation"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"If you\u0026rsquo;re encountering an issue where Maven cannot detect the Google Cloud SDK on Eclipse, there are a few solutions you can try to resolve the problem. In this blog post, we will explore three different solutions that you can use to fix this issue. ","date":"25-02-2017","objectID":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/:0:0","tags":["maven","java","gcp"],"title":"Maven Could Not Detect Google Cloud SDK on Eclipse","uri":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Solution 1: Use Automator Script The first solution involves using an Automator script to set the GOOGLE_CLOUD_SDK_HOME environment variable before launching Eclipse. Follow the steps below: Open a text editor and create a new file. Add the following line to the file: GOOGLE_CLOUD_SDK_HOME=/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/ open Eclipse.app Save the file with a \u0026ldquo;.sh\u0026rdquo; extension, for example, \u0026ldquo;eclipse_startup.sh\u0026rdquo;. Open Terminal and navigate to the location where you saved the script. Make the script executable by running the following command: chmod +x eclipse_startup.sh Finally, run the script by executing the following command: ./eclipse_startup.sh This will set the GOOGLE_CLOUD_SDK_HOME environment variable and launch Eclipse with the correct configuration. 
","date":"25-02-2017","objectID":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/:1:0","tags":["maven","java","gcp"],"title":"Maven Could Not Detect Google Cloud SDK on Eclipse","uri":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/#solution-1-use-automator-script"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Solution 2: Run from Shell The second solution is a simpler approach that involves directly running Eclipse from the shell. Follow the steps below: Open Terminal. Execute the following command: open Eclipse.app This will launch Eclipse with the default environment variables set in your shell. ","date":"25-02-2017","objectID":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/:2:0","tags":["maven","java","gcp"],"title":"Maven Could Not Detect Google Cloud SDK on Eclipse","uri":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/#solution-2-run-from-shell"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Solution 3: Add Google Cloud SDK Path to GUI Environment Variable The third solution involves adding the Google Cloud SDK path to the GUI environment variable using a launch agent. Follow the steps below: Open Terminal. 
Execute the following command to create a new launch agent file: vim ~/Library/LaunchAgents/my.startup.plist In the text editor that opens, paste the following XML code: \u0026lt;?xml version=\u0026#34;1.0\u0026#34; encoding=\u0026#34;UTF-8\u0026#34;?\u0026gt; \u0026lt;!DOCTYPE plist PUBLIC \u0026#34;-//Apple//DTD PLIST 1.0//EN\u0026#34; \u0026#34;http://www.apple.com/DTDs/PropertyList-1.0.dtd\u0026#34;\u0026gt; \u0026lt;plist version=\u0026#34;1.0\u0026#34;\u0026gt; \u0026lt;dict\u0026gt; \u0026lt;key\u0026gt;Label\u0026lt;/key\u0026gt; \u0026lt;string\u0026gt;my.startup\u0026lt;/string\u0026gt; \u0026lt;key\u0026gt;ProgramArguments\u0026lt;/key\u0026gt; \u0026lt;array\u0026gt; \u0026lt;string\u0026gt;sh\u0026lt;/string\u0026gt; \u0026lt;string\u0026gt;-c\u0026lt;/string\u0026gt; \u0026lt;string\u0026gt;launchctl setenv GOOGLE_CLOUD_SDK_HOME /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/\u0026lt;/string\u0026gt; \u0026lt;/array\u0026gt; \u0026lt;key\u0026gt;RunAtLoad\u0026lt;/key\u0026gt; \u0026lt;true/\u0026gt; \u0026lt;/dict\u0026gt; \u0026lt;/plist\u0026gt; Save the file and exit the text editor. To load the launch agent, execute the following command: launchctl load ~/Library/LaunchAgents/my.startup.plistThis will set the GOOGLE_CLOUD_SDK_HOME environment variable when your GUI session starts. With these solutions, you should be able to resolve the issue of Maven not detecting the Google Cloud SDK on Eclipse. Choose the solution that best suits your needs and configuration. 
","date":"25-02-2017","objectID":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/:3:0","tags":["maven","java","gcp"],"title":"Maven Could Not Detect Google Cloud SDK on Eclipse","uri":"/posts/development/maven-could-not-detect-google-cloud-sdk-on-eclipse/#solution-3-add-google-cloud-sdk-path-to-gui-environment-variable"},{"categories":["Development"],"collections":null,"content":"In previous releases of macOS (Mavericks, Mountain Lion, Lion, \u0026hellip;), configuring environment variables required editing the /etc/launchd.conf file. However, starting from macOS Yosemite, this method is no longer effective. To successfully configure environment variables on Yosemite and later versions, follow these steps: Create a Launch Agent Property List (plist) File: Create a plist file named my.startup.plist in the ~/Library/LaunchAgents/ directory. This directory is specific to user-based launch agent configurations. Define the Plist Structure: The content of the my.startup.plist file should adhere to the XML-based property list format. This format is used to specify various attributes that dictate the behavior of launch agents. Configure Plist Keys and Values: Use the \u0026lt;key\u0026gt; and \u0026lt;string\u0026gt; elements to define keys and string values within the plist. Utilize the Label key to assign a unique identifier to the launch agent, such as my.startup. Under the ProgramArguments key, provide an array of strings representing the shell command to be executed. The first string, sh, indicates that the subsequent command should be executed by the shell. The second string, -c, signals that a command will follow. The third string contains the actual command responsible for setting the environment variable using the launchctl setenv command. Set Environment Variables: Use the launchctl setenv command to set environment variables. 
This command requires two arguments: the name of the variable ($VARIABLE_NAME) and its corresponding value ($VARIABLE_VALUE). The shell will execute this command when the Launch Agent is loaded. Enable Execution at Login: Set the RunAtLoad key to \u0026lt;true/\u0026gt; within the plist. This ensures that the Launch Agent is executed when the user logs in. Please keep in mind that environment variable management methods on macOS have evolved over time. While the provided plist configuration is suitable for macOS Yosemite, newer versions may favor different approaches. Always exercise caution when modifying system configurations to prevent unintended consequences. ","date":"25-02-2017","objectID":"/posts/development/setting-up-environment-variables-on-macos-yosemite-using-launch-agent/:0:0","tags":null,"title":"Setting Up Environment Variables on macOS Yosemite Using Launch Agent","uri":"/posts/development/setting-up-environment-variables-on-macos-yosemite-using-launch-agent/#"},{"categories":["Development"],"collections":null,"content":"When working with Docker, it\u0026rsquo;s essential to clean up unused containers, images, and other resources to free up disk space and keep your system tidy. Here are three options for cleaning up Docker resources: ","date":"22-02-2017","objectID":"/posts/development/docker-command-remove-clean-up/:0:0","tags":null,"title":"Docker Command Remove / Clean Up","uri":"/posts/development/docker-command-remove-clean-up/#"},{"categories":["Development"],"collections":null,"content":"Option 1: Using docker-clean You can use a third-party tool called docker-clean to help you clean up Docker resources more efficiently. This tool provides a simple command to remove stopped containers, dangling volumes, and unused images. 
To use docker-clean, follow these steps: Install docker-clean if you haven\u0026rsquo;t already: docker pull zzrot/docker-clean Run the docker-clean container to clean up Docker resources: docker run --rm -v /var/run/docker.sock:/var/run/docker.sock zzrot/docker-clean This command will remove stopped containers, volumes without containers, and images with no containers. ","date":"22-02-2017","objectID":"/posts/development/docker-command-remove-clean-up/:1:0","tags":null,"title":"Docker Command Remove / Clean Up","uri":"/posts/development/docker-command-remove-clean-up/#option-1-using-docker-clean"},{"categories":["Development"],"collections":null,"content":"Option 2: Manual Cleanup If you prefer not to use third-party tools, you can manually clean up Docker resources using a series of Docker commands: ","date":"22-02-2017","objectID":"/posts/development/docker-command-remove-clean-up/:2:0","tags":null,"title":"Docker Command Remove / Clean Up","uri":"/posts/development/docker-command-remove-clean-up/#option-2-manual-cleanup"},{"categories":["Development"],"collections":null,"content":"Remove Containers with Exited Status To remove containers with an exited status, you can use the following commands: docker rm $(docker ps -q -f status=exited) docker rm $(docker ps -q -f status=created) These commands will remove containers that are in either the \u0026ldquo;exited\u0026rdquo; or \u0026ldquo;created\u0026rdquo; status. ","date":"22-02-2017","objectID":"/posts/development/docker-command-remove-clean-up/:2:1","tags":null,"title":"Docker Command Remove / Clean Up","uri":"/posts/development/docker-command-remove-clean-up/#remove-containers-with-exited-status"},{"categories":["Development"],"collections":null,"content":"Remove Unused Images To remove unused (dangling) images, you can use the following command: docker rmi $(docker images --filter \u0026#34;dangling=true\u0026#34; -q --no-trunc) This command will delete images that are not associated with any containers. 
","date":"22-02-2017","objectID":"/posts/development/docker-command-remove-clean-up/:2:2","tags":null,"title":"Docker Command Remove / Clean Up","uri":"/posts/development/docker-command-remove-clean-up/#remove-unused-images"},{"categories":["Development"],"collections":null,"content":"Option 3: Using docker system prune (Advanced) The docker system prune command is a built-in Docker command for cleaning up various types of unused data, including stopped containers, dangling images, and networks not used by any container. To use docker system prune, simply run the following command: docker system prune This command will interactively prompt you to confirm the cleanup before proceeding. Note: Be cautious when using the docker system prune command, as it will remove more data than just stopped containers and unused images. It will also remove other unused data like networks and build cache. Choose the option that best suits your needs for cleaning up Docker resources based on your preference and requirements. ","date":"22-02-2017","objectID":"/posts/development/docker-command-remove-clean-up/:3:0","tags":null,"title":"Docker Command Remove / Clean Up","uri":"/posts/development/docker-command-remove-clean-up/#option-3-using-docker-system-prune-advanced"},{"categories":["Development"],"collections":null,"content":"When you\u0026rsquo;re dealing with a setup where a client communicates with an SSL load balancer over HTTPS, and the load balancer talks to a backend server over HTTP, you might encounter issues with Laravel generating URLs with an http:// schema. 
To address this issue, you can implement the following workaround: ","date":"20-02-2017","objectID":"/posts/development/solving-laravel-https-to-http-proxy-issue/:0:0","tags":null,"title":"Solving Laravel HTTPS to HTTP Proxy Issue","uri":"/posts/development/solving-laravel-https-to-http-proxy-issue/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Modify routes.php Open your Laravel project\u0026rsquo;s routes.php file and add the following code snippet at the top of the file: $proxy_url = getenv(\u0026#39;PROXY_URL\u0026#39;); $proxy_schema = getenv(\u0026#39;PROXY_SCHEMA\u0026#39;); if (!empty($proxy_url)) { URL::forceRootUrl($proxy_url); } if (!empty($proxy_schema)) { URL::forceScheme($proxy_schema); } This code sets the root URL and schema for Laravel\u0026rsquo;s URL generation based on environment variables. ","date":"20-02-2017","objectID":"/posts/development/solving-laravel-https-to-http-proxy-issue/:1:0","tags":null,"title":"Solving Laravel HTTPS to HTTP Proxy Issue","uri":"/posts/development/solving-laravel-https-to-http-proxy-issue/#step-1-modify-routesphp"},{"categories":["Development"],"collections":null,"content":"Step 2: Update the .env File Next, you\u0026rsquo;ll need to update your Laravel project\u0026rsquo;s .env file to include the PROXY_URL and, if necessary, the PROXY_SCHEMA. Add these lines to the .env file: PROXY_URL=http://igateway.somedomain.com PROXY_SCHEMA=https Replace http://igateway.somedomain.com with the actual URL of your backend server and adjust the schema (https or http) as needed. 
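The two variables can also be added from the command line; a sketch that appends them to .env only when they are not already present (the PROXY_URL value is the article's placeholder):

```shell
# Append the proxy settings to Laravel's .env, idempotently.
# http://igateway.somedomain.com is a placeholder; use your gateway URL.
envfile=.env
touch "$envfile"
grep -q '^PROXY_URL=' "$envfile" || cat >> "$envfile" <<'EOF'
PROXY_URL=http://igateway.somedomain.com
PROXY_SCHEMA=https
EOF
```

Re-running the snippet leaves the file unchanged, so it is safe to include in a deployment script.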
","date":"20-02-2017","objectID":"/posts/development/solving-laravel-https-to-http-proxy-issue/:2:0","tags":null,"title":"Solving Laravel HTTPS to HTTP Proxy Issue","uri":"/posts/development/solving-laravel-https-to-http-proxy-issue/#step-2-update-the-env-file"},{"categories":["Development"],"collections":null,"content":"Explanation Here\u0026rsquo;s how this workaround works: In the routes.php file, you check for the presence of PROXY_URL and PROXY_SCHEMA environment variables. If PROXY_URL is set, you force Laravel to use this URL as the root URL for generating URLs. This ensures that URLs generated by Laravel use the correct base URL. If PROXY_SCHEMA is set, you force Laravel to use this schema (either http or https) for generating URLs. This ensures that URLs are generated with the appropriate schema. By following these steps, you can configure Laravel to generate URLs correctly, even in a setup where there\u0026rsquo;s a proxy between the client and the backend server with different schemas (HTTPS to HTTP). This workaround ensures that your application generates URLs with the schema and root URL you specify in the .env file, making it compatible with your proxy setup. ","date":"20-02-2017","objectID":"/posts/development/solving-laravel-https-to-http-proxy-issue/:3:0","tags":null,"title":"Solving Laravel HTTPS to HTTP Proxy Issue","uri":"/posts/development/solving-laravel-https-to-http-proxy-issue/#explanation"},{"categories":["Development"],"collections":null,"content":"When working with Docker, you often use various command-line arguments to customize the behavior of containers when they are launched. Three commonly used arguments are -i, -t, and --attach. These arguments are often used together, and they serve different purposes in controlling how your container interacts with the terminal and user input. 
","date":"20-02-2017","objectID":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/:0:0","tags":null,"title":"Understanding Docker Run Arguments: -i, -t, and --attach","uri":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/#"},{"categories":["Development"],"collections":null,"content":"-i - Interactive Mode The -i flag stands for \u0026ldquo;interactive.\u0026rdquo; When you include this flag in your docker run command, it tells Docker to keep STDIN (standard input) open, allowing you to interact with the container\u0026rsquo;s command or application. Here\u0026rsquo;s what it means in more detail: Interactive Mode: Docker starts the container in interactive mode, which means that it listens for input from your terminal. You can send input to the container and receive output from it in real-time. Termination Behavior: If you quit or close the terminal session where the container is running (e.g., by pressing Ctrl+C or typing exit), Docker will terminate the container. This behavior is because the container is tied to the terminal\u0026rsquo;s lifecycle. When the terminal ends, so does the container. Example: docker run -i ubuntu In this example, you start an interactive Ubuntu container, and you can run commands and interact with it in the terminal. If you exit the terminal, the container will be terminated. ","date":"20-02-2017","objectID":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/:1:0","tags":null,"title":"Understanding Docker Run Arguments: -i, -t, and --attach","uri":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/#-i---interactive-mode"},{"categories":["Development"],"collections":null,"content":"-t - Allocate a Pseudo-Terminal (TTY) The -t flag stands for \u0026ldquo;allocate a pseudo-TTY.\u0026rdquo; It is used to allocate a terminal session for the container. 
When you include this flag, Docker provides a terminal-like interface for your container\u0026rsquo;s command or application. Here\u0026rsquo;s what it means: Pseudo-Terminal (TTY): The -t flag allocates a pseudo-terminal (TTY) for the container. This makes the container\u0026rsquo;s output more readable and allows you to see the formatting correctly. Control Characters: Without the -t flag, control characters like Ctrl+C might not work as expected inside the container. The -t flag ensures that these control characters are captured correctly. Example: docker run -it ubuntu In this example, you start an interactive Ubuntu container with a pseudo-TTY, making it easier to work with the container\u0026rsquo;s command line. ","date":"20-02-2017","objectID":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/:2:0","tags":null,"title":"Understanding Docker Run Arguments: -i, -t, and --attach","uri":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/#-t---allocate-a-pseudo-terminal-tty"},{"categories":["Development"],"collections":null,"content":"--attach - Attach to STDIN, STDOUT, and STDERR The --attach flag is used to attach to the standard input (STDIN), standard output (STDOUT), and standard error (STDERR) streams of a running container. When you include this flag, you can interact with the container\u0026rsquo;s input and output streams. Attach Mode: --attach attaches your terminal to the container\u0026rsquo;s input and output streams, allowing you to send input to the container and receive its output. Use Cases: This flag is handy when you want to connect to a running container and interact with its shell or running process. Example: docker attach \u0026lt;container_id\u0026gt; In this example, you use docker attach to attach to a running container by specifying its container ID. 
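The flag combinations can be summarized in a tiny helper that just prints the command line it would run — a sketch for illustration; `run_cmd` is not part of the Docker CLI:

```shell
# Build a docker run command for the two common modes discussed above.
run_cmd() {
  case $1 in
    interactive) flags="-it" ;; # -i keeps STDIN open, -t allocates a pseudo-TTY
    detached)    flags="-d"  ;; # runs in background; reconnect with docker attach
    *)           flags=""    ;;
  esac
  echo "docker run $flags $2"
}

run_cmd interactive ubuntu   # -> docker run -it ubuntu
run_cmd detached ubuntu      # -> docker run -d ubuntu
```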
To summarize, when using Docker, the -i, -t, and --attach flags are often used together to create an interactive and TTY-enabled container with the ability to attach to its terminal. This combination makes it easier to work with containers and run interactive applications within them. ","date":"20-02-2017","objectID":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/:3:0","tags":null,"title":"Understanding Docker Run Arguments: -i, -t, and --attach","uri":"/posts/development/understanding-docker-run-arguments-i-t-and-attach/#--attach---attach-to-stdin-stdout-and-stderr"},{"categories":["Development"],"collections":null,"content":"Sometimes, when you\u0026rsquo;re working in a terminal on your Mac and using the screen command to manage multiple shell sessions, you may encounter situations where you need to select and copy text from within a screen session. However, the usual text selection methods like click and drag may not work as expected within a screen session. In such cases, you can use the Fn key in combination with the Left Click to force select text within a screen session. Here\u0026rsquo;s a step-by-step guide on how to force select text from a screen session on Terminal Mac using Fn + Left Click: Open Terminal: Launch the Terminal application on your Mac if it\u0026rsquo;s not already open. You can find Terminal in the Applications folder or use Spotlight Search (press Cmd + Space and type \u0026ldquo;Terminal\u0026rdquo;). Start a screen Session: If you haven\u0026rsquo;t already, start a screen session by typing the following command and pressing Enter: screen This will open a new shell session within the screen environment. Force Select Text: Inside your screen session, move the cursor to the beginning of the text you want to select. Hold down the Fn key on your keyboard (usually located at the bottom-left corner). 
While holding the Fn key, left-click (press and release) the mouse or trackpad button at the start position of the text you want to select. Without releasing the Fn key, move the cursor to the end of the text you want to select. Left-click again (press and release) at the end position of the text. Copy the Selected Text: After successfully selecting the text, release the Fn key. To copy the selected text to the clipboard, simply press Cmd + C. Paste the Copied Text: To paste the copied text, move to the location where you want to paste it and press Cmd + V. That\u0026rsquo;s it! You\u0026rsquo;ve successfully force-selected text from a screen session on Terminal Mac using Fn + Left Click. This method should help you extract and copy text from within a screen session when the regular text selection methods don\u0026rsquo;t work as expected. ","date":"17-02-2017","objectID":"/posts/development/how-to-force-select-text-from-a-screen-session-on-terminal-mac/:0:0","tags":null,"title":"How to Force Select Text from a Screen Session on Terminal Mac","uri":"/posts/development/how-to-force-select-text-from-a-screen-session-on-terminal-mac/#"},{"categories":["Development"],"collections":null,"content":"To make the Google App Engine Local Development Server available on your network using Maven, you need to configure the appengine-maven-plugin with the appropriate host and port settings. By default, the local development server runs on localhost, which means it\u0026rsquo;s only accessible from the same machine where it\u0026rsquo;s running. To make it accessible from other devices on your network, you should set the host to 0.0.0.0 to bind it to all available network interfaces. Here are the steps to achieve this: Open your pom.xml file, which contains your Maven project configuration. Locate the appengine-maven-plugin plugin configuration in your pom.xml. 
It should look something like this: \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;com.google.cloud.tools\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;appengine-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;...\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;!-- Add the host configuration here --\u0026gt; \u0026lt;host\u0026gt;0.0.0.0\u0026lt;/host\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; Make sure you have the correct version of the appengine-maven-plugin specified. Replace ... with the appropriate version you are using. Save the pom.xml file. Now, when you start the local development server using Maven, it will bind to all network interfaces, making it accessible on your local network. To start the local development server, you can use the following Maven command: mvn appengine:run Once the server is up and running, you should be able to access it from other devices on your network by using the IP address of the machine where the server is running. You can find the IP address of your machine by running the ipconfig command on Windows or the ifconfig command on Linux/macOS. For example, if the server is running on port 8080 and your machine\u0026rsquo;s IP address is 192.168.1.100, you can access it from another device\u0026rsquo;s web browser using http://192.168.1.100:8080. Remember that exposing the development server to your local network may have security implications, so be cautious when doing so and consider any necessary firewall or security configurations. 
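Once the server binds to 0.0.0.0, the reachable address is just your LAN IP plus the port. A small sketch — `hostname -I` is a Linux-ism and the `url_for` helper is hypothetical:

```shell
# Print the URL other devices on the LAN should use for the dev server.
url_for() { echo "http://$1:8080"; }    # 8080 is the dev server's default port

ip=$(hostname -I 2>/dev/null | awk '{print $1}')
url_for "${ip:-192.168.1.100}"          # falls back to the article's example IP
```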
","date":"17-02-2017","objectID":"/posts/development/make-google-app-engine-local-development-server-available-on-network/:0:0","tags":null,"title":"Make Google App Engine Local Development Server Available on network","uri":"/posts/development/make-google-app-engine-local-development-server-available-on-network/#"},{"categories":["Development"],"collections":null,"content":"In software development, the terms \u0026ldquo;bug\u0026rdquo; and \u0026ldquo;defect\u0026rdquo; are often used interchangeably, but they have distinct meanings that can impact how issues are categorized and addressed. To clarify the difference, let\u0026rsquo;s explore each term and provide examples for better comprehension. ","date":"06-02-2017","objectID":"/posts/development/understanding-the-difference-between-bug-and-defect/:0:0","tags":null,"title":"Understanding the Difference Between Bug and Defect","uri":"/posts/development/understanding-the-difference-between-bug-and-defect/#"},{"categories":["Development"],"collections":null,"content":"Bug A bug is a problem or issue in a software application that occurs as a result of a coding error. It represents an unintended behavior that arises from mistakes made during the implementation phase of development. Bugs can manifest as crashes, data corruption, unexpected behaviors, or any issue where the software does not perform as intended due to a coding mistake. Example (Bug): Suppose a developer is tasked with creating a calculator application, and they inadvertently code the addition function to subtract numbers instead. Users expect the application to add numbers, but it subtracts them due to a coding error. This is a bug because it\u0026rsquo;s a deviation from the intended behavior caused by a coding mistake. 
","date":"06-02-2017","objectID":"/posts/development/understanding-the-difference-between-bug-and-defect/:1:0","tags":null,"title":"Understanding the Difference Between Bug and Defect","uri":"/posts/development/understanding-the-difference-between-bug-and-defect/#bug"},{"categories":["Development"],"collections":null,"content":"Defect A defect, on the other hand, is a deviation from the specified requirements or expected behavior of the software. It occurs when the software does not align with the documented requirements or the client\u0026rsquo;s expectations. Defects are not necessarily caused by coding errors; they can also result from misunderstandings, miscommunications, or incomplete or ambiguous requirements. Example (Defect): Consider a scenario where a client requests a web form that includes a single button labeled \u0026ldquo;Save \u0026amp; Close\u0026rdquo; to both save data and close the window. If the implemented form has separate \u0026ldquo;Save\u0026rdquo; and \u0026ldquo;Close\u0026rdquo; buttons, this is a defect. It\u0026rsquo;s a deviation from what the client specifically requested, even though both buttons perform their individual functions correctly. This defect arises from a misunderstanding or misinterpretation of the requirements. Example (Bug vs. Defect Summary): To summarize the distinction between bugs and defects: A bug is a problem caused by a coding error that results in unintended behavior. A defect is a problem that arises when the software doesn\u0026rsquo;t meet specified requirements or expectations, regardless of whether it\u0026rsquo;s caused by a coding error. In practice, it\u0026rsquo;s essential for development teams to differentiate between bugs and defects to prioritize and address issues effectively. Clear communication and thorough testing can help identify and resolve both bugs and defects, ensuring that the software aligns with the client\u0026rsquo;s needs and functions as intended. 
","date":"06-02-2017","objectID":"/posts/development/understanding-the-difference-between-bug-and-defect/:2:0","tags":null,"title":"Understanding the Difference Between Bug and Defect","uri":"/posts/development/understanding-the-difference-between-bug-and-defect/#defect"},{"categories":["Development"],"collections":null,"content":"If you want to remove Git commit history and start fresh with a new branch while keeping your current files, you can follow these steps: Create a New Orphan Branch: git checkout --orphan newBranch This creates a new branch called newBranch with no commit history. Add and Commit Your Current Files: git add -A # Add all files and changes git commit -m \u0026#34;Initial commit\u0026#34; This stages and commits all your current files to the new branch. Delete the Old Master Branch: git branch -D master This deletes the old master branch. Rename the Current Branch to Master: git branch -m newBranch master This renames your current branch (newBranch) to master. Force Push the New Master Branch to GitHub: git push -f origin master This force-pushes the new master branch to GitHub, replacing the old one. Optimize Repository Size: git gc --aggressive --prune=all This command optimizes the Git repository by cleaning up unnecessary files and history. Please be cautious when using git push -f as it can rewrite the history on the remote repository. Make sure you understand the implications, especially if others are collaborating on the same repository.","date":"04-02-2017","objectID":"/posts/development/removing-git-history-commit/:0:0","tags":null,"title":"Removing Git History Commit","uri":"/posts/development/removing-git-history-commit/#"},{"categories":["Development"],"collections":null,"content":"Setting up your own VPN gives you secure remote access to your home network or server from anywhere. 
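The orphan-branch sequence above can be exercised end-to-end in a throwaway repository — a sketch using the article's master/newBranch names; only the force-push and gc steps, which need a real remote, are left as comments:

```shell
set -e
# Throwaway repo to demonstrate the history-reset steps locally.
repo=$(mktemp -d); cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master        # make sure the old branch is "master"
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m "old history"
git checkout -q --orphan newBranch             # new branch with no commit history
git add -A
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m "Initial commit"
git branch -D master                           # delete the old master branch
git branch -m master                           # rename newBranch -> master
# On a real repo you would now run:
#   git push -f origin master && git gc --aggressive
```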
This guide explains how to set up an L2TP over IPSec VPN and how to port forward the necessary ports. ","date":"17-01-2017","objectID":"/posts/development/creating-an-l2tp-over-ipsec-vpn/:0:0","tags":null,"title":"Creating an L2TP over IPSec VPN","uri":"/posts/development/creating-an-l2tp-over-ipsec-vpn/#"},{"categories":["Development"],"collections":null,"content":"Creating an L2TP over IPSec VPN In this guide, we will walk you through the process of setting up your own L2TP (Layer 2 Tunneling Protocol) VPN over IPSec (Internet Protocol Security). This will allow you to establish a secure connection to your home network or server from a remote location. ","date":"17-01-2017","objectID":"/posts/development/creating-an-l2tp-over-ipsec-vpn/:0:0","tags":null,"title":"Creating an L2TP over IPSec VPN","uri":"/posts/development/creating-an-l2tp-over-ipsec-vpn/#creating-an-l2tp-over-ipsec-vpn"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following: A device running a compatible operating system (e.g., Windows, macOS, Android, or iOS). Access to your router\u0026rsquo;s configuration settings. Knowledge of your router\u0026rsquo;s internal IP address (usually something like 192.168.1.1). Administrative access to your router (username and password). ","date":"17-01-2017","objectID":"/posts/development/creating-an-l2tp-over-ipsec-vpn/:1:0","tags":null,"title":"Creating an L2TP over IPSec VPN","uri":"/posts/development/creating-an-l2tp-over-ipsec-vpn/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Enable L2TP over IPSec on your VPN Server Log in to your VPN server or router\u0026rsquo;s web interface using its internal IP address. Navigate to the VPN or Security settings, depending on your router model. Look for an option related to VPN protocols and select L2TP over IPSec. 
Save your settings and apply the changes. ","date":"17-01-2017","objectID":"/posts/development/creating-an-l2tp-over-ipsec-vpn/:2:0","tags":null,"title":"Creating an L2TP over IPSec VPN","uri":"/posts/development/creating-an-l2tp-over-ipsec-vpn/#step-1-enable-l2tp-over-ipsec-on-your-vpn-server"},{"categories":["Development"],"collections":null,"content":"Step 2: Port Forwarding To allow external connections to your VPN server, you need to set up port forwarding for the following ports: L2TP traffic: UDP 1701 Internet Key Exchange (IKE): UDP 500 IPSec Network Address Translation (NAT-T): UDP 4500 Here\u0026rsquo;s how to port forward these ports on most routers: Log in to your router\u0026rsquo;s web interface. Navigate to the Port Forwarding or NAT (Network Address Translation) section. Add a new port forwarding rule for each of the ports mentioned above. For each rule, specify the protocol as UDP. Set the internal IP address to the local IP address of your VPN server. Save and apply the changes. ","date":"17-01-2017","objectID":"/posts/development/creating-an-l2tp-over-ipsec-vpn/:3:0","tags":null,"title":"Creating an L2TP over IPSec VPN","uri":"/posts/development/creating-an-l2tp-over-ipsec-vpn/#step-2-port-forwarding"},{"categories":["Development"],"collections":null,"content":"Step 3: Configure VPN Client Now that your VPN server is set up and the necessary ports are forwarded, configure your VPN client device to connect to the server: Open the VPN settings on your client device. Add a new VPN connection. Select \u0026ldquo;L2TP over IPSec\u0026rdquo; as the VPN type. Enter the public IP address of your home network (you can find this by searching \u0026ldquo;What is my IP\u0026rdquo; on Google). Enter your username and password for the VPN server. Save the configuration and connect to the VPN. That\u0026rsquo;s it! You have successfully set up your own L2TP over IPSec VPN. 
You can now securely access your home network or server from anywhere with an internet connection. Remember to keep your router and VPN server up to date with the latest firmware and security patches for maximum security. ","date":"17-01-2017","objectID":"/posts/development/creating-an-l2tp-over-ipsec-vpn/:4:0","tags":null,"title":"Creating an L2TP over IPSec VPN","uri":"/posts/development/creating-an-l2tp-over-ipsec-vpn/#step-3-configure-vpn-client"},{"categories":["Development"],"collections":null,"content":"We have provided a set of commands for working with GPG keys, encrypting and decrypting files, and extracting tar archives. These commands are useful for various tasks related to data security and file management. Here\u0026rsquo;s a breakdown of each command with a brief explanation: scp example-backup-enc-privkey.asc root@192.168.1.6:~/ gpg --import example-backup-enc-privkey.asc gpg --edit-key john@example.com \u0026gt; trust 5 quit The scp command is used to securely copy the GPG private key file (example-backup-enc-privkey.asc) to another PC with the specified IP address and destination folder (~ denotes the user\u0026rsquo;s home directory). gpg --import is used to import the GPG private key into the GPG keyring on the destination PC. gpg --edit-key john@example.com opens an interactive prompt for editing the key with the specified email address. Inside the interactive prompt, trust sets the trust level of the key, and 5 represents \u0026ldquo;I trust ultimately.\u0026rdquo; This level indicates a high level of trust in the key. ","date":"17-01-2017","objectID":"/posts/development/gpg-key-import-to-another-pc/:0:0","tags":null,"title":"GPG Key Import to another PC","uri":"/posts/development/gpg-key-import-to-another-pc/#"},{"categories":["Development"],"collections":null,"content":"GPG Encrypt File gpg --encrypt --recipient \u0026#34;Example Backup\u0026#34; test.data This command is used to encrypt a file (test.data) using GPG. 
The --recipient flag specifies the recipient\u0026rsquo;s name or key ID. In this case, it\u0026rsquo;s \u0026ldquo;Example Backup.\u0026rdquo; ","date":"17-01-2017","objectID":"/posts/development/gpg-key-import-to-another-pc/:0:1","tags":null,"title":"GPG Key Import to another PC","uri":"/posts/development/gpg-key-import-to-another-pc/#gpg-encrypt-file"},{"categories":["Development"],"collections":null,"content":"GPG Decrypt File gpg --output test.data.output --decrypt test.data.gpg This command decrypts a GPG-encrypted file (test.data.gpg) and saves the decrypted content to a file named test.data.output. Make sure you have the necessary decryption key to perform this operation. ","date":"17-01-2017","objectID":"/posts/development/gpg-key-import-to-another-pc/:0:2","tags":null,"title":"GPG Key Import to another PC","uri":"/posts/development/gpg-key-import-to-another-pc/#gpg-decrypt-file"},{"categories":["Development"],"collections":null,"content":"GPG Remove Secret Key gpg --list-secret-keys gpg --delete-secret-keys \u0026lt;YOUR-SECRET-KEY\u0026gt; The first command, gpg --list-secret-keys, lists your secret keys. The second command, gpg --delete-secret-keys \u0026lt;YOUR-SECRET-KEY\u0026gt;, deletes the specified secret key from your keyring. Replace \u0026lt;YOUR-SECRET-KEY\u0026gt; with the appropriate key ID. ","date":"17-01-2017","objectID":"/posts/development/gpg-key-import-to-another-pc/:0:3","tags":null,"title":"GPG Key Import to another PC","uri":"/posts/development/gpg-key-import-to-another-pc/#gpg-remove-secret-key"},{"categories":["Development"],"collections":null,"content":"Extract Tar Archive (Preserving Permissions) cd /path/to/destination/folder tar xpvzf put_your_name_here.tar.gz This set of commands navigates to the destination folder (cd /path/to/destination/folder) and extracts the contents of a tar archive (put_your_name_here.tar.gz) while preserving file permissions and ownership. 
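The encrypt/decrypt pair above can be tried safely against a throwaway keyring. This sketch assumes GnuPG 2.1+ (for --quick-generate-key) and skips silently when gpg is not installed; the key details mirror the article's "Example Backup" identity:

```shell
# Encrypt/decrypt roundtrip with a disposable key in an isolated GNUPGHOME.
if command -v gpg >/dev/null 2>&1; then
  export GNUPGHOME=$(mktemp -d); chmod 700 "$GNUPGHOME"
  gpg -q --batch --passphrase '' --quick-generate-key \
      "Example Backup <john@example.com>" default default never
  echo "secret" > test.data
  gpg --batch --yes --trust-model always --encrypt \
      --recipient "Example Backup" test.data          # writes test.data.gpg
  gpg -q --batch --output test.data.output --decrypt test.data.gpg
fi
```

The empty passphrase leaves the throwaway key unprotected, which is what lets the decrypt step run unattended; never do that with a key you actually care about.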
","date":"17-01-2017","objectID":"/posts/development/gpg-key-import-to-another-pc/:0:4","tags":null,"title":"GPG Key Import to another PC","uri":"/posts/development/gpg-key-import-to-another-pc/#extract-tar-archive-preserving-permissions"},{"categories":["Development"],"collections":null,"content":"Extract Tar Archive (Specific Folder) tar -C ./ -xpvzf foo.tar home/foo/bar This command extracts the contents of the foo.tar archive into the current directory (-C ./) and specifically targets the folder home/foo/bar within the archive for extraction. Please note that these commands should be used with caution, especially when dealing with GPG keys, as mishandling them can result in data loss or security issues. Always ensure you have backups and understand the implications of the commands you\u0026rsquo;re using. ","date":"17-01-2017","objectID":"/posts/development/gpg-key-import-to-another-pc/:0:5","tags":null,"title":"GPG Key Import to another PC","uri":"/posts/development/gpg-key-import-to-another-pc/#extract-tar-archive-specific-folder"},{"categories":["Development"],"collections":null,"content":"If you want to establish an SSH connection to your \u0026ldquo;linux server\u0026rdquo; from the outside without configuring port forwarding on your router, you can use SSH remote port forwarding. 
This technique allows you to connect to an external server (let\u0026rsquo;s call it \u0026ldquo;my_other_server\u0026rdquo;) and have it forward traffic back to your \u0026ldquo;linux server.\u0026rdquo; Here\u0026rsquo;s how you can do it: SSH from linux_server to my_other_server: Open a terminal on your \u0026ldquo;linux server\u0026rdquo; and use the following command to initiate an SSH connection to \u0026ldquo;my_other_server,\u0026rdquo; specifying remote port forwarding: [user@linux_server]$ ssh -R 8022:localhost:22 my_other_server.com Explanation: This command connects to \u0026ldquo;my_other_server\u0026rdquo; and opens port 8022 on that server, which will forward traffic back to your \u0026ldquo;linux_server\u0026rdquo; on port 22. SSH from my_other_server back to linux_server: Now that you have established the remote port forwarding, you can SSH from \u0026ldquo;my_other_server\u0026rdquo; to your \u0026ldquo;linux_server\u0026rdquo; through the established tunnel. On \u0026ldquo;my_other_server,\u0026rdquo; open a terminal and use the following command: [user@my_other_server]$ ssh -p 8022 localhost Explanation: This command connects to \u0026ldquo;my_other_server\u0026rdquo; itself but uses port 8022, which is being forwarded to your \u0026ldquo;linux_server.\u0026rdquo; As a result, your SSH traffic is tunneled back to your \u0026ldquo;linux_server.\u0026rdquo; Handling Connection Stability: If you encounter problems with the initial tunnel dropping out, you can take several measures: Keepalive Settings: Adjust the SSH keepalive settings to ensure the connection stays alive. You can add the following options to your SSH command on \u0026ldquo;my_other_server\u0026rdquo;: [user@my_other_server]$ ssh -o ServerAliveInterval=60 -p 8022 localhost This setting sends a keepalive packet every 60 seconds to maintain the connection. Use autossh: autossh is a tool that helps maintain SSH tunnels. 
It automatically restarts SSH sessions and keeps tunnels alive even if they disconnect. You can install it and use it in place of regular SSH like this: [user@my_other_server]$ autossh -M 0 -o \u0026#34;ServerAliveInterval 30\u0026#34; -o \u0026#34;ServerAliveCountMax 3\u0026#34; -p 8022 localhost This command uses autossh with specified options to ensure a stable SSH tunnel. By following these steps and considering connection stability measures, you can SSH to your \u0026ldquo;linux server\u0026rdquo; through a router without the need for port forwarding on the router itself. ","date":"30-12-2016","objectID":"/posts/development/ssh-through-a-router-without-port-forwarding/:0:0","tags":null,"title":"SSH Through A Router Without Port Forwarding","uri":"/posts/development/ssh-through-a-router-without-port-forwarding/#"},{"categories":["Development"],"collections":null,"content":"If you have a Gmail account, you can configure your Mail Transfer Agent (MTA) to relay outgoing mail through Gmail. This provides you with the reliability and infrastructure of Gmail for sending emails from the command line. In this tutorial, we\u0026rsquo;ll use Postfix as our MTA, which is a secure and open-source mail transfer agent. We\u0026rsquo;ll cover instructions for various operating systems. ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:0:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#"},{"categories":["Development"],"collections":null,"content":"1. 
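The reverse-tunnel commands above (plain ssh -R with keepalives, or autossh) can be wrapped in one helper that picks autossh when it is available — a sketch; my_other_server.com is the article's placeholder host and the helper prints the command rather than running it:

```shell
# Choose autossh when installed, plain ssh otherwise; echoes the command so
# the selection logic is visible (run it for real by dropping the echo).
tunnel_cmd() {
  host=$1
  if command -v autossh >/dev/null 2>&1; then
    echo "autossh -M 0 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -R 8022:localhost:22 $host"
  else
    echo "ssh -o ServerAliveInterval=60 -R 8022:localhost:22 $host"
  fi
}

tunnel_cmd my_other_server.com
```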
Install Required Software ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:1:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#1-install-required-software"},{"categories":["Development"],"collections":null,"content":"Debian, Ubuntu: apt-get update \u0026amp;\u0026amp; apt-get install postfix mailutils ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:1:1","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#debian-ubuntu"},{"categories":["Development"],"collections":null,"content":"Fedora: dnf update \u0026amp;\u0026amp; dnf install postfix mailx ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:1:2","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#fedora"},{"categories":["Development"],"collections":null,"content":"CentOS: yum update \u0026amp;\u0026amp; yum install postfix mailx cyrus-sasl cyrus-sasl-plain ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:1:3","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#centos"},{"categories":["Development"],"collections":null,"content":"OpenSUSE: zypper update \u0026amp;\u0026amp; zypper install postfix mailx cyrus-sasl ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:1:4","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#opensuse"},{"categories":["Development"],"collections":null,"content":"Arch Linux: pacman -Sy 
postfix mailutils ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:1:5","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#arch-linux"},{"categories":["Development"],"collections":null,"content":"FreeBSD: Compile Postfix from the ports collection with SASL support: portsnap fetch extract update cd /usr/ports/mail/postfix make config In the configuration dialogs, select SASL support. Then: make install clean Install Mailx from the binary package: pkg install mailx ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:1:6","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#freebsd"},{"categories":["Development"],"collections":null,"content":"2. Configure Gmail Authentication Create or modify a password file that Postfix will use to authenticate with Gmail. Replace username with your Gmail username and password with your Gmail password. If you\u0026rsquo;re using a custom Gmail Apps domain, replace gmail.com with your domain. ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:2:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#2-configure-gmail-authentication"},{"categories":["Development"],"collections":null,"content":"Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Arch Linux: Postfix configuration files are in /etc/postfix. 
Create or edit the password file: vi /etc/postfix/sasl_passwd Add the line: [smtp.gmail.com]:587 username@gmail.com:password. Save and make the file accessible only by root: chmod 600 /etc/postfix/sasl_passwd ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:2:1","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#debian-ubuntu-fedora-centos-opensuse-arch-linux"},{"categories":["Development"],"collections":null,"content":"FreeBSD: Postfix configuration files are in /usr/local/etc/postfix. Create or edit the password file: vi /usr/local/etc/postfix/sasl_passwd Add the line: [smtp.gmail.com]:587 username@gmail.com:password. Save and make the file accessible only by root: chmod 600 /usr/local/etc/postfix/sasl_passwd ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:2:2","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#freebsd-1"},{"categories":["Development"],"collections":null,"content":"3. Configure Postfix There are six parameters to set in the Postfix configuration file main.cf: relayhost: Specifies the mail relay host and port number. smtp_use_tls: Enables (or disables) transport layer security. smtp_sasl_auth_enable: Enables (or disables) SASL authentication. smtp_sasl_security_options: Set to empty to ensure no Gmail-incompatible security options are used. smtp_sasl_password_maps: Specifies the password file to use. smtp_tls_CAfile: Specifies the list of certificate authorities to use when verifying server identity. 
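After editing main.cf with the distro-specific values shown in the following sections, the six settings can be verified as Postfix actually parses them. A minimal sketch, assuming the postfix package is installed (postconf ships with it):

```shell
# Sketch: print the six relay-related settings as Postfix parses them.
# Assumes the postfix package is installed; postconf ships with it.
PARAMS='relayhost smtp_use_tls smtp_sasl_auth_enable smtp_sasl_security_options smtp_sasl_password_maps smtp_tls_CAfile'
if command -v postconf >/dev/null 2>&1; then
  # Prints each parameter as "name = value"
  postconf $PARAMS || echo 'postconf failed; check main.cf syntax'
else
  echo 'postconf not found; is Postfix installed?'
fi
```

Any setting that prints a value different from what you entered points at a typo in main.cf.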
","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:3:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#3-configure-postfix"},{"categories":["Development"],"collections":null,"content":"Debian, Ubuntu, Arch Linux: Edit the main Postfix configuration file: vi /etc/postfix/main.cf Add or modify the following values: relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_security_options = smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crtSave and close the file. ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:3:1","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#debian-ubuntu-arch-linux"},{"categories":["Development"],"collections":null,"content":"Fedora, CentOS: Edit the main Postfix configuration file: vi /etc/postfix/main.cf Add or modify the following values: relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_security_options = smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crtSave and close the file. 
","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:3:2","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#fedora-centos"},{"categories":["Development"],"collections":null,"content":"OpenSUSE: Edit the main Postfix configuration file: vi /etc/postfix/main.cf Add or modify the following values: relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_security_options = smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_tls_CAfile = /etc/ssl/ca-bundle.pemSave and close the file. OpenSUSE also requires modifying the Postfix master process configuration file master.cf: vi /etc/postfix/master.cf Uncomment the line that reads: #tlsmgr unix - - n 1000? 1 tlsmgSo it reads: tlsmgr unix - - n 1000? 1 tlsmgSave and close the file. ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:3:3","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#opensuse-1"},{"categories":["Development"],"collections":null,"content":"FreeBSD: Edit the main Postfix configuration file: vi /usr/local/etc/postfix/main.cf Add or modify the following values: relayhost = [smtp.gmail.com]:587 smtp_use_tls = yes smtp_sasl_auth_enable = yes smtp_sasl_security_options = smtp_sasl_password_maps = hash:/usr/local/etc/postfix/sasl_passwd smtp_tls_CAfile = /etc/mail/certs/cacert.pemSave and close the file. ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:3:4","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#freebsd-2"},{"categories":["Development"],"collections":null,"content":"4. 
Process Password File Use postmap to compile and hash the contents of sasl_passwd. The results will be stored in your Postfix configuration directory in the file sasl_passwd.db. ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:4:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#4-process-password-file"},{"categories":["Development"],"collections":null,"content":"Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Arch Linux: postmap /etc/postfix/sasl_passwd ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:4:1","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#debian-ubuntu-fedora-centos-opensuse-arch-linux-1"},{"categories":["Development"],"collections":null,"content":"FreeBSD: postmap /usr/local/etc/postfix/sasl_passwd ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:4:2","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#freebsd-3"},{"categories":["Development"],"collections":null,"content":"5. Restart Postfix Restart the Postfix service to apply your changes. 
","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:5:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#5-restart-postfix"},{"categories":["Development"],"collections":null,"content":"Debian, Ubuntu, Fedora, CentOS, OpenSUSE, Arch Linux: systemctl restart postfix.service ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:5:1","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#debian-ubuntu-fedora-centos-opensuse-arch-linux-2"},{"categories":["Development"],"collections":null,"content":"FreeBSD: To start the Postfix service for this session: service postfix onestart To start Postfix automatically at system initialization, edit /etc/rc.conf: vi /etc/rc.conf Add the line: postfix_enable=YESSave and close the file, then run: service postfix start ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:5:2","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#freebsd-4"},{"categories":["Development"],"collections":null,"content":"6. Enable \u0026ldquo;Less Secure Apps\u0026rdquo; in Gmail By default, Gmail allows only the most secure sign-ins. To permit relay requests, log in to your Gmail account and turn on \u0026ldquo;Allow less secure apps.\u0026rdquo; Review the Google Support document \u0026ldquo;Allowing less secure apps to access your account\u0026rdquo; for more information. 
","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:6:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#6-enable-less-secure-apps-in-gmail"},{"categories":["Development"],"collections":null,"content":"7. Send a Test Email Test your new configuration by sending an email using the mail command. Run: mail ","date":"20-07-2016","objectID":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/:7:0","tags":null,"title":"Configure Postfix to use Gmail as a Mail Relay","uri":"/posts/development/configure-postfix-to-use-gmail-as-a-mail-relay/#7-send-a-test-email"},{"categories":["Development"],"collections":null,"content":"You want to change the home directory for users in Cygwin to their Windows profile directory. You\u0026rsquo;ve already identified the relevant configuration file, /etc/nsswitch.conf, and the db_home parameter. Here\u0026rsquo;s how you can change the home directory to the Windows profile directory using this configuration: Open /etc/nsswitch.conf: Use your preferred text editor (e.g., nano, vim, or notepad) to open the /etc/nsswitch.conf file in Cygwin. Edit the db_home parameter: Locate the line that starts with db_home:. You\u0026rsquo;ve already made the necessary change, but make sure it looks like this: db_home: /%H/This change specifies that the home directory should be located in the Windows profile directory, which is represented by %H. Save and Exit: If you\u0026rsquo;re using a text editor within Cygwin, save your changes and exit the editor. Restart Cygwin: To apply the changes, you\u0026rsquo;ll need to restart any Cygwin processes. You can do this by closing and reopening your Cygwin terminal or by restarting your computer. After making these changes and restarting Cygwin, the home directory for users should be set to their respective Windows profile directories. 
Please note that modifying system configuration files can have consequences, so make sure you have a backup or a recovery plan in case anything goes wrong. Also, ensure you have the necessary permissions to edit system files. ","date":"10-07-2016","objectID":"/posts/development/change-home-directory-to-user-profile-directory-cygwin/:0:0","tags":null,"title":"Change HOME Directory to User Profile Directory CYGWIN","uri":"/posts/development/change-home-directory-to-user-profile-directory-cygwin/#"},{"categories":["Development"],"collections":null,"content":" Locales (language settings) can be configured for Ubuntu from the command line. This guide is applicable to Ubuntu 11.10 and provides steps for displaying current settings, available locales, and adjusting locales as needed. ","date":"22-06-2016","objectID":"/posts/development/configure-locales-in-ubuntu/:0:0","tags":null,"title":"Configure Locales in Ubuntu","uri":"/posts/development/configure-locales-in-ubuntu/#"},{"categories":["Development"],"collections":null,"content":"Displaying the Current Settings You can check the current locale settings using the locale command: $ locale LANG=en_US.UTF-8 LANGUAGE= LC_CTYPE=\u0026#34;en_US.UTF-8\u0026#34; LC_NUMERIC=\u0026#34;en_US.UTF-8\u0026#34; LC_TIME=\u0026#34;en_US.UTF-8\u0026#34; LC_COLLATE=\u0026#34;en_US.UTF-8\u0026#34; LC_MONETARY=\u0026#34;en_US.UTF-8\u0026#34; LC_MESSAGES=\u0026#34;en_US.UTF-8\u0026#34; LC_PAPER=\u0026#34;en_US.UTF-8\u0026#34; LC_NAME=\u0026#34;en_US.UTF-8\u0026#34; LC_ADDRESS=\u0026#34;en_US.UTF-8\u0026#34; LC_TELEPHONE=\u0026#34;en_US.UTF-8\u0026#34; LC_MEASUREMENT=\u0026#34;en_US.UTF-8\u0026#34; LC_IDENTIFICATION=\u0026#34;en_US.UTF-8\u0026#34; LC_ALL= This output displays the current locale settings for various aspects of the system. 
","date":"22-06-2016","objectID":"/posts/development/configure-locales-in-ubuntu/:1:0","tags":null,"title":"Configure Locales in Ubuntu","uri":"/posts/development/configure-locales-in-ubuntu/#displaying-the-current-settings"},{"categories":["Development"],"collections":null,"content":"Displaying the Available Locales To see a list of available locales, use the locale -a command: $ locale -a C C.UTF-8 de_AT.utf8 de_BE.utf8 de_CH.utf8 de_DE.utf8 de_LI.utf8 de_LU.utf8 en_AG en_AG.utf8 ... POSIX If a required locale doesn\u0026rsquo;t appear in the list, you may need to install it. For example, to generate the fr_FR.UTF-8 locale, you can use the locale-gen command: # locale-gen fr_FR.UTF-8 Generating locales... fr_FR.UTF-8... done Generation complete. ","date":"22-06-2016","objectID":"/posts/development/configure-locales-in-ubuntu/:2:0","tags":null,"title":"Configure Locales in Ubuntu","uri":"/posts/development/configure-locales-in-ubuntu/#displaying-the-available-locales"},{"categories":["Development"],"collections":null,"content":"Adjusting Locales Locale settings are stored in the /etc/default/locale file. You can view the current settings using the cat command: $ cat /etc/default/locale LANG=en_US.UTF-8 To manually adjust these settings, you can edit the /etc/default/locale file. Alternatively, you can use the update-locale tool. For example, to set the system\u0026rsquo;s language to German (de_DE.UTF-8), you can use: # update-locale LANG=de_DE.UTF-8 This tool is particularly useful when you want the system to operate in one language (e.g., German) but display error and system messages in another language (e.g., English). 
To achieve this, you can modify /etc/default/locale or /etc/environment like this: /etc/default/locale: LANG=de_DE.UTF-8 LC_MESSAGES=POSIX /etc/environment: LANGUAGE=en_US.UTF-8 LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 By configuring locales in Ubuntu, you can customize the language settings to suit your preferences and requirements. ","date":"22-06-2016","objectID":"/posts/development/configure-locales-in-ubuntu/:3:0","tags":null,"title":"Configure Locales in Ubuntu","uri":"/posts/development/configure-locales-in-ubuntu/#adjusting-locales"},{"categories":["Development"],"collections":null,"content":"In this article, we will guide you through the process of installing VirtualBox as a Windows service using NSSM (Non-Sucking Service Manager) and starting a Virtual Machine automatically as a service. This can be useful for scenarios where you need a Virtual Machine to run in the background, even when the user is not logged in. ","date":"22-06-2016","objectID":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/:0:0","tags":null,"title":"Installing VirtualBox as a Service on Windows using NSSM","uri":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before we begin, ensure that you have the following prerequisites in place: VirtualBox: You should have VirtualBox installed on your Windows machine. You can download it from the official VirtualBox website. NSSM (Non-Sucking Service Manager): NSSM is a tool that allows you to create and manage Windows services. We recommend installing it via Chocolatey (Choco), a popular package manager for Windows. If you haven\u0026rsquo;t installed Choco, you can download and install it from the Chocolatey website. 
To install NSSM using Chocolatey, open a Command Prompt with administrator privileges and run the following command: choco install nssm ","date":"22-06-2016","objectID":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/:1:0","tags":null,"title":"Installing VirtualBox as a Service on Windows using NSSM","uri":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Installing VirtualBox as a Service Now, let\u0026rsquo;s proceed with installing VirtualBox as a Windows service: Open Command Prompt as Administrator: Right-click on the Command Prompt icon and select \u0026ldquo;Run as administrator\u0026rdquo; to ensure you have the necessary privileges. Install VirtualBox as a Service: Run the following NSSM command to install VirtualBox as a service. Replace \u0026lt;VIRTUAL-MACHINE_NAME\u0026gt; with the name of your Virtual Machine: nssm install VirtualBoxVM Configure Service Parameters: Application Path: Set the path to the VBoxHeadless.exe executable. By default, it\u0026rsquo;s located in \u0026ldquo;C:\\Program Files\\Oracle\\VirtualBox\\VBoxHeadless.exe.\u0026rdquo; Path: C:\\Program Files\\Oracle\\VirtualBox\\VBoxHeadless.exe Arguments: Specify the -startvm parameter followed by the name of your Virtual Machine. Arguments: -startvm \u0026lt;VIRTUAL-MACHINE_NAME\u0026gt; Save Configuration: Click on the \u0026ldquo;Install service\u0026rdquo; button to save the configuration. You should see a message indicating that the service has been installed. Start the Service: You can start the service immediately using the following command: net start VirtualBoxVM This command will start your Virtual Machine as a service. 
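The GUI steps above can also be collapsed into NSSM's one-shot command-line form. A sketch, assuming an elevated prompt with nssm on the PATH; the VM name MyVM is a placeholder:

```shell
# One-shot CLI alternative to the nssm GUI dialog (run from an elevated prompt).
# "MyVM" is a placeholder VM name; quote the path because it contains spaces.
VM_NAME='MyVM'
VBOX_HEADLESS='C:\Program Files\Oracle\VirtualBox\VBoxHeadless.exe'
if command -v nssm >/dev/null 2>&1; then
  nssm install VirtualBoxVM "$VBOX_HEADLESS" -startvm "$VM_NAME" \
    || echo 'nssm install failed; does the service already exist?'
else
  echo 'nssm not found; install it first (choco install nssm)'
fi
```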
","date":"22-06-2016","objectID":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/:2:0","tags":null,"title":"Installing VirtualBox as a Service on Windows using NSSM","uri":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/#installing-virtualbox-as-a-service"},{"categories":["Development"],"collections":null,"content":"Managing the VirtualBox Service You\u0026rsquo;ve successfully installed VirtualBox as a Windows service using NSSM. To manage the service, you can use standard Windows service management tools or commands, such as: To start the service: net start VirtualBoxVM To stop the service: net stop VirtualBoxVM To check the status of the service: sc query VirtualBoxVM Now, your Virtual Machine will run as a service, allowing you to use it without the need for manual intervention, even when you are not logged into the system. ","date":"22-06-2016","objectID":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/:3:0","tags":null,"title":"Installing VirtualBox as a Service on Windows using NSSM","uri":"/posts/development/installing-virtualbox-as-a-service-on-windows-using-nssm/#managing-the-virtualbox-service"},{"categories":["Development"],"collections":null,"content":"In this article, we will walk you through the process of setting up an HTTPS proxy using Apache. An HTTPS proxy can be useful for various purposes, such as load balancing, reverse proxying, or providing an additional layer of security for your web applications. To configure Apache as an HTTPS proxy, you will need to make use of the mod_proxy and mod_proxy_http modules, along with the SSL-related settings. 
Below are the steps to configure Apache as an HTTPS proxy with the provided SSL settings: ","date":"22-06-2016","objectID":"/posts/development/setting-up-an-https-proxy-with-apache/:0:0","tags":null,"title":"Setting up an HTTPS Proxy with Apache","uri":"/posts/development/setting-up-an-https-proxy-with-apache/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure that you have Apache installed on your server. You should also have an SSL certificate and key ready for the domain you want to proxy. ","date":"22-06-2016","objectID":"/posts/development/setting-up-an-https-proxy-with-apache/:1:0","tags":null,"title":"Setting up an HTTPS Proxy with Apache","uri":"/posts/development/setting-up-an-https-proxy-with-apache/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Enable the Required Apache Modules First, you need to enable the necessary Apache modules. Open your terminal and run the following commands: sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod ssl These commands will enable the mod_proxy, mod_proxy_http, and mod_ssl modules. ","date":"22-06-2016","objectID":"/posts/development/setting-up-an-https-proxy-with-apache/:2:0","tags":null,"title":"Setting up an HTTPS Proxy with Apache","uri":"/posts/development/setting-up-an-https-proxy-with-apache/#step-1-enable-the-required-apache-modules"},{"categories":["Development"],"collections":null,"content":"Step 2: Create a Virtual Host Configuration Create a new Apache virtual host configuration file for your proxy. You can do this by creating a new .conf file in the /etc/apache2/sites-available/ directory. 
Replace example.com with your domain or subdomain: sudo nano /etc/apache2/sites-available/proxy-example.com.conf Add the following configuration to your virtual host file, adjusting it as needed: \u0026lt;VirtualHost *:443\u0026gt; ServerName example.com SSLEngine on SSLCertificateFile /path/to/your/certificate.crt SSLCertificateKeyFile /path/to/your/private-key.key SSLProxyEngine on SSLProxyVerify none SSLProxyCheckPeerCN off SSLProxyCheckPeerName off SSLProxyCheckPeerExpire off ProxyPass / http://your-backend-server/ ProxyPassReverse / http://your-backend-server/ ErrorLog ${APACHE_LOG_DIR}/proxy-error.log CustomLog ${APACHE_LOG_DIR}/proxy-access.log combined \u0026lt;/VirtualHost\u0026gt; Make sure to replace /path/to/your/certificate.crt and /path/to/your/private-key.key with the actual paths to your SSL certificate and private key. Also, replace example.com with your domain and http://your-backend-server/ with the URL of your backend server. ","date":"22-06-2016","objectID":"/posts/development/setting-up-an-https-proxy-with-apache/:3:0","tags":null,"title":"Setting up an HTTPS Proxy with Apache","uri":"/posts/development/setting-up-an-https-proxy-with-apache/#step-2-create-a-virtual-host-configuration"},{"categories":["Development"],"collections":null,"content":"Step 3: Enable the Virtual Host Enable the virtual host configuration you just created: sudo a2ensite proxy-example.com ","date":"22-06-2016","objectID":"/posts/development/setting-up-an-https-proxy-with-apache/:4:0","tags":null,"title":"Setting up an HTTPS Proxy with Apache","uri":"/posts/development/setting-up-an-https-proxy-with-apache/#step-3-enable-the-virtual-host"},{"categories":["Development"],"collections":null,"content":"Step 4: Restart Apache Finally, restart Apache to apply the changes: sudo systemctl restart apache2 Your Apache server should now be configured as an HTTPS proxy with the specified SSL settings. 
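Before restarting, it is worth checking that the enabled vhost parses cleanly. A sketch, assuming Apache is installed and apachectl is on the PATH:

```shell
# After a2ensite, verify the configuration parses before restarting Apache.
SITE='proxy-example.com'
if command -v apachectl >/dev/null 2>&1; then
  apachectl configtest || echo "configtest failed; review $SITE.conf"
else
  echo 'apachectl not found; is Apache installed?'
fi
```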
Requests to https://example.com will be proxied to your backend server with SSLProxy settings applied. Remember to secure your server and manage your SSL certificates properly to ensure the security of your proxy setup. That\u0026rsquo;s it! You\u0026rsquo;ve successfully set up an HTTPS proxy with Apache using the provided SSL settings. ","date":"22-06-2016","objectID":"/posts/development/setting-up-an-https-proxy-with-apache/:5:0","tags":null,"title":"Setting up an HTTPS Proxy with Apache","uri":"/posts/development/setting-up-an-https-proxy-with-apache/#step-4-restart-apache"},{"categories":["Development"],"collections":null,"content":"Samba is a widely-used software suite that enables file and printer sharing between Windows and Linux systems. The primary configuration file for Samba is typically located at /etc/samba/smb.conf. This article will guide you through setting up a Samba share on Linux, utilizing the provided information. ","date":"08-06-2016","objectID":"/posts/development/configuring-samba-share-in-linux/:0:0","tags":null,"title":"Configuring Samba Share in Linux","uri":"/posts/development/configuring-samba-share-in-linux/#"},{"categories":["Development"],"collections":null,"content":"Samba Configuration Path The main configuration file for Samba is usually located at /etc/samba/smb.conf. This file contains various settings that control the behavior of Samba, including share definitions, authentication settings, and access controls. ","date":"08-06-2016","objectID":"/posts/development/configuring-samba-share-in-linux/:0:1","tags":null,"title":"Configuring Samba Share in Linux","uri":"/posts/development/configuring-samba-share-in-linux/#samba-configuration-path"},{"categories":["Development"],"collections":null,"content":"Creating a Samba Share To create a Samba share, you need to add a section to the smb.conf file that defines the share\u0026rsquo;s properties. 
Based on the provided information, you want to create a share named \u0026ldquo;admin\u0026rdquo; with specific access controls. Here\u0026rsquo;s an example of how you could define the \u0026ldquo;admin\u0026rdquo; share in the smb.conf file: [admin] path = /var/lib/samba/usershares read only = no guest ok = no admin users = @usershare_acl Let\u0026rsquo;s break down the options used in this configuration: [admin]: This is the name of the share that clients will use to access the shared folder. path: This is the path to the shared folder on the file system. read only: This option determines whether clients can only read files from the share (yes) or also write to it (no). guest ok: This option specifies whether guest access is allowed (yes) or not (no). admin users: This option specifies a list of users or groups who have administrative privileges on the share. ","date":"08-06-2016","objectID":"/posts/development/configuring-samba-share-in-linux/:0:2","tags":null,"title":"Configuring Samba Share in Linux","uri":"/posts/development/configuring-samba-share-in-linux/#creating-a-samba-share"},{"categories":["Development"],"collections":null,"content":"User ACL (Access Control List) The admin usershare_acl value S-1-1-0:R,S-1-22-1-1000:F appears to represent user access control. However, without further context, it\u0026rsquo;s challenging to provide a detailed explanation. The format seems similar to a user SID (Security Identifier) followed by a permission indicator. For instance: S-1-1-0:R: This could mean that the user with SID S-1-1-0 has read (R) permission. S-1-22-1-1000:F: This could indicate that the user with SID S-1-22-1-1000 has full (F) permission. Please ensure that these SIDs correspond to valid users or groups on your system and that you understand the implications of the permissions being granted. 
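After adding a share definition like the one above, Samba's bundled checker can confirm the file is syntactically valid. A sketch, assuming the samba package is installed (testparm ships with it):

```shell
# Validate smb.conf before reloading Samba; testparm ships with the samba package.
CONF='/etc/samba/smb.conf'
if command -v testparm >/dev/null 2>&1; then
  # -s suppresses the interactive prompt and dumps the parsed configuration
  testparm -s "$CONF" || echo "testparm reported errors in $CONF"
else
  echo 'testparm not found; is Samba installed?'
fi
```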
","date":"08-06-2016","objectID":"/posts/development/configuring-samba-share-in-linux/:0:3","tags":null,"title":"Configuring Samba Share in Linux","uri":"/posts/development/configuring-samba-share-in-linux/#user-acl-access-control-list"},{"categories":["Development"],"collections":null,"content":"Conclusion Configuring Samba shares involves defining sections in the smb.conf file that specify the shared folder\u0026rsquo;s properties and access controls. The provided information gives a glimpse into how a Samba share named \u0026ldquo;admin\u0026rdquo; could be set up with specific user access controls. Make sure to tailor the configuration to your specific needs and ensure that the user and group SIDs are correctly set up on your system. ","date":"08-06-2016","objectID":"/posts/development/configuring-samba-share-in-linux/:0:4","tags":null,"title":"Configuring Samba Share in Linux","uri":"/posts/development/configuring-samba-share-in-linux/#conclusion"},{"categories":["Development"],"collections":null,"content":"In Docker, it\u0026rsquo;s common to want non-root users to execute Docker commands without needing to use sudo each time. This is achieved by adding the user to the docker group. Here are the steps to do that: Open a Terminal: First, open a terminal on your Linux system. Check If the docker Group Exists: Run the following command to check if the docker group already exists: cat /etc/group | grep docker If it doesn\u0026rsquo;t exist, you will not see any output from this command. Create the docker Group (if necessary): If the docker group doesn\u0026rsquo;t exist, you can create it using the following command: sudo groupadd docker Add the User to the docker Group: To add the user john to the docker group, use the usermod command with the -aG option: sudo usermod -aG docker john This command appends (-a) the user john to the docker group (-G). 
Verify the User\u0026rsquo;s Group Membership: To confirm that the user john has been added to the docker group, you can use the id command: id john You should see docker listed among the user\u0026rsquo;s groups. Log Out and Log Back In: For the changes to take effect, it\u0026rsquo;s recommended to log out and log back in as the user john. This ensures that the group membership is updated. Test Docker Access: After logging back in, you can test if john can run Docker commands without sudo. For example: docker --version If you see the Docker version information without any permission errors, then john now has the necessary permissions to use Docker without sudo. Remember that allowing a user to run Docker commands without sudo means they have significant control over the system, so be cautious when granting this privilege. It\u0026rsquo;s essential to trust the user and follow best security practices when managing Docker access. ","date":"07-06-2016","objectID":"/posts/development/adding-a-non-root-user-to-execute-docker-commands/:0:0","tags":null,"title":"Adding a Non-Root User to Execute Docker Commands","uri":"/posts/development/adding-a-non-root-user-to-execute-docker-commands/#"},{"categories":["Development"],"collections":null,"content":"If you are encountering an issue with Chrome\u0026rsquo;s HSTS (HTTP Strict Transport Security) certificate for a specific website, you can follow these steps to resolve it. HSTS is a security feature that enforces HTTPS connections for websites, and sometimes it can lead to certificate errors. Here\u0026rsquo;s a step-by-step guide on how to fix it: Open Google Chrome and go to the address bar. Type the following URL and press Enter: chrome://net-internals/#hsts In the \u0026ldquo;Delete domain\u0026rdquo; section, you will see a text field. Type the domain name of the website for which you are encountering the HSTS issue into the text field. 
For example, if you are having problems with \u0026ldquo;example.com,\u0026rdquo; type \u0026ldquo;example.com\u0026rdquo; (without quotes). After typing the domain name, click the \u0026ldquo;Delete\u0026rdquo; button. This action will remove the HSTS settings for the specified domain. Now, in the \u0026ldquo;Query domain\u0026rdquo; section, you will find another text field. Type the same domain name that you just removed in the previous step into this text field. After typing the domain name, click the \u0026ldquo;Query\u0026rdquo; button. If the HSTS settings have been successfully removed, your response should be \u0026ldquo;Not found.\u0026rdquo; This indicates that the HSTS entry for the domain has been deleted, and Chrome will no longer enforce HTTPS for that website. By following these steps, you should be able to resolve the Chrome Certificate HSTS issue for the specific website you are experiencing problems with. Please note that this will only affect the HSTS settings for that particular domain and will not impact other websites\u0026rsquo; security settings. ","date":"20-05-2016","objectID":"/posts/development/how-to-fix-chrome-certificate-hsts-issue/:0:0","tags":null,"title":"How to Fix Chrome Certificate HSTS Issue","uri":"/posts/development/how-to-fix-chrome-certificate-hsts-issue/#"},{"categories":["Development"],"collections":null,"content":"It looks like you\u0026rsquo;re trying to log in as the root user and then switch to another user, www-data, with the su command in a Unix-like operating system. Here\u0026rsquo;s a breakdown of what\u0026rsquo;s happening in your provided commands: $ su: This command is used to switch to another user account. When executed without specifying a username, it assumes you want to switch to the root user. Password:: You\u0026rsquo;ll be prompted to enter the root user\u0026rsquo;s password. You should enter the root password to proceed. 
su -s /bin/bash www-data: This command switches to the www-data user, using the -s flag to specify the shell (in this case, /bin/bash). This is useful for running commands as www-data, whose login shell is usually disabled. Note that working as the root user is potentially dangerous: you have full administrative privileges, and mistakes or misconfigurations can have serious consequences. Switch to other users with caution and only when necessary for specific tasks. ","date":"21-04-2016","objectID":"/posts/development/login-to-any-user-as-root/:0:0","tags":null,"title":"Login to any user as root","uri":"/posts/development/login-to-any-user-as-root/#"},{"categories":["Development"],"collections":null,"content":"Port forwarding allows you to expose services running inside a virtual machine, such as Docker-Machine, to your local machine. In this example, we\u0026rsquo;ll use VirtualBox and Docker-Machine to forward port 8080 from the virtual machine to localhost. ","date":"08-03-2016","objectID":"/posts/development/how-to-port-forward-docker-machine-to-localhost/:0:0","tags":null,"title":"How to Port Forward Docker-Machine to Localhost","uri":"/posts/development/how-to-port-forward-docker-machine-to-localhost/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following: Docker-Machine installed. VirtualBox installed. The virtual machine you want to configure, e.g., boot2docker-vm. 
","date":"08-03-2016","objectID":"/posts/development/how-to-port-forward-docker-machine-to-localhost/:0:1","tags":null,"title":"How to Port Forward Docker-Machine to Localhost","uri":"/posts/development/how-to-port-forward-docker-machine-to-localhost/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Open a Terminal Open a terminal on your local machine. You\u0026rsquo;ll use this terminal to run commands to configure port forwarding. ","date":"08-03-2016","objectID":"/posts/development/how-to-port-forward-docker-machine-to-localhost/:0:2","tags":null,"title":"How to Port Forward Docker-Machine to Localhost","uri":"/posts/development/how-to-port-forward-docker-machine-to-localhost/#step-1-open-a-terminal"},{"categories":["Development"],"collections":null,"content":"Step 2: List Virtual Machines Run the following command to list the virtual machines managed by Docker-Machine: docker-machine ls This command will display a list of virtual machines, including their names and status. Make a note of the name of the virtual machine you want to configure for port forwarding. ","date":"08-03-2016","objectID":"/posts/development/how-to-port-forward-docker-machine-to-localhost/:0:3","tags":null,"title":"How to Port Forward Docker-Machine to Localhost","uri":"/posts/development/how-to-port-forward-docker-machine-to-localhost/#step-2-list-virtual-machines"},{"categories":["Development"],"collections":null,"content":"Step 3: Configure Port Forwarding Use the VBoxManage command to configure port forwarding for your virtual machine. 
There are two common variations of the command: To forward port 8080 from the virtual machine to port 8080 on all interfaces of your local machine: VBoxManage controlvm \u0026lt;your-vm-name\u0026gt; natpf1 \u0026#34;nginx,tcp,,8080,,8080\u0026#34; To forward port 8080 from the virtual machine to port 8080 on 127.0.0.1 (localhost) only: VBoxManage controlvm \u0026lt;your-vm-name\u0026gt; natpf1 \u0026#34;nginx,tcp,127.0.0.1,8080,,8080\u0026#34; Replace \u0026lt;your-vm-name\u0026gt; with the actual name of your virtual machine obtained from the docker-machine ls command. ","date":"08-03-2016","objectID":"/posts/development/how-to-port-forward-docker-machine-to-localhost/:0:4","tags":null,"title":"How to Port Forward Docker-Machine to Localhost","uri":"/posts/development/how-to-port-forward-docker-machine-to-localhost/#step-3-configure-port-forwarding"},{"categories":["Development"],"collections":null,"content":"Step 4: Verify Port Forwarding To ensure that the port forwarding is configured correctly, you can run the following command to check the port forwarding rules for your virtual machine: VBoxManage showvminfo \u0026lt;your-vm-name\u0026gt; --machinereadable Look for the section that begins with \u0026quot;Forwarding(0)\u0026quot;. It should display the port forwarding rule you\u0026rsquo;ve just added. ","date":"08-03-2016","objectID":"/posts/development/how-to-port-forward-docker-machine-to-localhost/:0:5","tags":null,"title":"How to Port Forward Docker-Machine to Localhost","uri":"/posts/development/how-to-port-forward-docker-machine-to-localhost/#step-4-verify-port-forwarding"},{"categories":["Development"],"collections":null,"content":"Step 5: Access the Service Now that you\u0026rsquo;ve configured port forwarding, you can access the service running inside your Docker-Machine virtual machine by accessing http://localhost:8080 in your web browser. That\u0026rsquo;s it! 
You\u0026rsquo;ve successfully set up port forwarding from your Docker-Machine virtual machine to your local machine, allowing you to access services running in the virtual machine as if they were running locally. ","date":"08-03-2016","objectID":"/posts/development/how-to-port-forward-docker-machine-to-localhost/:0:6","tags":null,"title":"How to Port Forward Docker-Machine to Localhost","uri":"/posts/development/how-to-port-forward-docker-machine-to-localhost/#step-5-access-the-service"},{"categories":["Development"],"collections":null,"content":"Cron logs on Ubuntu 14.04 are typically stored in the /var/log/syslog file. However, if you wish to separate cron-related logs into their own file, you can follow the steps below to create a dedicated log file for cron messages. Here\u0026rsquo;s how you can do it: Open the 50-default.conf file in the /etc/rsyslog.d/ directory: cd /etc/rsyslog.d/ sudo nano 50-default.conf Uncomment the line that corresponds to cron messages. Remove the \u0026ldquo;#\u0026rdquo; symbol at the beginning of the line: #cron.* /var/log/cron.log Save the file and exit the text editor. Restart the rsyslog service to apply the changes: sudo service rsyslog restart Restart the cron daemon to ensure it starts writing messages to the new log file: sudo service cron restart After following these steps, your cron-related logs will be redirected to the /var/log/cron.log file instead of being mixed with other system logs in /var/log/syslog. This separation can make it easier to monitor and manage cron-related events. Please note that these instructions are specific to Ubuntu 14.04. If you are using a newer version of Ubuntu, the steps might differ slightly. 
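The uncommenting step can also be done non-interactively with sed. A minimal sketch, run against a self-contained scratch copy of the file (the sample contents below are abbreviated and made up) so nothing on the real system is touched:

```shell
# Scratch copy standing in for /etc/rsyslog.d/50-default.conf
cat > /tmp/50-default.conf <<'EOF'
#cron.*                         /var/log/cron.log
daemon.*                        -/var/log/daemon.log
EOF

# Drop the leading "#" from the cron line (same effect as editing by hand)
sed -i 's|^#cron\.\*|cron.*|' /tmp/50-default.conf

# The cron line is now active
grep '^cron\.' /tmp/50-default.conf
```

On the real file you would run the sed command with sudo against /etc/rsyslog.d/50-default.conf and then restart rsyslog and cron as described above.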
","date":"07-01-2016","objectID":"/posts/development/ocation-of-cron-logs-on-ubuntu-1404/:0:0","tags":null,"title":"Location of Cron Logs on Ubuntu 14.04","uri":"/posts/development/ocation-of-cron-logs-on-ubuntu-1404/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re a developer using Git for version control and you\u0026rsquo;re looking for a powerful way to visualize the differences between different versions of your code, Vimdiff is a handy tool to have in your toolkit. Vimdiff is a feature-rich text editor that comes with built-in support for comparing and highlighting differences between files, making it an ideal choice for inspecting code changes. In this article, we\u0026rsquo;ll explore how to set up Vimdiff as a diff tool for Git and how to make the most of its features for effective code comparison. ","date":"06-01-2016","objectID":"/posts/development/viewing-all-git-diffs-with-vimdiff/:0:0","tags":null,"title":"Viewing All Git Diffs with Vimdiff","uri":"/posts/development/viewing-all-git-diffs-with-vimdiff/#"},{"categories":["Development"],"collections":null,"content":"Setting Up Vimdiff as Git\u0026rsquo;s Diff Tool To use Vimdiff as the default diff tool for Git, follow these steps: Configure Git to use Vimdiff as the diff tool: git config --global diff.tool vimdiff Disable the diff tool prompt to make the process more seamless: git config --global difftool.prompt false Create a Git alias for the difftool command: git config --global alias.d difftool With these configurations in place, you can now use git d (or git difftool) to invoke Vimdiff and compare different versions of your code. 
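The three settings can be applied in one go. The sketch below points HOME at a throwaway directory so the demo writes to a scratch global config rather than your real ~/.gitconfig; drop that line when configuring your actual environment:

```shell
# Scratch HOME so this demo does not touch your real ~/.gitconfig
export HOME="$(mktemp -d)"

git config --global diff.tool vimdiff      # use Vimdiff for diffs
git config --global difftool.prompt false  # skip the per-file prompt
git config --global alias.d difftool       # enable `git d`

git config --global diff.tool              # prints: vimdiff
```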
","date":"06-01-2016","objectID":"/posts/development/viewing-all-git-diffs-with-vimdiff/:0:1","tags":null,"title":"Viewing All Git Diffs with Vimdiff","uri":"/posts/development/viewing-all-git-diffs-with-vimdiff/#setting-up-vimdiff-as-gits-diff-tool"},{"categories":["Development"],"collections":null,"content":"Using Vimdiff to Visualize Code Differences When you use Vimdiff as your Git diff tool, it opens each file in a split view, highlighting the differences between them. Identical code sections are automatically folded, providing a clear view of the differing portions without the distraction of unchanged code. This can be particularly useful when comparing large files or when trying to identify specific changes in a complex codebase. As you navigate through the code, Vimdiff keeps pace with your edits, ensuring that the diff highlighting accurately reflects the changes you make. You can edit either side of the comparison and save your changes without any hassle. However, be aware that if you attempt to modify the repository-stored version of the file, your changes will be discarded upon exit, as Git relies on a temporary copy for diffing purposes. ","date":"06-01-2016","objectID":"/posts/development/viewing-all-git-diffs-with-vimdiff/:0:2","tags":null,"title":"Viewing All Git Diffs with Vimdiff","uri":"/posts/development/viewing-all-git-diffs-with-vimdiff/#using-vimdiff-to-visualize-code-differences"},{"categories":["Development"],"collections":null,"content":"Essential Vimdiff Commands Here are some basic Vimdiff commands that are useful for comparing code and managing differences: dp: Diff Put: Copies changes under the cursor from one buffer to the other, effectively making them identical and removing the difference. do: Diff Obtain: Replaces the change under the cursor with the content from the other buffer, making them identical. Vimdiff also offers navigation commands to move between differences: ]c: Jump to the next diff. [c: Jump to the previous diff. 
","date":"06-01-2016","objectID":"/posts/development/viewing-all-git-diffs-with-vimdiff/:0:3","tags":null,"title":"Viewing All Git Diffs with Vimdiff","uri":"/posts/development/viewing-all-git-diffs-with-vimdiff/#essential-vimdiff-commands"},{"categories":["Development"],"collections":null,"content":"Customizing Vimdiff Highlighting By default, Vimdiff highlights changed lines in a distinct color. However, if you find this distracting or wish to customize the highlighting, you can do so by modifying your Vim configuration. Here\u0026rsquo;s an example of how to achieve this: if \u0026amp;diff highlight! link DiffText MatchParen endif The above snippet turns off highlighting for unchanged portions of code while ensuring that the changed lines remain easily distinguishable. This can help you quickly spot differences without being overwhelmed by excessive highlighting. In conclusion, Vimdiff is a powerful tool for comparing code changes in Git repositories. By configuring it as your Git diff tool and leveraging its intuitive features, you can streamline your code review process and make identifying and understanding changes more efficient. Whether you\u0026rsquo;re comparing configuration files or intricate codebases, Vimdiff\u0026rsquo;s capabilities can help you navigate differences with ease. ","date":"06-01-2016","objectID":"/posts/development/viewing-all-git-diffs-with-vimdiff/:0:4","tags":null,"title":"Viewing All Git Diffs with Vimdiff","uri":"/posts/development/viewing-all-git-diffs-with-vimdiff/#customizing-vimdiff-highlighting"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Cygwin provides a Unix-like environment for Windows, including an implementation of OpenSSH, which allows you to establish secure remote connections using the SSH (Secure Shell) protocol. However, connecting to Cygwin\u0026rsquo;s SSH server (sshd) using public key authentication can sometimes present challenges. 
This blog post will guide you through a troubleshooting process to resolve issues related to connecting to Cygwin sshd with public key authentication. ","date":"24-12-2015","objectID":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/:0:0","tags":["windows","cygwin"],"title":"Troubleshooting SSHD Connection Issues with Public Key in Cygwin","uri":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 1: Update /etc/sshd_config Open the Cygwin terminal and navigate to the Cygwin installation directory (typically C:\\cygwin64 or C:\\cygwin). Locate the \u0026ldquo;sshd_config\u0026rdquo; file in the etc directory. The full path should be something like C:\\cygwin64\\etc\\sshd_config. Open the \u0026ldquo;sshd_config\u0026rdquo; file using a text editor (e.g., Notepad++). Look for the \u0026ldquo;StrictModes\u0026rdquo; option and set it to \u0026ldquo;no.\u0026rdquo; This allows more relaxed permission checking for the authorized_keys file. Save the changes and close the text editor. ","date":"24-12-2015","objectID":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/:1:0","tags":["windows","cygwin"],"title":"Troubleshooting SSHD Connection Issues with Public Key in Cygwin","uri":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/#step-1-update-etcsshd_config"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 2: Generate SSH Key Pair If you haven\u0026rsquo;t already done so, generate an SSH key pair on the client machine using the \u0026ldquo;ssh-keygen\u0026rdquo; command. Make sure to choose a strong passphrase to protect your private key. By default, the key pair will be saved in the \u0026ldquo;.ssh\u0026rdquo; directory in the user\u0026rsquo;s home directory (e.g., C:\\Users\\YourUsername.ssh). 
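After the edit, the relevant line of sshd_config should read:

```
# C:\cygwin64\etc\sshd_config
# Relax permission checking on ~/.ssh and authorized_keys
StrictModes no
```

Note that StrictModes no trades away a safety check; an alternative is to keep it set to yes and fix the permissions on the .ssh directory instead.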
","date":"24-12-2015","objectID":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/:2:0","tags":["windows","cygwin"],"title":"Troubleshooting SSHD Connection Issues with Public Key in Cygwin","uri":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/#step-2-generate-ssh-key-pair"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 3: Copy Public Key to Cygwin Server In the Cygwin terminal, navigate to the user\u0026rsquo;s home directory (e.g., cd ~). Create the \u0026ldquo;.ssh\u0026rdquo; directory if it doesn\u0026rsquo;t exist: mkdir .ssh Use the \u0026ldquo;scp\u0026rdquo; command to copy the public key to the Cygwin server: scp \u0026lt;public_key_file\u0026gt; user@server:/home/user/.ssh/authorized_keys Replace \u0026ldquo;\u0026lt;public_key_file\u0026gt;\u0026rdquo; with the path to your public key file. Replace \u0026ldquo;user\u0026rdquo; with your username on the Cygwin server. Replace \u0026ldquo;server\u0026rdquo; with the hostname or IP address of the Cygwin server. 
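Even with StrictModes relaxed, it is good practice to keep the standard permissions on the key files. A sketch of the expected layout, shown against a scratch HOME so it can be run safely anywhere:

```shell
# Scratch HOME so the demo does not touch a real account
HOME="$(mktemp -d)"

mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"                     # only the owner may enter
touch "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"     # only the owner may read/write

ls -ld "$HOME/.ssh" "$HOME/.ssh/authorized_keys"
```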
","date":"24-12-2015","objectID":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/:3:0","tags":["windows","cygwin"],"title":"Troubleshooting SSHD Connection Issues with Public Key in Cygwin","uri":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/#step-3-copy-public-key-to-cygwin-server"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 4: Restart SSHD Service In the Cygwin terminal, run the following command to restart the sshd service: net stop sshd \u0026amp;\u0026amp; net start sshd ","date":"24-12-2015","objectID":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/:4:0","tags":["windows","cygwin"],"title":"Troubleshooting SSHD Connection Issues with Public Key in Cygwin","uri":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/#step-4-restart-sshd-service"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 5: Test SSH Connection On the client machine, open a new terminal or command prompt. Run the following command to connect to the Cygwin server using SSH: ssh user@server Replace \u0026ldquo;user\u0026rdquo; with your username on the Cygwin server. Replace \u0026ldquo;server\u0026rdquo; with the hostname or IP address of the Cygwin server. If the connection is successful and prompts for the passphrase, enter the passphrase associated with your private key. 
","date":"24-12-2015","objectID":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/:5:0","tags":["windows","cygwin"],"title":"Troubleshooting SSHD Connection Issues with Public Key in Cygwin","uri":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/#step-5-test-ssh-connection"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Conclusion By following the troubleshooting steps outlined in this blog post, you should be able to connect to the Cygwin sshd server using public key authentication. Remember to ensure that the \u0026ldquo;StrictModes\u0026rdquo; option in the sshd_config file is set to \u0026ldquo;no\u0026rdquo; to allow more flexible permission checking for the authorized_keys file. Generating an SSH key pair, copying the public key to the Cygwin server, and restarting the sshd service are critical steps in establishing a successful SSH connection. ","date":"24-12-2015","objectID":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/:6:0","tags":["windows","cygwin"],"title":"Troubleshooting SSHD Connection Issues with Public Key in Cygwin","uri":"/posts/development/troubleshooting-sshd-connection-issues-with-public-key-in-cygwin/#conclusion"},{"categories":["Development"],"collections":null,"content":"To add a command-line interface (CLI) and command-line arguments (args) to PhantomJS when using Selenium WebDriver in Node.js, you can modify your existing code as follows. 
I\u0026rsquo;ll provide you with a step-by-step guide and also explain the changes made: const webdriver = require(\u0026#39;selenium-webdriver\u0026#39;); const { Builder } = webdriver; const phantomjs = require(\u0026#39;phantomjs-prebuilt\u0026#39;); // Define your PhantomJS command-line arguments here const phantomjsArgs = [ \u0026#39;--load-images=false\u0026#39;, // Add any other arguments you need here ]; // Set up capabilities with the CLI arguments const capabilities = webdriver.Capabilities.phantomjs(); capabilities.set(\u0026#39;phantomjs.cli.args\u0026#39;, phantomjsArgs); // Set the path to the PhantomJS executable const phantomjsPath = phantomjs.path; capabilities.set(\u0026#39;phantomjs.binary.path\u0026#39;, phantomjsPath); // Create a WebDriver instance with the configured capabilities const driver = new Builder() .withCapabilities(capabilities) .build(); // Example usage: navigate to a webpage driver.get(\u0026#39;https://example.com\u0026#39;) .then(() =\u0026gt; { console.log(\u0026#39;Page title:\u0026#39;, driver.getTitle()); }) .catch(error =\u0026gt; { console.error(\u0026#39;Error:\u0026#39;, error); }); // Don\u0026#39;t forget to quit the driver when done driver.quit(); Here are the changes made to your original code: Import the necessary modules: We import the required modules and classes using destructuring for cleaner code. Define PhantomJS command-line arguments: You can define your PhantomJS command-line arguments in the phantomjsArgs array. In this example, we have set --load-images=false as one argument. You can add any other arguments you need to this array. Set capabilities for PhantomJS: We create a capabilities object for PhantomJS using webdriver.Capabilities.phantomjs() and set the CLI arguments using capabilities.set('phantomjs.cli.args', phantomjsArgs). 
Specify the path to the PhantomJS executable: We set the path to the PhantomJS executable using capabilities.set('phantomjs.binary.path', phantomjsPath), where phantomjsPath is obtained from the phantomjs-prebuilt package. Create a WebDriver instance: We create a new WebDriver instance using the configured capabilities. Example usage: You can see an example of using the WebDriver to navigate to a webpage and print the page title. You can replace this with your actual automation tasks. Quit the WebDriver: It\u0026rsquo;s important to call driver.quit() when you are done with the WebDriver to ensure resources are properly released. Make sure you have the selenium-webdriver and phantomjs-prebuilt packages installed in your Node.js project. You can install them using npm: npm install selenium-webdriver phantomjs-prebuilt With these modifications, your PhantomJS WebDriver setup will include the specified CLI arguments. ","date":"16-12-2015","objectID":"/posts/development/add-command-line-interface-cli-args-on-phantomjs/:0:0","tags":null,"title":"Add Command Line Interface (CLI) Args On PhantomJS","uri":"/posts/development/add-command-line-interface-cli-args-on-phantomjs/#"},{"categories":["Development"],"collections":null,"content":"This post walks through a Bash script that checks whether the WiFi connection is up and sends an email once it reconnects, with a breakdown of the script\u0026rsquo;s functionality and the changes made for clarity. 
","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:0:0","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#"},{"categories":["Development"],"collections":null,"content":"Bash Script: Check WiFi Connection and Send Email This Bash script checks whether the WiFi connection is active and sends an email when it reconnects. Here\u0026rsquo;s an overview of the script\u0026rsquo;s key points and improvements: ","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:1:0","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#bash-script-check-wifi-connection-and-send-email"},{"categories":["Development"],"collections":null,"content":"Email Sending The script has been enhanced to include email sending functionality using the mail command. Ensure you replace 'youremail@example.com' with your actual email address. #!/bin/bash # Check if WiFi is not associated (disconnected) if /sbin/iwconfig wlan0 | grep -o \u0026#34;Access Point: Not-Associated\u0026#34; \u0026gt; /dev/null then # If there\u0026#39;s no downtime record, create one if [ ! 
-f ~/.downtime ] then date \u0026gt; ~/.downtime fi # Restart the network manager (requires sudo) sudo service network-manager restart \u0026gt; /dev/null else # If WiFi was previously disconnected, send an email if [ -f ~/.downtime ] then disconnected_date=$(\u0026lt;~/.downtime) connected_date=$(date) rm ~/.downtime # Send an email mail -s \u0026#34;WiFi Reconnected\u0026#34; youremail@example.com \u0026lt;\u0026lt;EOF WiFi was disconnected at: $disconnected_date WiFi reconnected at: $connected_date EOF fi fi ","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:1:1","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#email-sending"},{"categories":["Development"],"collections":null,"content":"Command Syntax The script uses commands like iwconfig and service that require administrative privileges (sudo). Ensure that the script is executed with the appropriate permissions. ","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:1:2","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#command-syntax"},{"categories":["Development"],"collections":null,"content":"Logging The script logs downtime and uptime information in a file (~/.downtime). Ensure that your script has write access to this location (~ refers to the home directory of the user running the script). 
","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:1:3","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#logging"},{"categories":["Development"],"collections":null,"content":"Error Handling Consider adding error handling to your script, especially for commands like iwconfig and service to log errors for debugging purposes. ","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:1:4","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#error-handling"},{"categories":["Development"],"collections":null,"content":"Configuration This script assumes you have a functioning mail command on your system, and it uses the local mail transfer agent (MTA). Configure your system\u0026rsquo;s email settings for proper functionality. ","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:1:5","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#configuration"},{"categories":["Development"],"collections":null,"content":"Automation Consider setting up a cron job to run this script periodically if you want continuous monitoring. By implementing these changes and considerations, your Bash script can effectively monitor WiFi connectivity and send email notifications when the connection is reestablished. 
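For the cron-based automation mentioned above, a crontab entry along these lines would work; the script path is an example and should point at wherever you saved the script:

```
# crontab -e — run the WiFi check every 5 minutes
*/5 * * * * /home/user/bin/wifi-check.sh
```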
","date":"16-12-2015","objectID":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/:1:6","tags":null,"title":"Check Whenever Wifi Is Connected And Send Email When Its Connected","uri":"/posts/development/check-whenever-wifi-is-connected-and-send-email-when-its-connected/#automation"},{"categories":["Development"],"collections":null,"content":"In order to change the default sender name when sending an email using the mail or mailx command, you can follow these steps. This will allow you to replace the default \u0026ldquo;root\u0026rdquo; with your desired name. Please note that these instructions are typically applicable to Unix-like systems such as Linux. Change the Full Name: To change the full name associated with your user account, you can use the chfn (change finger) command. Open your terminal and run the following command, replacing \u0026quot;Your Full Name\u0026quot; with the name you want to use as the sender: chfn -f \u0026#34;Your Full Name\u0026#34; For example: chfn -f \u0026#34;John Doe\u0026#34; This will update the full name associated with your user account. Configure Email Client: If you are using mail or mailx to send emails from the command line, you can configure it to use your updated full name when sending emails. Unfortunately, mail and mailx don\u0026rsquo;t provide built-in options for changing the sender\u0026rsquo;s name. Instead, they typically use the system\u0026rsquo;s default settings. However, you can work around this by using the -r option to specify the \u0026ldquo;From\u0026rdquo; address when sending emails. 
Here\u0026rsquo;s how you can send an email with your updated full name as the sender using mail: echo \u0026#34;This is the body of the email\u0026#34; | mail -s \u0026#34;Subject\u0026#34; -r \u0026#34;Your Full Name \u0026lt;your-email@example.com\u0026gt;\u0026#34; recipient@example.com Replace \u0026quot;Your Full Name\u0026quot; with your updated full name, \u0026quot;your-email@example.com\u0026quot; with your email address, and recipient@example.com with the recipient\u0026rsquo;s email address. For example: echo \u0026#34;Hello there!\u0026#34; | mail -s \u0026#34;Greetings\u0026#34; -r \u0026#34;John Doe \u0026lt;john@example.com\u0026gt;\u0026#34; recipient@example.com This will send an email with \u0026ldquo;John Doe\u0026rdquo; as the sender\u0026rsquo;s name. By following these steps, you can change the sender\u0026rsquo;s name when sending emails from the command line using mail or mailx, and it will display your desired name instead of the default \u0026ldquo;root.\u0026rdquo; ","date":"15-12-2015","objectID":"/posts/development/change-default-form-root-when-sending-email-from-mail-or-mailx/:0:0","tags":null,"title":"Change Default Form Root When Sending Email From Mail Or Mailx","uri":"/posts/development/change-default-form-root-when-sending-email-from-mail-or-mailx/#"},{"categories":["Development"],"collections":null,"content":"To update a cron task and customize the email sent from it, you can use the following command with the \u0026lt;CRON COMMAND\u0026gt; replaced by your actual command: \u0026lt;CRON COMMAND\u0026gt; | mail -E -s \u0026#34;Subject\u0026#34; -r \u0026#34;CUSTOM FORM NAME \u0026lt;user@mail.com\u0026gt;\u0026#34; This command will execute \u0026lt;CRON COMMAND\u0026gt; and send the output as an email with the specified subject and sender address. Here\u0026rsquo;s a breakdown of the command: \u0026lt;CRON COMMAND\u0026gt;: Replace this with the actual command you want to run on a schedule using cron. 
mail: This is the command used to send an email. -E: This option ensures that the email is not sent if the content is empty. -s \u0026quot;Subject\u0026quot;: This sets the subject of the email to \u0026ldquo;Subject.\u0026rdquo; Replace \u0026ldquo;Subject\u0026rdquo; with your desired email subject. -r \u0026quot;CUSTOM FORM NAME \u0026lt;user@mail.com\u0026gt;\u0026quot;: This sets the sender address and name. Replace \u0026ldquo;CUSTOM FORM NAME\u0026rdquo; with the desired sender name, and \u0026ldquo;user@mail.com\u0026rdquo; with the sender\u0026rsquo;s email address. After updating your cron task with this command, it will send emails with the specified customization. ","date":"15-12-2015","objectID":"/posts/development/change-from-value-on-cron-email/:0:0","tags":null,"title":"Change From Value On Cron Email","uri":"/posts/development/change-from-value-on-cron-email/#"},{"categories":["Development"],"collections":null,"content":"When you want to silence the result of a command in a Unix-like shell environment, you can use the \u0026gt; /dev/null or 2\u0026gt; /dev/null redirection techniques, depending on whether you want to suppress standard output (stdout) or standard error (stderr) respectively. Here\u0026rsquo;s how you can use them: To silence standard output (stdout) of a command, use \u0026gt; /dev/null: command \u0026gt; /dev/null This will discard the normal output of the command. To silence standard error (stderr) of a command, use 2\u0026gt; /dev/null: command 2\u0026gt; /dev/null This will discard error messages and keep only the standard output. 
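The two redirections can also be combined to silence a command completely. A small self-contained demonstration, where the echo commands stand in for a real noisy command:

```shell
# Send stdout to /dev/null, then point stderr (fd 2) at the same place
# as stdout (fd 1). The order matters: writing "2>&1 > /dev/null" instead
# would duplicate fd 2 before fd 1 is redirected, leaving errors visible.
captured=$( { echo "normal output"; echo "error output" >&2; } > /dev/null 2>&1 )

[ -z "$captured" ] && echo "both streams silenced"
```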
Here\u0026rsquo;s an example of how you might use these redirections in practice: # Silencing standard output (stdout) ls /etc \u0026gt; /dev/null # Silencing standard error (stderr) ls /nonexistent-directory 2\u0026gt; /dev/null In the first command, the standard output of the ls command is redirected to /dev/null, so you won\u0026rsquo;t see the directory listing of /etc (note that error messages would still appear, since they go to stderr). In the second command, the standard error is redirected to /dev/null, so you won\u0026rsquo;t see any error messages if the directory doesn\u0026rsquo;t exist or if there\u0026rsquo;s a permission issue. ","date":"15-12-2015","objectID":"/posts/development/silence-result-of-command/:0:0","tags":null,"title":"Silence Result Of Command","uri":"/posts/development/silence-result-of-command/#"},{"categories":["Development"],"collections":null,"content":"You want to configure Fail2ban to block SSH public key connection attempts using a custom filter and jail configuration. This setup is designed to identify and block IP addresses that attempt to make SSH key-based connections and fail authentication multiple times. Let\u0026rsquo;s break down your configuration step by step. ","date":"27-11-2015","objectID":"/posts/development/fail2ban-block-ssh-public-key-connection-attempt/:0:0","tags":null,"title":"Fail2ban Block SSH Public Key Connection Attempt","uri":"/posts/development/fail2ban-block-ssh-public-key-connection-attempt/#"},{"categories":["Development"],"collections":null,"content":"Jail Configuration (jail.local) In your jail.local configuration, you have defined a custom jail section for SSH key-based authentication: [ssh-key] enabled = true port = ssh filter = sshd-key logpath = /var/log/auth.log maxretry = 3 [ssh-key]: This is the name of your custom jail section. It allows you to specify different configurations for different services or purposes. enabled = true: This indicates that the jail is enabled and will be active. 
port = ssh: Specifies the port or service that Fail2ban should monitor. In this case, it\u0026rsquo;s set to SSH, which is commonly used for remote shell access. filter = sshd-key: Specifies the filter to use for this jail. The filter is defined in the filter.d directory (sshd-key). logpath = /var/log/auth.log: Defines the path to the log file where Fail2ban will look for SSH authentication attempts. In this case, it\u0026rsquo;s /var/log/auth.log, which is a typical location for authentication logs on Linux systems. maxretry = 3: Sets the maximum number of authentication failures before an IP address is banned. If an IP address fails authentication three times (maxretry times), Fail2ban will take action. ","date":"27-11-2015","objectID":"/posts/development/fail2ban-block-ssh-public-key-connection-attempt/:1:0","tags":null,"title":"Fail2ban Block SSH Public Key Connection Attempt","uri":"/posts/development/fail2ban-block-ssh-public-key-connection-attempt/#jail-configuration-jaillocal"},{"categories":["Development"],"collections":null,"content":"Custom Filter Configuration (filter.d/sshd-key) Your custom filter configuration is defined in the filter.d/sshd-key file. Let\u0026rsquo;s take a closer look at its content: [Definition] failregex = sshd(?:\\[\\d+\\])?: Connection closed by \u0026lt;HOST\u0026gt; .*preauth.*\\s*$ ignoreregex = [Definition]: This section header defines the filter\u0026rsquo;s main configuration. failregex: This line contains a regular expression pattern that matches lines in the SSH authentication log (/var/log/auth.log) indicating a failed key-based authentication attempt. It captures the IP address of the host (\u0026lt;HOST\u0026gt;) in the log line. sshd(?:\\[\\d+\\])?: Connection closed by \u0026lt;HOST\u0026gt; .*preauth.*\\s*$: This regular expression is used to identify failed key-based authentication attempts in the log file. ignoreregex: This line is currently empty, indicating that there are no patterns to be ignored. 
You can add patterns here to exclude certain log entries from being processed by Fail2ban. In summary, your Fail2ban configuration is set up to monitor SSH key-based authentication attempts on port 22 (SSH) and block IP addresses that fail authentication three times or more. It uses a custom filter (sshd-key) to identify failed key-based authentication attempts in the SSH authentication log. Make sure to test this configuration and monitor Fail2ban\u0026rsquo;s actions to ensure it works as expected in your specific environment. ","date":"27-11-2015","objectID":"/posts/development/fail2ban-block-ssh-public-key-connection-attempt/:2:0","tags":null,"title":"Fail2ban Block SSH Public Key Connection Attempt","uri":"/posts/development/fail2ban-block-ssh-public-key-connection-attempt/#custom-filter-configuration-filterdsshd-key"},{"categories":["Development"],"collections":null,"content":"Fail2Ban is a powerful tool for protecting your server against brute-force attacks by banning IP addresses that repeatedly fail authentication attempts. While it can efficiently ban these IPs, you might also want to receive email notifications when such bans occur. This guide will walk you through configuring Fail2Ban to send email notifications when it bans an IP address. ","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:0:0","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure you have the following: A server running Fail2Ban (you can install it using your package manager). A working email setup on your server (you can use a local MTA like Postfix or an external SMTP server). Basic knowledge of editing configuration files. 
","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:1:0","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Configuration Steps ","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:2:0","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#configuration-steps"},{"categories":["Development"],"collections":null,"content":"1. Open the Jail Configuration File First, open the Fail2Ban jail configuration file. This is usually located at /etc/fail2ban/jail.local or /etc/fail2ban/jail.conf. [jail.local] ... ","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:2:1","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#1-open-the-jail-configuration-file"},{"categories":["Development"],"collections":null,"content":"2. Configure the Jail Find the jail configuration section for the service you want to protect. In your example, it\u0026rsquo;s [auth-login]. 
[auth-login] enabled = true filter = auth-login logpath = /var/log/apache2/access.log action = iptables-multiport[name=NoAuthFailures, port=\u0026#34;http,https\u0026#34;] banTime = 3600 findtime = 60 maxRetry = 3 ","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:2:2","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#2-configure-the-jail"},{"categories":["Development"],"collections":null,"content":"3. Configure the Email Action Add an email action to the jail configuration using the mail-whois action. Make sure to set the dest parameter to your desired email address, where you want to receive notifications. [auth-login] enabled = true filter = auth-login logpath = /var/log/apache2/access.log action = iptables-multiport[name=NoAuthFailures, port=\u0026#34;http,https\u0026#34;] mail-whois[name=NoAuthFailures, dest=john@example.com] banTime = 3600 findtime = 60 maxRetry = 3 ","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:2:3","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#3-configure-the-email-action"},{"categories":["Development"],"collections":null,"content":"4. Configure Email Settings You need to configure your email settings in Fail2Ban. This involves specifying the SMTP server details. This can usually be done in the jail.local file or in the jail.d/defaults-debian.conf (or equivalent) file. Here\u0026rsquo;s an example: [DEFAULT] # Email settings destemail = john@example.com sendername = Fail2Ban mta = sendmail destemail: The email address where you want to receive notifications. 
sendername: The name that will appear as the sender of the email. mta: The mail transfer agent to use. Set it to your server\u0026rsquo;s mail system (e.g., sendmail for a local Postfix setup). ","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:2:4","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#4-configure-email-settings"},{"categories":["Development"],"collections":null,"content":"5. Restart Fail2Ban After making these changes, restart Fail2Ban to apply the new configuration: sudo systemctl restart fail2ban ","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:2:5","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#5-restart-fail2ban"},{"categories":["Development"],"collections":null,"content":"6. Test the Configuration You can test if the email notifications are working by triggering a ban. Try deliberately failing authentication a few times (e.g., incorrect login attempts) to exceed the maxRetry value specified in the jail configuration. Fail2Ban should then ban the IP address and send you an email notification. That\u0026rsquo;s it! You\u0026rsquo;ve successfully configured Fail2Ban to send email notifications when it bans IP addresses. This can be a valuable addition to your server\u0026rsquo;s security monitoring setup. 
","date":"27-11-2015","objectID":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/:2:6","tags":null,"title":"How to Configure Fail2Ban to Send Email Notifications when Banning IP Addresses","uri":"/posts/development/how-to-configure-fail2ban-to-send-email-notifications-when-banning-ip-addresses/#6-test-the-configuration"},{"categories":["Development"],"collections":null,"content":"Fail2Ban is a valuable security tool that can help protect your server from unauthorized access attempts, including those targeting Laravel\u0026rsquo;s authentication system. In this article, we\u0026rsquo;ll guide you through setting up Fail2Ban to block authentication attempts and access to the /auth/login URL. ","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:0:0","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before we get started, make sure you have the following prerequisites in place: Fail2Ban Installed: Ensure that Fail2Ban is installed on your server. You can install it using your distribution\u0026rsquo;s package manager (e.g., apt, yum, dnf). Apache Web Server: This tutorial assumes you\u0026rsquo;re using the Apache web server. If you\u0026rsquo;re using a different web server, you\u0026rsquo;ll need to adjust the configuration accordingly. 
","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:1:0","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Configuring Fail2Ban ","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:2:0","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#configuring-fail2ban"},{"categories":["Development"],"collections":null,"content":"Step 1: Create a Filter for Auth/Login Attempts Create a custom filter to match authentication attempts to the /auth/login URL. Create or edit the /etc/fail2ban/filter.d/auth-login.conf file (the filename must match the filter name referenced by the jail) with the following content: [Definition] failregex = ^\u0026lt;HOST\u0026gt; .* \u0026#34;POST /auth/login ignoreregex = This filter definition will look for log lines containing \u0026lt;HOST\u0026gt; (the IP address) and \u0026ldquo;POST /auth/login.\u0026rdquo; ","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:2:1","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#step-1-create-a-filter-for-authlogin-attempts"},{"categories":["Development"],"collections":null,"content":"Step 2: Create a Jail for Auth/Login Attempts Next, create a jail configuration in the /etc/fail2ban/jail.local file. 
Add the following lines: [auth-login] enabled = true filter = auth-login logpath = /var/log/apache2/access.log action = iptables-multiport[name=NoAuthFailures, port=\u0026#34;http,https\u0026#34;] banTime = 3600 findtime = 60 maxRetry = 3 enabled: Set to true to enable this jail. filter: This should match the name of the filter you created earlier (auth-login). logpath: Specify the path to your Apache access log file. action: Use the iptables-multiport action to block IP addresses. banTime: The duration (in seconds) for which an IP address will be banned (1 hour in this case). findtime: The time window (in seconds) during which Fail2Ban will look for repeated login attempts (60 seconds in this case). maxRetry: The number of failed login attempts that trigger a ban (3 attempts in this case). ","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:2:2","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#step-2-create-a-jail-for-authlogin-attempts"},{"categories":["Development"],"collections":null,"content":"Step 3: Restart Fail2Ban After making these changes, restart Fail2Ban to apply the new configuration: sudo service fail2ban restart ","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:2:3","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#step-3-restart-fail2ban"},{"categories":["Development"],"collections":null,"content":"Testing the Configuration To test if Fail2Ban is working correctly, attempt to access the /auth/login URL multiple times from a different IP address. 
After reaching the maximum number of allowed retries (maxRetry), Fail2Ban should ban the IP address for the specified banTime. You can check the status of banned IP addresses using the following command: sudo fail2ban-client status auth-login ","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:3:0","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#testing-the-configuration"},{"categories":["Development"],"collections":null,"content":"Conclusion By following these steps, you can configure Fail2Ban to block Laravel authentication attempts and access to the /auth/login URL. This helps enhance the security of your server by automatically banning IP addresses that exhibit suspicious behavior. ","date":"27-11-2015","objectID":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/:4:0","tags":null,"title":"How To Use Fail2Ban To Block Laravel Auth Attempts And Other Auth/Login URL Access","uri":"/posts/development/how-to-use-fail2ban-to-block-laravel-auth-attempts-and-other-authlogin-url-access/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re encountering issues while running Vim with Vundle on Bash in Windows 10, such as the \u0026ldquo;Unknown Command ^M\u0026rdquo; error, NERDTree problems, mouse dragging not working, or font issues, this guide will help you troubleshoot and resolve these issues. ","date":"27-11-2015","objectID":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/:0:0","tags":null,"title":"Troubleshooting Vim Issues on Bash in Windows 10","uri":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/#"},{"categories":["Development"],"collections":null,"content":"1. 
Fixing the \u0026ldquo;Unknown Command ^M\u0026rdquo; Error The \u0026ldquo;Unknown Command ^M\u0026rdquo; error is often caused by inconsistent line endings in your Vim files. To resolve this issue, you can configure Git to use consistent line endings. git config --global core.autocrlf input This command configures Git to use LF (Unix-style) line endings when checking out files. ","date":"27-11-2015","objectID":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/:1:0","tags":null,"title":"Troubleshooting Vim Issues on Bash in Windows 10","uri":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/#1-fixing-the"},{"categories":["Development"],"collections":null,"content":"2. NERDTree Can\u0026rsquo;t Expand Folder and Weird Icons If NERDTree isn\u0026rsquo;t working as expected and you see weird icons, ensure that your .vimrc file is correctly configured. Make sure you have the following settings: .vimrc ... set encoding=utf-8 ... This setting helps with proper character encoding, which may resolve the icon display issue. ","date":"27-11-2015","objectID":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/:2:0","tags":null,"title":"Troubleshooting Vim Issues on Bash in Windows 10","uri":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/#2-nerdtree-can"},{"categories":["Development"],"collections":null,"content":"3. Mouse Can\u0026rsquo;t Drag Split Windows If you\u0026rsquo;re unable to drag split windows with the mouse, you can enable mouse support in Vim by adding the following lines to your .vimrc: .vimrc ... set mouse=a set ttymouse=xterm2 ... These settings enable mouse support within Vim and should allow you to drag split windows with the mouse. 
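Returning to the \u0026ldquo;Unknown Command ^M\u0026rdquo; problem from section 1: the git setting only affects future checkouts, so files that already contain CRLF endings still need cleaning. A minimal sketch using sed (the /tmp path and file contents are just an example):

```shell
# Create a sample Vim config with Windows (CRLF) line endings, then strip
# the trailing carriage returns -- the same ^M characters Vim complains about.
printf 'set number\r\nset ruler\r\n' > /tmp/sample.vim
sed -i 's/\r$//' /tmp/sample.vim

# cat -A shows line endings: plain $ now, instead of ^M$ before the fix.
cat -A /tmp/sample.vim
```

For a single file already open in Vim, running :set fileformat=unix followed by :w should have the same effect.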
","date":"27-11-2015","objectID":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/:3:0","tags":null,"title":"Troubleshooting Vim Issues on Bash in Windows 10","uri":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/#3-mouse-can"},{"categories":["Development"],"collections":null,"content":"4. Setting Font to Consolas To set the font to Consolas in the Command Prompt, you need to modify the Command Prompt settings. Here\u0026rsquo;s how you can do it: Open the Command Prompt. Right-click on the title bar and select \u0026ldquo;Properties.\u0026rdquo; In the \u0026ldquo;Font\u0026rdquo; tab, select \u0026ldquo;Consolas\u0026rdquo; from the list of fonts. Click \u0026ldquo;OK\u0026rdquo; to save the changes. ","date":"27-11-2015","objectID":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/:4:0","tags":null,"title":"Troubleshooting Vim Issues on Bash in Windows 10","uri":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/#4-setting-font-to-consolas"},{"categories":["Development"],"collections":null,"content":"5. Unlinking Homebrew Formulae The command you provided unlinks all installed Homebrew formulae. Be cautious with this command, as it can have unintended consequences. Ensure that you understand the implications before running it. If you want to unlink a specific formula, you can replace $line with the name of the formula you want to unlink. brew list -1 | while read line; do brew unlink $line; done This command reads the list of installed formulae and unlinks each one. By following these troubleshooting steps and ensuring your Vim and Bash configurations are set correctly, you should be able to resolve the issues you\u0026rsquo;re experiencing while using Vim with Vundle on Bash in Windows 10. 
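Because the unlink loop above is destructive, it is worth previewing what it would do first. A dry-run sketch of the same read-loop pattern, with placeholder formula names piped in instead of the output of brew list -1:

```shell
# Echo instead of unlinking to preview the loop's effect.
# Replace the printf with `brew list -1` to preview your real formula list.
printf 'git\nwget\nvim\n' | while read -r line; do
  echo "would unlink: $line"
done
```

Once the preview looks right, swap the echo back to brew unlink to perform the real operation.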
","date":"27-11-2015","objectID":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/:5:0","tags":null,"title":"Troubleshooting Vim Issues on Bash in Windows 10","uri":"/posts/development/troubleshooting-vim-issues-on-bash-in-windows-10/#5-unlinking-homebrew-formulae"},{"categories":["Development"],"collections":null,"content":"Bash configurations on Linux and macOS can be confusing for many people, myself included. I\u0026rsquo;ve written this short guide to remind you and me both of a reasonable set of conventions you could follow. ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:0:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#"},{"categories":["Development"],"collections":null,"content":"Login Shell vs. Non-login Shell When logging in via the console (e.g., an SSH session, the scary console login after you\u0026rsquo;ve messed up your GUI settings, etc.), you are starting a login shell. If you open a terminal application (e.g., xterm, etc.) from your desktop, then you are starting a non-login shell (except on macOS, discussed later). ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:1:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#login-shell-vs-non-login-shell"},{"categories":["Development"],"collections":null,"content":"Linux (Ubuntu specifically) On a clean install of Ubuntu, you\u0026rsquo;ll notice your home directory contains both a .profile and .bashrc file. Starting a login shell executes .profile, and starting a non-login shell executes .bashrc. Notice that inside .profile, you\u0026rsquo;ll find: # if running bash if [ -n \u0026#34;$BASH_VERSION\u0026#34; ]; then # include .bashrc if it exists if [ -f \u0026#34;$HOME/.bashrc\u0026#34; ]; then . 
\u0026#34;$HOME/.bashrc\u0026#34; fi fi This means that login shells execute .profile and then source .bashrc, while non-login shells execute .bashrc only. ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:2:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#linux-ubuntu-specifically"},{"categories":["Development"],"collections":null,"content":"Be Aware (if you add a .bash_profile) You should also be aware that to start a login shell, Bash looks for .bash_profile, .bash_login, and .profile in that order, and it only reads and executes the first one it finds. By default, the first two are not present on Ubuntu. Programs like RVM add a .bash_profile file, so you should be sure to append the following to the added .bash_profile file: [[ -s \u0026#34;${HOME}/.profile\u0026#34; ]] \u0026amp;\u0026amp; source \u0026#34;${HOME}/.profile\u0026#34; Sourcing .profile means that now, every time you start a login shell, .bash_profile is executed, then .profile, and finally .bashrc. Starting a non-login shell will just execute .bashrc as before. ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:3:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#be-aware-if-you-add-a-bash_profile"},{"categories":["Development"],"collections":null,"content":"macOS On a clean install of macOS, you should have a .bashrc file and a .bash_profile file. Unlike most of the Unix/Linux world, macOS terminal applications (e.g., Terminal, iTerm2, etc.) start a login shell. So whether you SSH into a macOS machine or launch a terminal application, Bash will launch as a login shell. While Ubuntu makes use of .profile by default, macOS chose to use .bash_profile (and no .profile file), which takes precedence on the list given above. 
Inside the .bash_profile on macOS, you\u0026rsquo;ll find something like: [[ -s ~/.bashrc ]] \u0026amp;\u0026amp; source ~/.bashrc Just as Ubuntu\u0026rsquo;s .profile sourced .bashrc, macOS\u0026rsquo;s .bash_profile sources .bashrc too. On macOS, whether you log in via a GUI and open a terminal application, SSH in, or log in at a console, you\u0026rsquo;ll be starting a login shell which will execute .bash_profile and then source .bashrc. ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:4:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#macos"},{"categories":["Development"],"collections":null,"content":"Where do I make changes? Whether you use Linux or macOS, any bash-related changes, such as adding aliases, functions, or tweaking the prompt appearance can be appended to .bashrc. If you\u0026rsquo;ve set up sourcing as described above, .bashrc is executed in both login and non-login shells on both Linux and macOS. Another related option is to append: [[ -s \u0026#34;${HOME}/.local.bash\u0026#34; ]] \u0026amp;\u0026amp; source \u0026#34;${HOME}/.local.bash\u0026#34; to .bashrc and then make all further bash customization changes to .local.bash. This seems to be common on company-issued machines since admins don\u0026rsquo;t like users mucking around with .bashrc. If this is the case for you, then make your bash config changes to .local.bash. ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:5:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#where-do-i-make-changes"},{"categories":["Development"],"collections":null,"content":"When NOT to modify .bashrc? As the name implies, .bashrc is for bash configs. Environment variables or other configuration settings should typically be written to .profile on Ubuntu and .bash_profile on macOS. 
A common desire is to extend the PATH variable: # Add path to Python scripts directory PATH=/usr/local/share/python:$PATH On Linux, you would append this path extension to your .profile, unless you\u0026rsquo;ve set up a .bash_profile that sources .profile, which then sources .bashrc (just choose a strategy and be consistent). Logging into your machine again, every terminal session will have the PATH you defined. This happens because .profile (or .bash_profile) is executed at login, before any non-login shells are started. I do most of my PATH modifications when first configuring a machine, so it is no problem to apply changes at the next login. If you need to apply your change now, in your current non-login shell, you could: source ~/.profile # or ~/.bash_profile On macOS, you should add the line to .bash_profile, although you could optionally set up .bash_profile to source .profile and .bashrc for symmetry with Ubuntu and put the line in .profile. ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:6:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#when-not-to-modify-bashrc"},{"categories":["Development"],"collections":null,"content":"Errors [Include details about common errors and how to troubleshoot them here.] Remember, understanding the nuances of these configuration files is essential for customizing your Bash environment effectively on Linux and macOS. By following these conventions, you can keep your shell configurations organized and avoid common pitfalls. ","date":"20-11-2015","objectID":"/posts/development/bash-configurations-demystified/:7:0","tags":null,"title":"Bash Configurations Demystified","uri":"/posts/development/bash-configurations-demystified/#errors"},{"categories":["Development"],"collections":null,"content":"If you prefer working in an xterm rather than the GUI version of Vim, you might encounter some inconveniences. 
One common issue is copying text from Vim within the xterm, which includes line numbers. The GUI version of Vim handles this better, as it selects only the text, leaving out the line numbers. However, you can achieve similar functionality in the xterm version of Vim by adding the following line to your vimrc: :set mouse=a This setting enables mouse support, allowing you to select text without including line numbers. You can also selectively enable mouse support for specific modes by using a different argument in place of \u0026lsquo;a\u0026rsquo;. While you might be more comfortable using the keyboard for most tasks, when it comes to transferring text between X applications or xterms, using the mouse can be more efficient. If you have a modern mouse with a scroll wheel that can also function as a middle mouse button, you can simulate the mouse wheel\u0026rsquo;s behavior in GUI Vim by implementing some mappings in your vimrc and configuring VT100 translations in your .Xresources file. After editing the .Xresources file, make sure to run the following command for the changes to take effect: xrdb -load .Xresources Note that the changes won\u0026rsquo;t affect the currently running xterm; you\u0026rsquo;ll need to open a new one to see the mouse wheel scrolling in action. However, please be aware that the mappings provided in the Vim documentation might not work perfectly for all setups. You might need to adjust them to match your system\u0026rsquo;s configuration. For instance, you may need to replace \u0026lt;M-Esc\u0026gt; with \u0026lt;xCSI\u0026gt; in the mappings. Here\u0026rsquo;s an example: :map \u0026lt;xCSI\u0026gt;[62~ \u0026lt;MouseDown\u0026gt; Once you\u0026rsquo;ve made these adjustments and loaded a large text file, you can enjoy mousewheel scrolling in Vim. References :help 'number' :help 'mouse' :help 'wheel' :help keycodes Comments Pasting into an xterm Vim can be a bit tricky. 
To paste text from your browser into Vim, follow these steps: Select the text in your browser. Go to Vim. Enter insert mode (i or I). Middle-click (button 2) to paste. However, if you don\u0026rsquo;t set paste mode, you might encounter issues with indentation and formatting. To toggle paste mode in Vim, you can use a key binding. Here\u0026rsquo;s an example of how to set it up in your vimrc: \u0026#34; F11 to toggle paste mode map \u0026lt;F11\u0026gt; :set invpaste\u0026lt;CR\u0026gt; set pastetoggle=\u0026lt;F11\u0026gt; This configuration allows you to toggle paste mode with the F11 key, both in insert mode and normal mode. If you want to use Vim in xterm with :set mouse=a, you might notice that mouse middle-click paste no longer works as expected. In this case, use Shift+middle-click to paste the X selection. In summary, you can enjoy the convenience of using the mouse in Vim while working in an xterm by configuring mouse support and adjusting your settings to ensure smooth text copying and pasting. ","date":"19-11-2015","objectID":"/posts/development/using-the-mouse-for-vim-in-an-xterm/:0:0","tags":null,"title":"Using The Mouse For Vim In An Xterm","uri":"/posts/development/using-the-mouse-for-vim-in-an-xterm/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;ve suddenly noticed that the default Bower directory bower_components has changed to src/vendor in your project, there could be a few reasons for this unexpected behavior. One common reason is the presence of a .bowerrc file in your project\u0026rsquo;s parent directory, which might be overriding the default configuration. 
Here\u0026rsquo;s how you can resolve this issue: ","date":"05-09-2015","objectID":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/:0:0","tags":null,"title":"Bower Default Bower Components Suddenly Changed To Src Vendor","uri":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Check for .bowerrc Files First, navigate to your project\u0026rsquo;s root directory and its parent directories to check for any .bowerrc files. These files can contain Bower configuration settings that override the defaults. You may have one in the project root or in any parent directories. ","date":"05-09-2015","objectID":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/:1:0","tags":null,"title":"Bower Default Bower Components Suddenly Changed To Src Vendor","uri":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/#step-1-check-for-bowerrc-files"},{"categories":["Development"],"collections":null,"content":"Step 2: Review .bowerrc Content If you find a .bowerrc file, open it and review its contents. It might look something like this: In this example, the \u0026quot;directory\u0026quot; setting is configured to use src/vendor as the Bower components directory. This setting is what\u0026rsquo;s causing Bower to place components in the src/vendor directory instead of the default bower_components. 
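A .bowerrc that produces the behavior described in Step 2 might look like this (illustrative contents, matching the "directory" setting mentioned above):

```json
{
  "directory": "src/vendor"
}
```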
","date":"05-09-2015","objectID":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/:2:0","tags":null,"title":"Bower Default Bower Components Suddenly Changed To Src Vendor","uri":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/#step-2-review-bowerrc-content"},{"categories":["Development"],"collections":null,"content":"Step 3: Remove or Modify .bowerrc To revert to the default behavior and have Bower components installed in the bower_components directory, you can either remove the .bowerrc file or modify it as follows: Remove .bowerrc: If you don\u0026rsquo;t need any custom Bower configuration and want to use the default settings, simply delete the .bowerrc file. Modify .bowerrc: If you have a specific reason for using a custom directory like src/vendor, you can modify the .bowerrc file to specify a different directory, or you can change it back to the default bower_components: ","date":"05-09-2015","objectID":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/:3:0","tags":null,"title":"Bower Default Bower Components Suddenly Changed To Src Vendor","uri":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/#step-3-remove-or-modify-bowerrc"},{"categories":["Development"],"collections":null,"content":"Step 4: Reinstall Bower Components After making changes to the .bowerrc file or removing it, you should reinstall your Bower components to ensure they are placed in the correct directory. You can do this by running the following command in your project\u0026rsquo;s root directory: bower install This command will read the updated or removed .bowerrc file and install the components accordingly. By following these steps, you should be able to resolve the issue of Bower components being placed in the src/vendor directory and return to the default bower_components directory. 
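If you take the modify route, a .bowerrc that restores the default location would read:

```json
{
  "directory": "bower_components"
}
```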
","date":"05-09-2015","objectID":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/:4:0","tags":null,"title":"Bower Default Bower Components Suddenly Changed To Src Vendor","uri":"/posts/development/bower-default-bowercomponents-suddenly-changed-to-src-vendor/#step-4-reinstall-bower-components"},{"categories":["Development"],"collections":null,"content":"When working in an SSH session inside a terminal multiplexer such as GNU screen (where Ctrl + A is the default prefix), you can navigate and scroll up and down on the screen using a combination of keyboard shortcuts and Vim-like commands. Here\u0026rsquo;s how to do it: Ctrl + A, ESC: This combination is used to enter the \u0026ldquo;copy mode\u0026rdquo; of the multiplexer. Pressing Ctrl + A followed by ESC allows you to navigate and scroll using Vim-like keybindings. Vim-like Commands: Once you are in copy mode, you can use the following Vim-like commands to navigate and scroll: Ctrl + U: Scroll half a page up. Ctrl + D: Scroll half a page down. Ctrl + B: Scroll a full page up (similar to Page Up key). Ctrl + F: Scroll a full page down (similar to Page Down key). Arrow keys (Up, Down, Left, Right) can be used to move the cursor position. v: Enter visual mode to select text. y: Copy (yank) the selected text. q: Quit copy mode. Here\u0026rsquo;s a breakdown of how to use these commands: Press Ctrl + A followed by ESC to enter copy mode. Use the arrow keys to move the cursor to the desired starting position. Press v to enter visual mode. Move the cursor to select text (similar to how you would do it in Vim). Once the text is selected, press y to copy it. You can now use Ctrl + U, Ctrl + D, Ctrl + B, or Ctrl + F to scroll up or down to the desired location. To paste the copied text, right-click or use Ctrl + Shift + V (paste keyboard shortcut). These commands and shortcuts can be very useful when you need to navigate and scroll through large amounts of text in an SSH session or terminal. 
They provide a more efficient way to move around and copy text compared to using the mouse or traditional scrolling methods. ","date":"06-06-2015","objectID":"/posts/development/navigate-scroll-up-and-down-on-screen-ssh/:0:0","tags":null,"title":"Navigate Scroll Up And Down On Screen SSH","uri":"/posts/development/navigate-scroll-up-and-down-on-screen-ssh/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Cron jobs are an essential part of automating tasks on a Unix-like operating system. However, occasionally, you may encounter issues where a cron job hangs or fails to run as expected. In this blog post, we\u0026rsquo;ll address a specific scenario where a cron job hangs in the APT script on Ubuntu and provide a solution to resolve it. ","date":"03-05-2015","objectID":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/:0:0","tags":["linux","apt","cron"],"title":"Troubleshooting a Hanging Cron Job in Ubuntu's APT Script","uri":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 1: Locate the APT Cron Job Script The APT script is responsible for automatically updating the package lists and installing updates on Ubuntu systems. 
To begin troubleshooting, open a terminal and run the following command to locate the script: sudo vim /etc/cron.daily/apt ","date":"03-05-2015","objectID":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/:1:0","tags":["linux","apt","cron"],"title":"Troubleshooting a Hanging Cron Job in Ubuntu's APT Script","uri":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/#step-1-locate-the-apt-cron-job-script"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 2: Adjusting the RandomSleep Setting Within the APT cron script, there is a parameter called RandomSleep, which adds a random delay before executing the script. In some cases, this delay can cause the cron job to hang indefinitely. To resolve this issue, follow these steps: Once the APT script is open in the Vim editor, use the arrow keys to navigate to the line that contains the RandomSleep parameter. Change the value of RandomSleep from the default setting (e.g., RandomSleep=1800) to 0, which will eliminate the random delay. Save the changes and exit the Vim editor by pressing the Esc key, typing :wq, and pressing Enter. ","date":"03-05-2015","objectID":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/:2:0","tags":["linux","apt","cron"],"title":"Troubleshooting a Hanging Cron Job in Ubuntu's APT Script","uri":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/#step-2-adjusting-the-randomsleep-setting"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 3: Testing the Modified Cron Job Now that the RandomSleep value has been set to 0, it\u0026rsquo;s time to test whether the cron job executes properly without hanging. Open a terminal and run the following command to manually trigger the daily cron job: sudo run-parts /etc/cron.daily Monitor the output to ensure that the APT script executes without any issues and completes successfully. 
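As an alternative to editing the script itself, the apt cron script on Ubuntu reads its delay from apt's periodic configuration, so the same effect can usually be achieved with a drop-in file (the file name 99randomsleep is an assumed example; verify the option on your release):

```
// /etc/apt/apt.conf.d/99randomsleep (assumed drop-in name)
APT::Periodic::RandomSleep "0";
```

A drop-in survives package updates that would overwrite direct edits to /etc/cron.daily/apt.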
If the script hangs again, further investigation may be required. ","date":"03-05-2015","objectID":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/:3:0","tags":["linux","apt","cron"],"title":"Troubleshooting a Hanging Cron Job in Ubuntu's APT Script","uri":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/#step-3-testing-the-modified-cron-job"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Conclusion By modifying the RandomSleep value in the APT cron job script, you can eliminate the random delay that may cause the script to hang. This ensures that the APT script runs smoothly and completes its tasks without any interruptions. Remember to periodically check for system updates manually or configure an appropriate schedule for the APT cron job to keep your Ubuntu system up to date. Note: It\u0026rsquo;s important to exercise caution when modifying system scripts. Make sure you understand the implications of the changes and backup any critical files before proceeding. ","date":"03-05-2015","objectID":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/:4:0","tags":["linux","apt","cron"],"title":"Troubleshooting a Hanging Cron Job in Ubuntu's APT Script","uri":"/posts/development/troubleshooting-a-hanging-cron-job-in-ubuntu-s-apt-script/#conclusion"},{"categories":["Development"],"collections":null,"content":"Introduction In this article, we will create a simple script and a cron job to automatically check the connection status of your wireless network (Wi-Fi) and restart the Network Manager service if the connection is down. This can be particularly useful to ensure that your network remains stable and connected, especially in situations where the Wi-Fi connection tends to drop or become unreliable. 
","date":"03-05-2015","objectID":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/:1:0","tags":null,"title":"Auto Check Connection And Restart Network Manager If Down","uri":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/#introduction"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, ensure that you have: Root Access: You need root or sudo access to create and modify system files and services. ","date":"03-05-2015","objectID":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/:1:1","tags":null,"title":"Auto Check Connection And Restart Network Manager If Down","uri":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Create a Cron Job First, let\u0026rsquo;s create a cron job that will periodically run the connection check script. sudo vim /etc/cron.d/checkconnection Add the following line to the file: * * * * * root /usr/sbin/checkconnection This line schedules the checkconnection script to run every minute. You can adjust the timing according to your preference by modifying the * * * * * part. The format is minute hour day month day-of-week. Save the file and exit your text editor. ","date":"03-05-2015","objectID":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/:1:2","tags":null,"title":"Auto Check Connection And Restart Network Manager If Down","uri":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/#step-1-create-a-cron-job"},{"categories":["Development"],"collections":null,"content":"Step 2: Create the Connection Check Script Now, let\u0026rsquo;s create the checkconnection script. 
sudo vim /usr/sbin/checkconnection Add the following content to the script: #!/bin/bash if /sbin/iwconfig wlan0 | grep -o \u0026#34;Access Point: Not-Associated\u0026#34; then sudo service network-manager restart echo \u0026#34;Network Manager Restarted!\u0026#34; fi Here\u0026rsquo;s what this script does: It checks the status of the wireless network interface (wlan0) using iwconfig. If the \u0026ldquo;Access Point: Not-Associated\u0026rdquo; message is found (indicating that the Wi-Fi is not connected to an access point), it restarts the Network Manager service. It also prints a message to the console to indicate that Network Manager has been restarted. Save the file and exit your text editor. ","date":"03-05-2015","objectID":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/:1:3","tags":null,"title":"Auto Check Connection And Restart Network Manager If Down","uri":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/#step-2-create-the-connection-check-script"},{"categories":["Development"],"collections":null,"content":"Step 3: Make the Script Executable Before the script can be executed, make it executable using the following command: sudo chmod +x /usr/sbin/checkconnection This command grants execute permission to the script. ","date":"03-05-2015","objectID":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/:1:4","tags":null,"title":"Auto Check Connection And Restart Network Manager If Down","uri":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/#step-3-make-the-script-executable"},{"categories":["Development"],"collections":null,"content":"Conclusion With the cron job and the connection check script in place, your system will automatically monitor the Wi-Fi connection and restart the Network Manager if it detects that the connection is down. 
This ensures a more stable network connection and can be especially helpful in situations where you rely on a wireless network that occasionally drops out. Remember to adjust the cron schedule according to your needs. For more frequent checks, you can decrease the time interval, and for less frequent checks, increase it. ","date":"03-05-2015","objectID":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/:1:5","tags":null,"title":"Auto Check Connection And Restart Network Manager If Down","uri":"/posts/development/auto-check-connection-and-restart-network-manager-if-down/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re encountering Dynclient timeout issues with DDClient and want to set up email notifications for failures, you can follow these steps to resolve the problem: ","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:0:0","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Open DDClient Configuration File Use a text editor to open the DDClient configuration file. 
You can use the vim editor, for example: sudo vim /etc/ddclient/ddclient.conf ","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:1:0","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#step-1-open-ddclient-configuration-file"},{"categories":["Development"],"collections":null,"content":"Step 2: Modify the Configuration Inside the ddclient.conf file, you\u0026rsquo;ll need to add or replace the web parameter so that it points to the correct URL for obtaining your external IP address. This can be done as follows: web=dynamicdns.park-your-domain.com/getip Make sure this line is set to the URL for your dynamic DNS service. Replace dynamicdns.park-your-domain.com/getip with the actual URL provided by your DNS service provider. ","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:2:0","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#step-2-modify-the-configuration"},{"categories":["Development"],"collections":null,"content":"Step 3: Save and Exit Save your changes and exit the text editor. In vim, you can do this by pressing Esc, then typing :wq and pressing Enter. 
","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:3:0","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#step-3-save-and-exit"},{"categories":["Development"],"collections":null,"content":"Step 4: Configure Email Notifications To set up email notifications for DDClient failures, you\u0026rsquo;ll need to edit the configuration file for your email notifications. Depending on your Linux distribution and email setup, this can vary. Below is a general example using the ssmtp email service: ","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:4:0","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#step-4-configure-email-notifications"},{"categories":["Development"],"collections":null,"content":"Install ssmtp (if not already installed) sudo apt-get install ssmtp ","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:4:1","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#install-ssmtp-if-not-already-installed"},{"categories":["Development"],"collections":null,"content":"Edit the ssmtp configuration sudo vim /etc/ssmtp/ssmtp.conf Add the following lines to configure the email settings: root=postmaster mailhub=your-mail-server.com:port AuthUser=your-email@gmail.com AuthPass=your-email-password UseTLS=YES UseSTARTTLS=YES Replace the placeholders with your actual email server information and 
credentials. ","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:4:2","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#edit-the-ssmtp-configuration"},{"categories":["Development"],"collections":null,"content":"Step 5: Configure DDClient to Send Email on Failure Now, you\u0026rsquo;ll need to configure DDClient to send an email when it encounters a failure. Open the DDClient configuration again: sudo vim /etc/ddclient/ddclient.conf Add the following line to the ddclient.conf file to specify the failure notification address (the option is named mail-failure in common ddclient versions; check man ddclient for yours): mail-failure=username@example.com Replace username@example.com with the email address where you want to receive failure notifications. ","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:5:0","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#step-5-configure-ddclient-to-send-email-on-failure"},{"categories":["Development"],"collections":null,"content":"Step 6: Restart DDClient After making these changes, you should restart DDClient to apply the new configuration: sudo service ddclient restart Now, DDClient should use the correct URL for obtaining your external IP address and send email notifications in case of failures. Make sure to periodically check your email for notifications to stay informed about any issues with your dynamic DNS updates. 
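Putting Steps 2 and 5 together, the relevant ddclient.conf fragment might end up as follows (the URL and address are illustrative; ddclient documents the failure-notification option as mail-failure in common versions):

```
# /etc/ddclient/ddclient.conf (illustrative fragment)
web=dynamicdns.park-your-domain.com/getip
mail-failure=username@example.com
```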
","date":"03-05-2015","objectID":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/:6:0","tags":null,"title":"How To Resolve Dynclient Timeout In DDClient And Email Failure Notifications","uri":"/posts/development/how-to-resolve-dynclient-timeout-in-ddclient-and-email-failure-notifications/#step-6-restart-ddclient"},{"categories":["Development"],"collections":null,"content":"It looks like you\u0026rsquo;re configuring some iptables rules for routing and forwarding traffic between two networks with specific IP address ranges. These rules are designed to allow traffic to flow between the \u0026ldquo;wi.red.net.work\u0026rdquo; network and the \u0026ldquo;wire.less.net.work\u0026rdquo; network through two interfaces, \u0026ldquo;eth0\u0026rdquo; and \u0026ldquo;wlan0.\u0026rdquo; Here\u0026rsquo;s a breakdown of the rules you\u0026rsquo;ve provided: The first rule: iptables -I FORWARD -i eth0 -o wlan0 -s wi.red.net.work/24 -d wire.less.net.work/24 -j ACCEPTThis rule allows traffic coming from the \u0026ldquo;wi.red.net.work\u0026rdquo; network (source) going to the \u0026ldquo;wire.less.net.work\u0026rdquo; network (destination) to be forwarded from the \u0026ldquo;eth0\u0026rdquo; interface to the \u0026ldquo;wlan0\u0026rdquo; interface. The -j ACCEPT part at the end indicates that this traffic should be accepted and forwarded. The second rule: iptables -I FORWARD -i wlan0 -o eth0 -s wire.less.net.work/24 -d wi.red.net.work/24 -j ACCEPTThis rule allows traffic coming from the \u0026ldquo;wire.less.net.work\u0026rdquo; network (source) going to the \u0026ldquo;wi.red.net.work\u0026rdquo; network (destination) to be forwarded from the \u0026ldquo;wlan0\u0026rdquo; interface to the \u0026ldquo;eth0\u0026rdquo; interface. Like the first rule, -j ACCEPT is used to accept and forward this traffic. 
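With concrete subnets substituted for the placeholder names, the pair of rules would look like this (192.0.2.0/24 and 198.51.100.0/24 are documentation subnets standing in for the wired and wireless networks; iptables expects CIDR networks here, not hostnames):

```shell
iptables -I FORWARD -i eth0 -o wlan0 -s 192.0.2.0/24 -d 198.51.100.0/24 -j ACCEPT
iptables -I FORWARD -i wlan0 -o eth0 -s 198.51.100.0/24 -d 192.0.2.0/24 -j ACCEPT
```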
These rules are commonly used in a Linux firewall configuration to allow traffic to pass through the system from one network to another while ensuring that only specific traffic, defined by the source and destination IP addresses and interfaces, is permitted. Be sure to adjust the IP addresses and interfaces to match your specific network configuration. ","date":"03-05-2015","objectID":"/posts/development/routing-forward-ip/:0:0","tags":null,"title":"Routing Forward IP","uri":"/posts/development/routing-forward-ip/#"},{"categories":["Development"],"collections":null,"content":"Published on August 11, 2014 In this simple tutorial, we will guide you on how to map a network drive, using a Windows share as an example, onto Ubuntu 14.04 LTS with read and write permissions permanently. We will be performing all actions in a terminal window. If you\u0026rsquo;re not familiar with Linux commands, don\u0026rsquo;t worry; just paste the provided commands into the terminal and hit enter to execute them. We will also include screenshots to make the process clearer. ","date":"02-05-2015","objectID":"/posts/development/how-to-map-a-network-drive-onto-ubuntu-1404-permanently/:0:0","tags":null,"title":"How to Map a Network Drive onto Ubuntu 14.04 Permanently","uri":"/posts/development/how-to-map-a-network-drive-onto-ubuntu-1404-permanently/#"},{"categories":["Development"],"collections":null,"content":"Preparation: Before we can start mounting using cifs, we need to perform some preliminary actions. Open a terminal by pressing Ctrl+Alt+T on your keyboard. Paste the following command to create a mount point (you can replace \u0026lsquo;Ji-share\u0026rsquo; with your preferred name): sudo mkdir /media/Ji-share Install cifs-utils, which provides support for cross-platform file sharing with Microsoft Windows, OS X, and other Unix systems. 
You can install it from the Ubuntu Software Center or by running the following command: sudo apt-get install cifs-utils Edit the /etc/nsswitch.conf file: sudo gedit /etc/nsswitch.conf Find the line that looks like: hosts: files mdns4_minimal [NOTFOUND=return] dns Change it to: hosts: files mdns4_minimal [NOTFOUND=return] wins dns Run the following command to allow your Ubuntu system to resolve Windows computer names on a DHCP network: sudo apt-get install libnss-winbind winbind After this, reboot your Ubuntu system or restart your network. ","date":"02-05-2015","objectID":"/posts/development/how-to-map-a-network-drive-onto-ubuntu-1404-permanently/:1:0","tags":null,"title":"How to Map a Network Drive onto Ubuntu 14.04 Permanently","uri":"/posts/development/how-to-map-a-network-drive-onto-ubuntu-1404-permanently/#preparation"},{"categories":["Development"],"collections":null,"content":"Mount (Map) Network Drive: Now, we will edit the fstab file to mount the network share on startup. First, create a backup of the fstab file: sudo cp /etc/fstab /etc/fstab_old If you ever need to restore your backup, you can run: sudo mv /etc/fstab_old /etc/fstab Create a credentials file using the following command: gedit ~/.smbcredentials Inside this file, insert the username and password for accessing the remote share, replacing the placeholder values below with your own credentials. Save the file. 
username=YourUsername password=YourPassword Run the following command to get your gid and uid, replacing \u0026ldquo;YourUsername\u0026rdquo; with your username: id YourUsername Now, edit the fstab file by running the command: sudo gedit /etc/fstab Add the following line to the end of the file, replacing the placeholders with your specific information: //192.168.1.5/share /media/Ji-share cifs credentials=/home/YourUsername/.smbcredentials,iocharset=utf8,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0 Finally, run the following command in the terminal to mount the network share: sudo mount -a This will map the network share, and you will be able to access it in the Unity Launcher and Nautilus file browser. That\u0026rsquo;s it! You have successfully mapped a network drive onto Ubuntu 14.04 LTS with permanent read and write permissions. ","date":"02-05-2015","objectID":"/posts/development/how-to-map-a-network-drive-onto-ubuntu-1404-permanently/:2:0","tags":null,"title":"How to Map a Network Drive onto Ubuntu 14.04 Permanently","uri":"/posts/development/how-to-map-a-network-drive-onto-ubuntu-1404-permanently/#mount-map-network-drive"},{"categories":["Development"],"collections":null,"content":"When it comes to sending email notifications from a server, it\u0026rsquo;s important to convey information in a clear and recognizable manner. In this article, we\u0026rsquo;ll explore how to send a Postfix email with a custom \u0026ldquo;From\u0026rdquo; name using the -r option. This will allow us to define a sender name that helps recipients easily identify the source of the email. We\u0026rsquo;ll go through the process step by step, explaining each component and its significance. 
","date":"02-05-2015","objectID":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/:0:0","tags":null,"title":"How to Send Postfix Email with a Custom Form Name using the -r Option","uri":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before we begin, ensure you have the following: A Unix-like operating system (e.g., Linux) Postfix installed and configured on your server Basic command-line knowledge ","date":"02-05-2015","objectID":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/:1:0","tags":null,"title":"How to Send Postfix Email with a Custom Form Name using the -r Option","uri":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/#prerequisites"},{"categories":["Development"],"collections":null,"content":"The Command Explained The command below uses the echo command to generate the content of the email body. It includes information such as the IP address ($IP), hostname ($HOSTNAME), and current date and time ($NOW). The mailx command is then used to send the email. Note that -s must be immediately followed by its subject argument. Here\u0026rsquo;s the breakdown of the command: echo \u0026#39;Someone from \u0026#39;$IP\u0026#39; logged into \u0026#39;$HOSTNAME\u0026#39; on \u0026#39;$NOW\u0026#39;.\u0026#39; | mailx -s \u0026#39;SSH Login Notification\u0026#39; -r \u0026#39;EXAMPLE-PC\u0026#39; john@example.com echo '...': This part of the command generates the content of the email body. It uses the variables $IP, $HOSTNAME, and $NOW to incorporate the relevant information into the message. mailx -s 'SSH Login Notification' -r 'EXAMPLE-PC' john@example.com: mailx: The command-line utility used to send emails. -s: Specifies the subject of the email. -r 'EXAMPLE-PC': This is where the customization takes place. The -r flag is used to set the sender\u0026rsquo;s name. 
In this case, 'EXAMPLE-PC' is the custom sender name you want to use. 'SSH Login Notification': The subject of the email. john@example.com: The recipient\u0026rsquo;s email address. ","date":"02-05-2015","objectID":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/:2:0","tags":null,"title":"How to Send Postfix Email with a Custom Form Name using the -r Option","uri":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/#the-command-explained"},{"categories":["Development"],"collections":null,"content":"Converting to Postfix-Compatible Format To ensure the command is compatible with the Postfix email system, make sure you format it correctly. The -r flag might not be recognized by all versions of mailx. As an alternative, you can use the -a flag to add a \u0026ldquo;From\u0026rdquo; header with the desired sender name. Here\u0026rsquo;s the adjusted command: echo \u0026#39;Someone from \u0026#39;$IP\u0026#39; logged into \u0026#39;$HOSTNAME\u0026#39; on \u0026#39;$NOW\u0026#39;.\u0026#39; | mailx -s \u0026#39;SSH Login Notification\u0026#39; -a \u0026#39;From: EXAMPLE-PC\u0026#39; john@example.com In this version of the command, we use the -a flag to add a custom \u0026ldquo;From\u0026rdquo; header with the sender name 'EXAMPLE-PC'. ","date":"02-05-2015","objectID":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/:3:0","tags":null,"title":"How to Send Postfix Email with a Custom Form Name using the -r Option","uri":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/#converting-to-postfix-compatible-format"},{"categories":["Development"],"collections":null,"content":"Conclusion By using the -r or -a option in the mailx command, you can customize the sender name for the email notifications you send from your server. This small adjustment can greatly enhance the clarity and recognition of your email communications. 
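To sanity-check the message interpolation without sending any mail, the echo half of the pipeline can be run on its own (the variable values here are stand-ins for what your login hook would provide):

```shell
# Stand-in values; in the real notification these come from the environment
IP="203.0.113.7"
HOSTNAME="example-pc"
NOW="2015-05-02 10:00"
# Same string construction as the notification command, minus mailx
echo 'Someone from '$IP' logged into '$HOSTNAME' on '$NOW'.'
```

Once the printed message looks right, append the mailx stage to actually send it.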
","date":"02-05-2015","objectID":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/:4:0","tags":null,"title":"How to Send Postfix Email with a Custom Form Name using the -r Option","uri":"/posts/development/how-to-send-postfix-email-with-a-custom-form-name-using-the-r-option/#conclusion"},{"categories":["Development"],"collections":null,"content":"Sharing folders using Samba on a Unity desktop environment is a convenient way to enable file sharing between Linux and Windows systems. Samba is an open-source software suite that provides seamless integration between Linux/Unix servers and Windows clients. Here\u0026rsquo;s a step-by-step guide on how to share a folder using Samba in the Unity desktop environment: ","date":"02-05-2015","objectID":"/posts/development/sharing-a-folder-with-samba-on-unity/:0:0","tags":null,"title":"Sharing a Folder with Samba on Unity","uri":"/posts/development/sharing-a-folder-with-samba-on-unity/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Install Samba If you haven\u0026rsquo;t already, you need to install Samba on your system. Open a terminal and run the following command to install Samba: sudo apt-get install samba ","date":"02-05-2015","objectID":"/posts/development/sharing-a-folder-with-samba-on-unity/:0:1","tags":null,"title":"Sharing a Folder with Samba on Unity","uri":"/posts/development/sharing-a-folder-with-samba-on-unity/#step-1-install-samba"},{"categories":["Development"],"collections":null,"content":"Step 2: Configure the Samba Share Create the Shared Folder: Right-click on the folder you want to share in the Unity file manager and select \u0026ldquo;Local Share\u0026rdquo;. This will open the folder\u0026rsquo;s properties dialog. Sharing Options: In the properties dialog, navigate to the \u0026ldquo;Local Network Share\u0026rdquo; tab or similar. Here you can configure the sharing options for the folder. 
Share Name: Give your share a name that will be visible to other devices on the network. Guest Access: You can choose to allow guest access or require a username and password. Permissions: Configure the permissions for the shared folder, specifying who can read or write to it. ","date":"02-05-2015","objectID":"/posts/development/sharing-a-folder-with-samba-on-unity/:0:2","tags":null,"title":"Sharing a Folder with Samba on Unity","uri":"/posts/development/sharing-a-folder-with-samba-on-unity/#step-2-configure-the-samba-share"},{"categories":["Development"],"collections":null,"content":"Step 3: Add Samba User Create a Samba User: To allow access to the shared folder, you need to create a Samba user. In the terminal, use the following command: sudo smbpasswd -a username Replace \u0026ldquo;username\u0026rdquo; with the actual username you want to use for Samba access. Set Password: You will be prompted to set a password for the Samba user. This password will be used when accessing the shared folder from other devices. ","date":"02-05-2015","objectID":"/posts/development/sharing-a-folder-with-samba-on-unity/:0:3","tags":null,"title":"Sharing a Folder with Samba on Unity","uri":"/posts/development/sharing-a-folder-with-samba-on-unity/#step-3-add-samba-user"},{"categories":["Development"],"collections":null,"content":"Step 4: Restart Samba Service After making these configurations, it\u0026rsquo;s recommended to restart the Samba service to apply the changes. 
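For reference, the options in that dialog map onto an ordinary Samba share definition. On Unity the GUI typically goes through the usershare mechanism rather than editing smb.conf directly, but an equivalent hand-written stanza in /etc/samba/smb.conf would look roughly like this (share name and path are placeholders):

```ini
[sharename]
   path = /home/user/shared-folder
   comment = Shared folder
   read only = no
   guest ok = no         ; set to yes to allow guest access
   browseable = yes
```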
Run the following command in the terminal: sudo service smbd restart ","date":"02-05-2015","objectID":"/posts/development/sharing-a-folder-with-samba-on-unity/:0:4","tags":null,"title":"Sharing a Folder with Samba on Unity","uri":"/posts/development/sharing-a-folder-with-samba-on-unity/#step-4-restart-samba-service"},{"categories":["Development"],"collections":null,"content":"Accessing the Shared Folder from Other Devices Once the folder is shared using Samba, you can access it from other devices on the same network. On Windows, you can use the file explorer and enter the path to the shared folder (e.g., \\\\\u0026lt;Linux-IP\u0026gt;\\sharename). You\u0026rsquo;ll be prompted to enter the Samba username and password you created earlier. Remember that Samba configuration can vary based on your specific system and network setup. These steps should provide a general guideline for sharing a folder using Samba in the Unity desktop environment. ","date":"02-05-2015","objectID":"/posts/development/sharing-a-folder-with-samba-on-unity/:0:5","tags":null,"title":"Sharing a Folder with Samba on Unity","uri":"/posts/development/sharing-a-folder-with-samba-on-unity/#accessing-the-shared-folder-from-other-devices"},{"categories":["Development"],"collections":null,"content":"You can enhance your SSH experience by automatically starting a screen or byobu session when you log in via SSH. This can help you maintain your sessions, especially when working on remote servers. Here\u0026rsquo;s how to set it up: Edit Your ~/.bashrc File: Open your ~/.bashrc file for editing using your preferred text editor. You can use a command like nano ~/.bashrc or vim ~/.bashrc. Add the following code snippet to the end of your ~/.bashrc file: #====================================================================== # Auto-screen invocation. 
see: http://taint.org/wk/RemoteLoginAutoScreen # if we\u0026#39;re coming from a remote SSH connection, in an interactive session # then automatically put us into a screen(1) or byobu session. Only try once # -- if $STARTED_SCREEN is set, don\u0026#39;t try it again, to avoid looping # if screen or byobu fails for some reason. if [ \u0026#34;$PS1\u0026#34; != \u0026#34;\u0026#34; -a \u0026#34;${STARTED_SCREEN:-x}\u0026#34; = x -a \u0026#34;${SSH_TTY:-x}\u0026#34; != x ] then STARTED_SCREEN=1 ; export STARTED_SCREEN [ -d $HOME/lib/screen-logs ] || mkdir -p $HOME/lib/screen-logs sleep 1 # Try starting byobu first, if available if command -v byobu \u0026amp;\u0026gt;/dev/null; then byobu \u0026amp;\u0026amp; clear \u0026amp;\u0026amp; exit 0 fi # If byobu is not available, try starting screen screen -x \u0026amp;\u0026amp; clear \u0026amp;\u0026amp; exit 0 if [ \u0026#34;$?\u0026#34; != \u0026#34;0\u0026#34; ]; then screen \u0026amp;\u0026amp; clear \u0026amp;\u0026amp; exit 0 fi # Normally, execution of this rc script ends here... echo \u0026#34;Screen or byobu failed! Continuing with normal bash startup.\u0026#34; fi # [end of auto-screen snippet] # ====================================================================== Save and Exit: After adding the code, save the changes to your ~/.bashrc file and exit the text editor. Apply Changes: To apply the changes to your current session without logging out, you can run the following command: source ~/.bashrc Using Screen or Byobu: When you SSH into your server in the future, it will automatically start a screen or byobu session if it\u0026rsquo;s an interactive SSH session. To disconnect from your SSH session while inside screen or byobu, use the following key combination: Ctrl-a followed by d. This will detach your session, allowing you to reattach to it later. 
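After reconnecting, you can verify that the snippet worked: screen exports STY and byobu/tmux export TMUX in the shells they spawn, so a quick check (a minimal sketch) is:

```shell
# screen sets $STY, and byobu/tmux set $TMUX, in their child shells.
if [ -n "${STY:-}" ] || [ -n "${TMUX:-}" ]; then
  session_state="inside a screen/byobu session"
else
  session_state="plain shell"
fi
echo "$session_state"
```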
Now, every time you SSH into your server, you will be placed into a screen or byobu session automatically, providing you with a more resilient and flexible terminal environment. ","date":"20-04-2015","objectID":"/posts/development/automatic-screen-for-ssh-login/:0:0","tags":null,"title":"Automatic Screen For SSH Login","uri":"/posts/development/automatic-screen-for-ssh-login/#"},{"categories":["Development"],"collections":null,"content":"When you need to modify file permissions recursively in a directory, you can use the chmod command along with the find command in Linux. Here are some commonly used commands to give different permissions to directories and files: ","date":"18-04-2015","objectID":"/posts/development/chmod-directories-or-files-only/:0:0","tags":null,"title":"CHMOD Directories Or Files Only","uri":"/posts/development/chmod-directories-or-files-only/#"},{"categories":["Development"],"collections":null,"content":"Recursively Give Directories Read \u0026amp; Execute Privileges To recursively give directories read and execute privileges (755), you can use the following command: find /path/to/base/dir -type d -exec chmod 755 {} + This command finds all directories under /path/to/base/dir and sets their permissions to 755, which gives the owner read, write, and execute access, and the group and others read and execute access. Alternatively, you can apply the permissions with a single chmod invocation via command substitution: chmod 755 $(find /path/to/base/dir -type d) Note that the -exec ... + form above already batches many paths into each chmod process, and the substitution form breaks on directory names containing spaces, so use it only on well-behaved trees. 
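Before running either variant against real data, the pattern can be exercised safely in a scratch tree; a small self-contained demo (paths are throwaway mktemp directories, and stat -c assumes GNU coreutils):

```shell
# Build a scratch tree, apply the find/chmod pattern, verify with GNU stat.
base=$(mktemp -d)
mkdir -p "$base/a/b"
find "$base" -type d -exec chmod 755 {} +
dperm=$(stat -c '%a' "$base/a/b")   # octal mode of the nested directory
echo "$dperm"
rm -rf "$base"
```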
","date":"18-04-2015","objectID":"/posts/development/chmod-directories-or-files-only/:1:0","tags":null,"title":"CHMOD Directories Or Files Only","uri":"/posts/development/chmod-directories-or-files-only/#recursively-give-directories-read--execute-privileges"},{"categories":["Development"],"collections":null,"content":"Recursively Give Files Read Privileges To recursively give files read privileges (644), you can use the following command: find /path/to/base/dir -type f -exec chmod 644 {} + This command finds all files under /path/to/base/dir and sets their permissions to 644, which allows read access for the owner and read-only access for the group and others. Again, if you have many files to process and want to optimize the command, you can use: chmod 644 $(find /path/to/base/dir -type f) This command finds all files and sets their permissions in a single chmod operation. ","date":"18-04-2015","objectID":"/posts/development/chmod-directories-or-files-only/:2:0","tags":null,"title":"CHMOD Directories Or Files Only","uri":"/posts/development/chmod-directories-or-files-only/#recursively-give-files-read-privileges"},{"categories":["Development"],"collections":null,"content":"Alternative Approach to Reduce chmod Spawning If you want to further reduce the spawning of chmod processes and handle filenames with spaces or special characters correctly, you can use the xargs command with the -0 option, which handles null-terminated strings. 
Here\u0026rsquo;s how to do it: ","date":"18-04-2015","objectID":"/posts/development/chmod-directories-or-files-only/:3:0","tags":null,"title":"CHMOD Directories Or Files Only","uri":"/posts/development/chmod-directories-or-files-only/#alternative-approach-to-reduce-chmod-spawning"},{"categories":["Development"],"collections":null,"content":"For Directories find /path/to/base/dir -type d -print0 | xargs -0 chmod 755 ","date":"18-04-2015","objectID":"/posts/development/chmod-directories-or-files-only/:3:1","tags":null,"title":"CHMOD Directories Or Files Only","uri":"/posts/development/chmod-directories-or-files-only/#for-directories"},{"categories":["Development"],"collections":null,"content":"For Files find /path/to/base/dir -type f -print0 | xargs -0 chmod 644 This approach is more efficient when dealing with a large number of files or directories and ensures that filenames with spaces or special characters are handled correctly. ","date":"18-04-2015","objectID":"/posts/development/chmod-directories-or-files-only/:3:2","tags":null,"title":"CHMOD Directories Or Files Only","uri":"/posts/development/chmod-directories-or-files-only/#for-files"},{"categories":["Development"],"collections":null,"content":"Servers are not immune to security threats, especially when it comes to SSH (Secure Shell) access. Brute force attacks can compromise your server\u0026rsquo;s security. Fail2Ban is a tool that automatically defends your virtual private server (VPS) by monitoring log files and responding to malicious behavior. In this guide, we\u0026rsquo;ll walk you through setting up Fail2Ban on Ubuntu 12.04 to protect your SSH access. 
","date":"18-04-2015","objectID":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/:0:0","tags":null,"title":"How To Protect SSH With Fail2Ban On Ubuntu 12.04","uri":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Install Fail2Ban First, you need to install Fail2Ban using apt-get: ```bash sudo apt-get install fail2ban ","date":"18-04-2015","objectID":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/:1:0","tags":null,"title":"How To Protect SSH With Fail2Ban On Ubuntu 12.04","uri":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/#step-1-install-fail2ban"},{"categories":["Development"],"collections":null,"content":"Step 2: Copy the Configuration File The default Fail2Ban configuration file is located at /etc/fail2ban/jail.conf, but you should not make changes directly to this file. Instead, create a local copy: sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local You will configure Fail2Ban in the jail.local file. ","date":"18-04-2015","objectID":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/:2:0","tags":null,"title":"How To Protect SSH With Fail2Ban On Ubuntu 12.04","uri":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/#step-2-copy-the-configuration-file"},{"categories":["Development"],"collections":null,"content":"Step 3: Configure Defaults in jail.local Open the jail.local configuration file: sudo nano /etc/fail2ban/jail.local In this file, you can customize the default settings. Here\u0026rsquo;s an example of the [DEFAULT] section: ```ini [DEFAULT] ignoreip = 127.0.0.1/8 bantime = 600 maxretry = 3 backend = auto destemail = root@localhost ignoreip: Add your IP address to this line to whitelist it, ensuring you don\u0026rsquo;t accidentally ban yourself. bantime: Set the ban duration in seconds (default is 10 minutes). 
maxretry: Specify the number of incorrect login attempts before an IP is banned. backend: Leave as \u0026lsquo;auto\u0026rsquo;. destemail: Set the email address to receive alerts if Fail2Ban bans an IP. You can adjust these values to suit your preferences. ","date":"18-04-2015","objectID":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/:3:0","tags":null,"title":"How To Protect SSH With Fail2Ban On Ubuntu 12.04","uri":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/#step-3-configure-defaults-in-jaillocal"},{"categories":["Development"],"collections":null,"content":"Additional Details - Actions Below the defaults, you\u0026rsquo;ll find the Actions section. Here\u0026rsquo;s a snippet: # ACTIONS banaction = iptables-multiport mta = sendmail protocol = tcp banaction: Describes the steps Fail2Ban takes to ban an IP. The default is iptables-multiport. mta: Specifies the email program Fail2Ban uses for alerts (default is sendmail). protocol: You can change this to udp if you want Fail2Ban to monitor UDP instead of TCP. ","date":"18-04-2015","objectID":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/:4:0","tags":null,"title":"How To Protect SSH With Fail2Ban On Ubuntu 12.04","uri":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/#additional-details---actions"},{"categories":["Development"],"collections":null,"content":"Step 4 (Optional): Configure the SSH Section in jail.local The SSH section is further down in the jail.local file and is enabled by default. Here\u0026rsquo;s an example: [ssh] enabled = true port = ssh filter = sshd logpath = /var/log/auth.log maxretry = 6 enabled: Set to true to enable SSH protection. Change to false to disable it. port: Specify the SSH port (default is ssh). Change it if you use a non-standard port. filter: Refers to the rules used to find matches (default is sshd). logpath: Set the log location Fail2Ban should monitor. 
maxretry: Define the maximum allowed login attempts before banning an IP. ","date":"18-04-2015","objectID":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/:5:0","tags":null,"title":"How To Protect SSH With Fail2Ban On Ubuntu 12.04","uri":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/#step-4-optional-configure-the-ssh-section-in-jaillocal"},{"categories":["Development"],"collections":null,"content":"Step 5: Restart Fail2Ban After making changes, restart Fail2Ban to apply the configuration: sudo service fail2ban restart You can view the active Fail2Ban rules in the IP table: sudo iptables -L By following these steps, you\u0026rsquo;ve enhanced the security of your Ubuntu 12.04 server by protecting SSH access with Fail2Ban. This helps safeguard your server against brute force attacks and malicious behavior. ","date":"18-04-2015","objectID":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/:6:0","tags":null,"title":"How To Protect SSH With Fail2Ban On Ubuntu 12.04","uri":"/posts/development/how-to-protect-ssh-with-fail2ban-on-ubuntu-1204/#step-5-restart-fail2ban"},{"categories":["Development"],"collections":null,"content":"It\u0026rsquo;s essential to exercise caution when terminating user sessions or SSH sessions, as abruptly killing processes can lead to data loss or corruption. However, if you need to forcefully terminate a user\u0026rsquo;s session and their associated SSH session, you can follow the steps below, with a more detailed explanation of each command: 1. Terminate the User\u0026rsquo;s Sessions: To terminate a specific user\u0026rsquo;s sessions, you can use the pkill command followed by the -u option with the username. sudo pkill -u \u0026lt;username\u0026gt; Replace \u0026lt;username\u0026gt; with the actual username of the user whose sessions you want to terminate. 
This command will send a signal to all processes owned by that user, effectively logging them out. 2. Forcefully Terminate the User\u0026rsquo;s Sessions (if needed): In most cases, the first command should be sufficient to log the user out gracefully. However, if some processes refuse to terminate, you can use the following command to forcefully kill them: sudo pkill -KILL -u \u0026lt;username\u0026gt; This sends a stronger signal (SIGKILL) to forcefully terminate any remaining processes owned by the user. 3. Be Cautious: Please be cautious when using these commands, especially on production systems or with important user sessions. Sudden termination of processes can result in data loss or system instability. 4. Script to Prompt for Username: If you want to create a script to make this process more user-friendly, you can use a simple Bash script. Here\u0026rsquo;s an example: #!/bin/bash read -p \u0026#34;Enter the username: \u0026#34; username sudo pkill -u \u0026#34;$username\u0026#34; sudo pkill -KILL -u \u0026#34;$username\u0026#34; Save this script to a file, make it executable with chmod +x script.sh, and run it. It will prompt you to enter the username and then execute the commands to terminate the user\u0026rsquo;s sessions. Remember to use these commands responsibly and only when necessary, as forcibly terminating user sessions should be a last resort. ","date":"18-04-2015","objectID":"/posts/development/kill-user-session-kill-ssh-session-too/:0:0","tags":null,"title":"Kill User Session (Kill SSH Session Too)","uri":"/posts/development/kill-user-session-kill-ssh-session-too/#"},{"categories":["Development"],"collections":null,"content":"Author: NIXCRAFT Published Date: October 16, 2006 Last Updated: October 16, 2006 Category: HOWTO, TIPS, TROUBLESHOOTING If you\u0026rsquo;ve experienced your OpenSSH server connection dropping out after a few minutes or a specific period of inactivity, don\u0026rsquo;t worry; it\u0026rsquo;s not a bug but rather a security feature. 
This behavior is usually due to a packet filter or NAT (Network Address Translation) device timing out your TCP connection as a security measure. This issue typically occurs when using SSH protocol version 2. To resolve this problem and prevent your SSH connection from being terminated after a period of inactivity, follow these steps: ","date":"18-04-2015","objectID":"/posts/development/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity/:0:0","tags":null,"title":"Open SSH Server Connection Drops Out After Few or N Minutes of Inactivity","uri":"/posts/development/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity/#"},{"categories":["Development"],"collections":null,"content":"Method 1: Adjust SSH Server Configuration Open your SSH server configuration file for editing: vi /etc/ssh/sshd_config Modify the following settings: ClientAliveInterval: This sets a timeout interval in seconds (e.g., 30) after which, if no data has been received from the client, the SSH server (sshd) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client. This option applies to protocol version 2 only. ClientAliveCountMax: This sets the number of client alive messages (e.g., 5) that may be sent without sshd receiving any messages back from the client. If this threshold is reached while client alive messages are being sent, sshd will disconnect the client, terminating the session. Example configuration: ClientAliveInterval 30 ClientAliveCountMax 5 Save and close the file. 
Restart the SSH server to apply the changes: /etc/init.d/ssh restart OR service sshd restart ","date":"18-04-2015","objectID":"/posts/development/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity/:0:1","tags":null,"title":"Open SSH Server Connection Drops Out After Few or N Minutes of Inactivity","uri":"/posts/development/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity/#method-1-adjust-ssh-server-configuration"},{"categories":["Development"],"collections":null,"content":"Method 2: Adjust SSH Client Configuration Alternatively, you can make adjustments on the client side (your workstation) by enabling the ServerAliveInterval option in the SSH client\u0026rsquo;s configuration file. Open the SSH client\u0026rsquo;s configuration file for editing: vi /etc/ssh/ssh_config Append or modify the following values: ServerAliveInterval: This sets a timeout interval in seconds. If no data has been received from the server within this interval, SSH will send a message through the encrypted channel to request a response from the server. ServerAliveCountMax: This sets the maximum number of server alive messages that can be sent without receiving a response from the server before SSH disconnects. Example configuration: ServerAliveInterval 15 ServerAliveCountMax 3 Save and close the file. With this configuration, if the server becomes unresponsive, SSH will disconnect after approximately 45 seconds. Remember that this option applies to protocol version 2 only. For more information and additional configuration options, refer to the man pages of ssh, sshd, and sshd_config/ssh_config. Please note that the article was originally published on October 16, 2006. While the information provided here is still relevant, it\u0026rsquo;s essential to consider any updates or changes in SSH configuration options that may have occurred since then. 
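The same client-side options can also be scoped to a single host in a user's ~/.ssh/config instead of the system-wide file. The sketch below writes the stanza to a temporary file purely to illustrate the syntax; the host name is a placeholder, and in practice you would append the stanza to ~/.ssh/config:

```shell
# Per-host keepalive: with these values the client gives up after
# roughly 15 s x 3 = 45 s without a response from the server.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host myserver.example.com
    ServerAliveInterval 15
    ServerAliveCountMax 3
EOF
matches=$(grep -c ServerAlive "$cfg")
echo "$matches"
rm -f "$cfg"
```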
","date":"18-04-2015","objectID":"/posts/development/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity/:0:2","tags":null,"title":"Open SSH Server Connection Drops Out After Few or N Minutes of Inactivity","uri":"/posts/development/open-ssh-server-connection-drops-out-after-few-or-n-minutes-of-inactivity/#method-2-adjust-ssh-client-configuration"},{"categories":["Development"],"collections":null,"content":"Securing your OwnCloud server is crucial to protect your data from potential threats. In this guide, we will focus on two key aspects: automatically installing security updates and preventing brute-force password hacking attempts. ","date":"18-04-2015","objectID":"/posts/development/secure-your-owncloud-server/:0:0","tags":null,"title":"Secure Your OwnCloud Server","uri":"/posts/development/secure-your-owncloud-server/#"},{"categories":["Development"],"collections":null,"content":"Automatically Install Security Updates No software package is flawless, and security vulnerabilities may exist in your server\u0026rsquo;s software stack, from the Linux kernel to the SSL library. However, many of these vulnerabilities have patches available, and the primary reason they get exploited is due to delayed or neglected security updates. To automatically install security updates on Debian-based distributions, follow these steps: Open your terminal and run the following command: sudo dpkg-reconfigure -plow unattended-upgrades This command will configure your system to automatically install security updates. Now, let\u0026rsquo;s move on to preventing brute-force password hacking attempts. 
","date":"18-04-2015","objectID":"/posts/development/secure-your-owncloud-server/:1:0","tags":null,"title":"Secure Your OwnCloud Server","uri":"/posts/development/secure-your-owncloud-server/#automatically-install-security-updates"},{"categories":["Development"],"collections":null,"content":"Prevent Brute-Force Password Hacks By default, OwnCloud 8 is susceptible to brute-force password attacks, as it doesn\u0026rsquo;t enforce timeouts after failed login attempts. To mitigate this risk, we can use Fail2ban to impose timeouts after a certain number of failed login attempts. Here\u0026rsquo;s how to set it up: Install Fail2ban by running: sudo apt-get install fail2ban Configure OwnCloud to log failed login attempts by editing the OwnCloud configuration file. Use a text editor to open the file /var/www/owncloud/config/config.php. Replace \u0026lt;TIMEZONE\u0026gt; with your server\u0026rsquo;s timezone. Ensure that the webserver user (e.g., www-data) has write access to the log file. \u0026#39;logtimezone\u0026#39; =\u0026gt; \u0026#39;_\u0026lt;TIMEZONE\u0026gt;_\u0026#39;, // e.g. \u0026#39;Europe/Berlin\u0026#39; \u0026#39;logfile\u0026#39; =\u0026gt; \u0026#39;/var/log/owncloud.log\u0026#39;, \u0026#39;loglevel\u0026#39; =\u0026gt; \u0026#39;2\u0026#39;, \u0026#39;log_authfailip\u0026#39; =\u0026gt; true, // not needed for 7.0.1+ To verify that logging works, attempt some failed logins, and then check the log file /var/log/owncloud.log. 
Create a Fail2ban filter definition for OwnCloud by creating the file /etc/fail2ban/filter.d/owncloud.conf with the following content: [Definition] failregex={\u0026#34;app\u0026#34;:\u0026#34;core\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;Login failed: user \u0026#39;.*\u0026#39; , wrong password, IP:\u0026lt;HOST\u0026gt;\u0026#34;,\u0026#34;level\u0026#34;:2,\u0026#34;time\u0026#34;:\u0026#34;.*\u0026#34;} {\u0026#34;app\u0026#34;:\u0026#34;core\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;Login failed: \u0026#39;.*\u0026#39; \\(Remote IP: \u0026#39;\u0026lt;HOST\u0026gt;\u0026#39;, X-Forwarded-For: \u0026#39;.*\u0026#39;\\)\u0026#34;,\u0026#34;level\u0026#34;:2,\u0026#34;time\u0026#34;:\u0026#34;.*\u0026#34;} {\u0026#34;reqId\u0026#34;:\u0026#34;.*\u0026#34;,\u0026#34;remoteAddr\u0026#34;:\u0026#34;\u0026lt;HOST\u0026gt;\u0026#34;,\u0026#34;app\u0026#34;:\u0026#34;core\u0026#34;,\u0026#34;message\u0026#34;:\u0026#34;Login failed: .*\u0026#34;,\u0026#34;level\u0026#34;:2,\u0026#34;time\u0026#34;:\u0026#34;.*\u0026#34;} Choose the appropriate failregex line based on your OwnCloud version. Create a Fail2ban service definition by opening the file /etc/fail2ban/jail.local and adding the following: [owncloud] enabled = true filter = owncloud port = https logpath = /var/log/owncloud.log Restart Fail2ban to apply the changes: sudo systemctl restart fail2ban To test if Fail2ban is correctly reading the log, try logging in with the wrong password four times. The fourth attempt should result in a timeout (for 15 minutes). By following these steps, you can automatically install security updates and enhance the security of your OwnCloud server by protecting it against brute-force password hacking attempts using Fail2ban. 
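Once Fail2ban is installed, the proper way to test a filter is fail2ban-regex /var/log/owncloud.log /etc/fail2ban/filter.d/owncloud.conf. As a rough illustration of what the third failregex variant matches, here is a simplified grep against a fabricated log line (sample values only, and a much looser pattern than Fail2ban's own):

```shell
# Fabricated owncloud.log line in the newer JSON format, matched with a
# simplified (grep -E) approximation of the failregex.
line='{"reqId":"abc123","remoteAddr":"198.51.100.7","app":"core","message":"Login failed: bob","level":2,"time":"2015-04-18T10:00:00+00:00"}'
matches=$(printf '%s\n' "$line" | grep -cE '"remoteAddr":"[0-9.]+","app":"core","message":"Login failed:')
echo "$matches"
```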
","date":"18-04-2015","objectID":"/posts/development/secure-your-owncloud-server/:2:0","tags":null,"title":"Secure Your OwnCloud Server","uri":"/posts/development/secure-your-owncloud-server/#prevent-brute-force-password-hacks"},{"categories":["Development"],"collections":null,"content":"File permissions in Ubuntu and other Unix-like operating systems are crucial for controlling access to files and directories. They determine who can read, write, or execute a file or directory. You can use the ls command with the -l option to display detailed information about file permissions. Here\u0026rsquo;s what each part of the output means: ls -l /path/to/file -rwxr-xr-x 1 10490 floppy 17242 May 8 2013 acroread The first character - represents the type of object it is. Here, it\u0026rsquo;s a regular file. Other possible values include: d: Directory c: Character device l: Symbolic link p: Named pipe (FIFO) s: Socket b: Block device D: Door (door file) -: Regular file The next three characters rwx represent permissions for the owner of the file. Specifically: r: Read permission w: Write permission x: Execute permission The next three characters r-x represent permissions for the group of the file. In this case: r: Read permission -: No write permission x: Execute permission The last three characters r-x represent permissions for others, meaning users who are neither the owner nor in the group: r: Read permission -: No write permission x: Execute permission Additionally, you might see s, S, t, or T in the place of the x permission. These special permissions indicate setuid (s), setgid (S), or the sticky bit (t or T) for the file or directory. 
","date":"18-04-2015","objectID":"/posts/development/understanding-file-permissions-in-ubuntu/:0:0","tags":null,"title":"Understanding File Permissions In Ubuntu","uri":"/posts/development/understanding-file-permissions-in-ubuntu/#"},{"categories":["Development"],"collections":null,"content":"Octal Notation File permissions can also be represented in octal notation, which is a concise way to express them. In octal notation: Read (r) is represented by 4. Write (w) is represented by 2. Execute (x) is represented by 1. You calculate the octal representation by summing these values for the owner, group, and others\u0026rsquo; permissions. For example, if you see 755 in octal notation, it translates as follows: For the owner, it\u0026rsquo;s 4 (read) + 2 (write) + 1 (execute) = 7. For the group, it\u0026rsquo;s 4 (read) + 0 (no write) + 1 (execute) = 5. For others, it\u0026rsquo;s 4 (read) + 0 (no write) + 1 (execute) = 5. To view file permissions in octal notation, you can use the stat command with a specific format: stat -c \u0026#34;%a %n\u0026#34; /path/of/fileFor example: stat -c \u0026#34;%a %n\u0026#34; acroread 755 acroreadIn this case, you can see that the octal notation 755 corresponds to the file\u0026rsquo;s permissions, as explained above. ","date":"18-04-2015","objectID":"/posts/development/understanding-file-permissions-in-ubuntu/:1:0","tags":null,"title":"Understanding File Permissions In Ubuntu","uri":"/posts/development/understanding-file-permissions-in-ubuntu/#octal-notation"},{"categories":["Development"],"collections":null,"content":"In this guide, we will walk through the steps to install and configure DDClient on Ubuntu 14.04 LTS to work with CloudFlare. DDClient is a dynamic DNS update client that allows you to automatically update your DNS records on CloudFlare when your IP address changes. This can be useful if you are hosting a server on a dynamic IP address. 
","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:0:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following: A domain registered on CloudFlare, e.g., mycomputer.example.com. Access to your CloudFlare account. A Ubuntu 14.04 LTS server. ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:1:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Create a Domain Entry on CloudFlare Log in to your CloudFlare account. Create a domain entry for your dynamic DNS, e.g., mycomputer.example.com. Note down your CloudFlare login email and API key; you will need these later. 
","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:2:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-1-create-a-domain-entry-on-cloudflare"},{"categories":["Development"],"collections":null,"content":"Step 2: Install Required Dependencies Open a terminal on your Ubuntu server and install the necessary Perl modules: sudo apt-get update sudo apt-get install perl libjson-any-perl libio-socket-ssl-perl ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:3:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-2-install-required-dependencies"},{"categories":["Development"],"collections":null,"content":"Step 3: Download DDClient Files Download the latest DDClient files from the official project on SourceForge: wget http://downloads.sourceforge.net/project/ddclient/ddclient/ddclient-3.8.2/ddclient-3.8.2.tar.gz ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:4:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-3-download-ddclient-files"},{"categories":["Development"],"collections":null,"content":"Step 4: Extract DDClient Files Extract the downloaded DDClient files: tar -xzf ddclient-3.8.2.tar.gz cd ddclient-3.8.2 ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:5:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 
LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-4-extract-ddclient-files"},{"categories":["Development"],"collections":null,"content":"Step 5: Apply the CloudFlare Patch Download the CloudFlare patch file: wget http://blog.peter-r.co.uk/uploads/ddclient-3.8.0-cloudflare-22-6-2014.patch Apply the patch: patch \u0026lt; ddclient-3.8.0-cloudflare-22-6-2014.patch ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:6:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-5-apply-the-cloudflare-patch"},{"categories":["Development"],"collections":null,"content":"Step 6: Manual Installation Create necessary directories and copy files: sudo mkdir /etc/ddclient sudo mkdir /var/cache/ddclient sudo cp ddclient /usr/sbin/ sudo cp sample-etc_ddclient.conf /etc/ddclient/ddclient.conf sudo cp sample-etc_rc.d_init.d_ddclient.ubuntu /etc/init.d/ddclient ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:7:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-6-manual-installation"},{"categories":["Development"],"collections":null,"content":"Step 7: Edit DDClient Configuration Edit the DDClient configuration file to match the following settings: sudo vi /etc/ddclient/ddclient.conf Or: sudo vi /etc/ddclient.conf Ensure your configuration file looks like this (make special note of where commas are placed): daemon=300 syslog=yes mail=root mail-failure=root pid=/var/run/ddclient.pid ssl=yes protocol=cloudflare, use=web server=www.cloudflare.com, zone=example.com, # Replace with your CloudFlare zone name login=your@email.com, # Replace with your CloudFlare login email password=your-api-key-here 
mycomputer.example.com, # Replace with your dynamic DNS hostname ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:8:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-7-edit-ddclient-configuration"},{"categories":["Development"],"collections":null,"content":"Step 8: Start the DDClient Service Start the DDClient service: sudo service ddclient start To check the logs, use the following command: tail /var/log/syslog ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:9:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-8-start-the-ddclient-service"},{"categories":["Development"],"collections":null,"content":"Step 9: Set DDClient to Run at Startup To ensure DDClient runs at startup, remove any existing links and then add DDClient to the startup sequence: sudo update-rc.d -f ddclient remove sudo update-rc.d ddclient defaults That\u0026rsquo;s it! You\u0026rsquo;ve successfully set up DDClient to work with CloudFlare on your Ubuntu 14.04 LTS server. DDClient will now automatically update your DNS records on CloudFlare whenever your IP address changes. ","date":"17-04-2015","objectID":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/:10:0","tags":null,"title":"Setting Up DDClient With CloudFlare On Ubuntu 14.04 LTS","uri":"/posts/development/setting-up-ddclient-with-cloudflare-on-ubuntu-1404-lts/#step-9-set-ddclient-to-run-at-startup"},{"categories":["Development"],"collections":null,"content":"It\u0026rsquo;s important to be cautious when using the killall command, especially with sudo, as it can terminate processes indiscriminately. 
Killing SSH connections might disrupt legitimate connections and potentially cause issues. If you need to terminate specific SSH tunneling connections, it\u0026rsquo;s better to identify the process IDs (PIDs) associated with those connections and then use kill with the specific PIDs. Here\u0026rsquo;s a safer way to do it: List SSH Processes: First, list the SSH processes to identify the ones you want to terminate. You can use the ps command with grep to filter SSH processes: ps aux | grep ssh This command will show you a list of SSH processes running on your system along with their PIDs. Identify the PID: Find the PID (Process ID) of the SSH tunneling connection you want to terminate from the list generated by the previous command. Kill the SSH Process: Once you have identified the PID of the SSH tunneling connection you want to terminate, use the kill command with sudo: sudo kill \u0026lt;PID\u0026gt; Replace \u0026lt;PID\u0026gt; with the actual PID of the SSH process you want to terminate. By following these steps, you can selectively terminate SSH tunneling connections without affecting other SSH processes on your system. This approach is more targeted and safer than using killall to kill all SSH processes. ","date":"15-04-2015","objectID":"/posts/development/kill-all-ssh-tunneling-connection/:0:0","tags":null,"title":"Kill All SSH Tunneling Connection","uri":"/posts/development/kill-all-ssh-tunneling-connection/#"},{"categories":["Development"],"collections":null,"content":"User management is an essential part of maintaining a secure and organized Ubuntu system. This guide provides you with various commands and procedures for managing users on your Ubuntu system. 
","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:0:0","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#"},{"categories":["Development"],"collections":null,"content":"Listing All Users To list all users on your system, you can use the following command: cut -d: -f1 /etc/passwd ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:1:0","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#listing-all-users"},{"categories":["Development"],"collections":null,"content":"Adding a New User You can add a new user using either of the following commands: sudo adduser *new_username* or sudo useradd *new_username* For more information on the difference between adduser and useradd, refer to the section What is the difference between adduser and useradd?. ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:2:0","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#adding-a-new-user"},{"categories":["Development"],"collections":null,"content":"Removing a User To remove or delete a user, follow these steps: Delete the user account: sudo userdel *username* Optionally, you may want to delete the user\u0026rsquo;s home directory: sudo rm -r /home/*username* Please exercise caution when using the rm command, as it will permanently delete the user\u0026rsquo;s files and directories. 
","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:3:0","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#removing-a-user"},{"categories":["Development"],"collections":null,"content":"Modifying User Attributes ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:4:0","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#modifying-user-attributes"},{"categories":["Development"],"collections":null,"content":"Changing the Username To change a user\u0026rsquo;s username, you can use the usermod command: sudo usermod -l *new_username* *old_username* ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:4:1","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#changing-the-username"},{"categories":["Development"],"collections":null,"content":"Changing the Password To change a user\u0026rsquo;s password, use the passwd command: sudo passwd *username* ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:4:2","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#changing-the-password"},{"categories":["Development"],"collections":null,"content":"Changing the Shell To change the default shell for a user, utilize the chsh command: sudo chsh *username* ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:4:3","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#changing-the-shell"},{"categories":["Development"],"collections":null,"content":"Changing User Details To modify a user\u0026rsquo;s details, such as their real name, you can use the chfn command: sudo chfn *username* ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:4:4","tags":null,"title":"Managing Users in 
Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#changing-user-details"},{"categories":["Development"],"collections":null,"content":"Additional Resources For more detailed information and options regarding user management, consult the manual pages for the relevant commands. Use the following commands to access the manual pages: man adduser: Manual for the adduser command. man useradd: Manual for the useradd command. man userdel: Manual for the userdel command. And more: You can explore other user management commands by using the man command followed by the command name. Proper user management is crucial for system security and organization. Be cautious when making changes to user accounts, especially when deleting user data. Always have up-to-date backups and consider the implications of each action. ","date":"13-04-2015","objectID":"/posts/development/managing-users-in-ubuntu/:5:0","tags":null,"title":"Managing Users in Ubuntu","uri":"/posts/development/managing-users-in-ubuntu/#additional-resources"},{"categories":["Development"],"collections":null,"content":"Secure Copy Protocol (SCP) is a command-line tool that allows you to securely copy files and directories between your local machine and a remote server over SSH. Here\u0026rsquo;s how you can use SCP to copy files in both directions: from your local machine to a remote server and from a remote server to your local machine. 
","date":"07-04-2015","objectID":"/posts/development/copying-files-between-local-and-remote-machines-using-scp/:0:0","tags":null,"title":"Copying Files between Local and Remote Machines using SCP","uri":"/posts/development/copying-files-between-local-and-remote-machines-using-scp/#"},{"categories":["Development"],"collections":null,"content":"Copying from Local to Remote (Upload) To copy a file from your local machine to a remote server, use the following command: scp /path/to/localfile user@hostname:/path/to/destination/ /path/to/localfile: The path to the file on your local machine that you want to copy. user: Your username on the remote server. hostname: The hostname or IP address of the remote server. /path/to/destination/: The destination directory on the remote server where you want to copy the file. For example, if you want to copy a file named \u0026ldquo;example.txt\u0026rdquo; from your local machine to the home directory of a remote server with the IP address \u0026ldquo;123.45.67.89\u0026rdquo; and the username \u0026ldquo;myuser,\u0026rdquo; you would use: scp example.txt myuser@123.45.67.89:~/ You\u0026rsquo;ll be prompted to enter the password for the remote user, and then the file will be copied. ","date":"07-04-2015","objectID":"/posts/development/copying-files-between-local-and-remote-machines-using-scp/:1:0","tags":null,"title":"Copying Files between Local and Remote Machines using SCP","uri":"/posts/development/copying-files-between-local-and-remote-machines-using-scp/#copying-from-local-to-remote-upload"},{"categories":["Development"],"collections":null,"content":"Copying from Remote to Local (Download) To copy a file from a remote server to your local machine, use this command: scp user@hostname:/path/to/remotefile /path/to/localdestination/ user: Your username on the remote server. hostname: The hostname or IP address of the remote server. /path/to/remotefile: The path to the file on the remote server that you want to copy. 
/path/to/localdestination/: The local directory where you want to save the copied file. For example, if you want to download a file named \u0026ldquo;example.txt\u0026rdquo; from the remote server to your local home directory, you would use: scp myuser@123.45.67.89:~/example.txt ~/Downloads/ Again, you\u0026rsquo;ll be prompted to enter the remote user\u0026rsquo;s password, and then the file will be copied to your local machine. Remember to replace the placeholders with your actual file paths, usernames, hostnames, and destination paths. SCP is a secure and efficient way to transfer files between local and remote machines over SSH, making it a valuable tool for system administrators and developers. ","date":"07-04-2015","objectID":"/posts/development/copying-files-between-local-and-remote-machines-using-scp/:2:0","tags":null,"title":"Copying Files between Local and Remote Machines using SCP","uri":"/posts/development/copying-files-between-local-and-remote-machines-using-scp/#copying-from-remote-to-local-download"},{"categories":["Development"],"collections":null,"content":"If you want to see detailed boot messages instead of the splash screen when your Ubuntu system starts up, you can enable verbose mode by following these steps: Open a terminal window. Edit the Grub configuration file by running the following command: sudo nano /etc/default/grub This will open the Grub configuration file in the Nano text editor. In the Grub configuration file, look for the line that starts with GRUB_CMDLINE_LINUX_DEFAULT. This line controls the display of the splash screen during boot. 
To enable the splash screen with condensed text output (the default for the desktop edition), it should look like this: GRUB_CMDLINE_LINUX_DEFAULT=\u0026#34;quiet splash\u0026#34; To enable the traditional text display without the splash screen, remove both \u0026ldquo;quiet\u0026rdquo; and \u0026ldquo;splash\u0026rdquo; so that it looks like this: GRUB_CMDLINE_LINUX_DEFAULT= Choose the option that suits your preference. After making the necessary changes, save the file by pressing Ctrl + O, then press Enter. To exit Nano, press Ctrl + X. Update Grub to apply the changes you made: sudo update-grub Now, when you reboot your Ubuntu system, you will either see detailed boot messages (if you removed \u0026ldquo;quiet\u0026rdquo; and \u0026ldquo;splash\u0026rdquo;) or the splash screen with condensed text output (if you kept \u0026ldquo;quiet\u0026rdquo; and \u0026ldquo;splash\u0026rdquo;). For more information and additional details, you can refer to the official Ubuntu documentation on Grub2: Ubuntu Grub2 Community Documentation. By following these steps, you can customize the boot behavior of your Ubuntu system to either display detailed boot messages or the traditional splash screen. ","date":"07-04-2015","objectID":"/posts/development/how-to-enable-ubuntu-boot-verbose-mode/:0:0","tags":null,"title":"How to Enable Ubuntu Boot Verbose Mode","uri":"/posts/development/how-to-enable-ubuntu-boot-verbose-mode/#"},{"categories":["Development"],"collections":null,"content":"Network Manager is an essential service in many Linux distributions that allows you to manage network connections easily. Sometimes, you might encounter an issue where Network Manager doesn\u0026rsquo;t start automatically at boot. One common reason for this issue is misconfigurations in the Network Manager service file. 
In this article, we\u0026rsquo;ll guide you on how to ensure Network Manager starts automatically at boot by removing the static-network-up line from the Network Manager service configuration file. ","date":"07-04-2015","objectID":"/posts/development/how-to-make-network-manager-start-automatically/:0:0","tags":null,"title":"How to Make Network Manager Start Automatically","uri":"/posts/development/how-to-make-network-manager-start-automatically/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before proceeding, make sure you have: Administrative (sudo) privileges on your Linux system. A text editor installed. You can use any text editor of your choice, such as Nano or Vim. ","date":"07-04-2015","objectID":"/posts/development/how-to-make-network-manager-start-automatically/:1:0","tags":null,"title":"How to Make Network Manager Start Automatically","uri":"/posts/development/how-to-make-network-manager-start-automatically/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Steps to Remove the static-network-up Line Follow these steps to remove the static-network-up line from the Network Manager service configuration file: Open a terminal window. You can usually access the terminal by pressing Ctrl + Alt + T or searching for \u0026ldquo;Terminal\u0026rdquo; in your system\u0026rsquo;s application menu. Gain administrative privileges by running the following command: sudo su Navigate to the Network Manager service configuration directory by entering: cd /etc/init/ Use a text editor to open the network-manager.conf file. You can use any text editor you prefer; here, we\u0026rsquo;ll use Nano: nano network-manager.conf Locate the start on line in the configuration file. It should look like this: start on (local-filesystems and static-network-up) Remove the static-network-up condition, so the line now looks like this: start on (local-filesystems) Save your changes by pressing Ctrl + O, then press Enter. 
To exit Nano, press Ctrl + X. Restart the Network Manager service to apply the changes: sudo service network-manager restart Network Manager should now start automatically at boot without the static-network-up condition. ","date":"07-04-2015","objectID":"/posts/development/how-to-make-network-manager-start-automatically/:2:0","tags":null,"title":"How to Make Network Manager Start Automatically","uri":"/posts/development/how-to-make-network-manager-start-automatically/#steps-to-remove-the-static-network-up-line"},{"categories":["Development"],"collections":null,"content":"Conclusion In this guide, we\u0026rsquo;ve shown you how to remove the static-network-up line from the Network Manager service configuration file to ensure that Network Manager starts automatically at boot. This can be especially useful if you\u0026rsquo;ve encountered issues with Network Manager not starting as expected on your Linux system. ","date":"07-04-2015","objectID":"/posts/development/how-to-make-network-manager-start-automatically/:3:0","tags":null,"title":"How to Make Network Manager Start Automatically","uri":"/posts/development/how-to-make-network-manager-start-automatically/#conclusion"},{"categories":["Development"],"collections":null,"content":"If you want to reduce the waiting time for the network to come up during system startup, you can modify the /etc/init/failsafe.conf file. Specifically, you can remove the sleep 40 and sleep 50 commands from the network waiting section. Here\u0026rsquo;s how you can do it: Note: Modifying system configuration files can have unintended consequences and may impact the stability and functionality of your system. Please make sure you have a backup of the original file and proceed with caution. Open a terminal on your Linux system. To edit the /etc/init/failsafe.conf file, you can use a text editor like nano or vi. Here, we\u0026rsquo;ll use nano: sudo nano /etc/init/failsafe.conf Look for the section that waits for the network to come up. 
It might look something like this: # Wait for a network interface to come up. start on net-device-up IFACE!=lo task script # Add your network-related commands here, if any. # Remove or comment out the sleep commands you want to remove. # sleep 40 # sleep 50 end script Next, remove or comment out the sleep 40 and sleep 50 lines. To comment them out, simply add a # character at the beginning of each line like this: # Wait for a network interface to come up. start on net-device-up IFACE!=lo task script # Add your network-related commands here, if any. # sleep 40 # sleep 50 end script Save the file and exit the text editor. In nano, you can do this by pressing Ctrl + O, then Enter to save, and Ctrl + X to exit. After making these changes, you may need to restart your system or the relevant service for the changes to take effect. You can do this with: sudo service networking restart Please be cautious when making changes to system configuration files, as improper modifications can lead to system instability. If you\u0026rsquo;re not confident in making these changes, it\u0026rsquo;s a good idea to consult with someone experienced or your system administrator. ","date":"07-04-2015","objectID":"/posts/development/long-waiting-up-for-60-second-network/:0:0","tags":null,"title":"Long Waiting up for 60 second network","uri":"/posts/development/long-waiting-up-for-60-second-network/#"},{"categories":["Development"],"collections":null,"content":"If you\u0026rsquo;re encountering a virtual network failure during a verbose boot, where you see detailed messages about the networking initialization process, you may need to take specific actions to resolve the issue. One possible solution is to remove or rename the /etc/init/networking.conf file. Here\u0026rsquo;s a step-by-step guide on how to do this: Note: Before proceeding, make sure you have administrative privileges on your system. 
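Renaming a configuration file is attractive because it is fully reversible. The pattern can be sketched generically as a pair of helpers; the names are illustrative, not standard commands:

```shell
# Reversible disable/enable of a config file by renaming (illustrative).
disable_conf() {
  mv $1 $1.bak    # init will no longer pick the file up by name
}
enable_conf() {
  mv $1.bak $1    # restore the original name to re-enable it
}
```

The same pair works for any file that an init system discovers by name in a directory such as /etc/init.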
","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:0:0","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Access the Command Line You\u0026rsquo;ll need to access the command line interface of your system to perform these actions. You can usually do this by opening a terminal or console window. If you\u0026rsquo;re already logged in, proceed to the next step. ","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:1:0","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#step-1-access-the-command-line"},{"categories":["Development"],"collections":null,"content":"Step 2: Rename or Remove networking.conf You can choose to either rename or remove the networking.conf file. Renaming is a safer option because it allows you to revert the changes if needed. ","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:2:0","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#step-2-rename-or-remove-networkingconf"},{"categories":["Development"],"collections":null,"content":"Option 1: Rename networking.conf To rename the networking.conf file, follow these steps: sudo mv /etc/init/networking.conf /etc/init/networking.conf.bak This command will rename the file to networking.conf.bak. Renaming the file effectively disables it without deleting it. 
If you encounter any issues after making this change, you can easily revert it by renaming the file back to its original name: sudo mv /etc/init/networking.conf.bak /etc/init/networking.conf ","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:2:1","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#option-1-rename-networkingconf"},{"categories":["Development"],"collections":null,"content":"Option 2: Remove networking.conf If you prefer to remove the networking.conf file entirely, be cautious, as this is a more aggressive approach. Use the following command to remove the file: sudo rm /etc/init/networking.conf ","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:2:2","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#option-2-remove-networkingconf"},{"categories":["Development"],"collections":null,"content":"Step 3: Reboot Your System After renaming or removing the networking.conf file, it\u0026rsquo;s a good idea to reboot your system to apply the changes: sudo reboot ","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:3:0","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#step-3-reboot-your-system"},{"categories":["Development"],"collections":null,"content":"Step 4: Verify Network Functionality Once your system has rebooted, check whether the virtual network issue has been resolved. You can do this by attempting to connect to the network or by monitoring the system logs for any networking-related errors. 
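Checking the logs can be scripted. As a small illustrative helper (the scan_log name and the keyword list are ours, not a standard tool), count networking-related lines in a log file:

```shell
# Count networking-related lines in a log (defaults to /var/log/syslog).
scan_log() {
  local logfile=${1:-/var/log/syslog}
  grep -c -i -e network -e eth -e dhcp $logfile
}
```

A count that keeps growing after boot suggests the failure is still occurring, while a stable or zero count after the change is a good sign.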
","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:4:0","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#step-4-verify-network-functionality"},{"categories":["Development"],"collections":null,"content":"Conclusion Troubleshooting virtual network failures during verbose boot can be challenging, but by renaming or removing the networking.conf file, you may be able to resolve the issue and get your network up and running again. Remember that these steps should be taken with caution, and it\u0026rsquo;s essential to have a backup or a plan to revert changes if necessary. ","date":"07-04-2015","objectID":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/:5:0","tags":null,"title":"Troubleshooting Virtual Network Failure on Verbose Boot","uri":"/posts/development/troubleshooting-virtual-network-failure-on-verbose-boot/#conclusion"},{"categories":["Development"],"collections":null,"content":"You can enhance your system\u0026rsquo;s security by setting up an automated email notification whenever someone logs in via SSH. This can help you stay informed about unauthorized access to your system. Below is a guide on how to implement this feature. Note: This guide assumes you have administrative access to your system and are familiar with basic Linux commands. ","date":"06-04-2015","objectID":"/posts/development/automate-ssh-login-notification-via-email/:0:0","tags":null,"title":"Automate SSH Login Notification via Email","uri":"/posts/development/automate-ssh-login-notification-via-email/#"},{"categories":["Development"],"collections":null,"content":"1. 
Edit /etc/profile First, open the /etc/profile file in a text editor as the root user: sudo nano /etc/profile ","date":"06-04-2015","objectID":"/posts/development/automate-ssh-login-notification-via-email/:1:0","tags":null,"title":"Automate SSH Login Notification via Email","uri":"/posts/development/automate-ssh-login-notification-via-email/#1-edit-etcprofile"},{"categories":["Development"],"collections":null,"content":"2. Add the Following Script Copy and paste the following script to the end of the /etc/profile file. This script retrieves the IP address, hostname, and timestamp of the login session and sends an email notification using mailx. # Send email when someone logs in via SSH if [ -n \u0026#34;$SSH_CONNECTION\u0026#34; ]; then IP=\u0026#34;$(echo $SSH_CONNECTION | cut -d \u0026#34; \u0026#34; -f 1)\u0026#34; HOSTNAME=$(hostname) NOW=$(date +\u0026#34;%e %b %Y, %a %r\u0026#34;) echo \u0026#39;Someone from \u0026#39;$IP\u0026#39; logged into \u0026#39;$HOSTNAME\u0026#39; on \u0026#39;$NOW\u0026#39;.\u0026#39; | mailx -s \u0026#39;SSH Login Notification\u0026#39; -r \u0026#39;your_sender_email@example.com\u0026#39; \u0026#39;your_notification_email@example.com\u0026#39; fi Make sure to replace 'your_sender_email@example.com' with your actual sender email address and 'your_notification_email@example.com' with the email address where you want to receive notifications. ","date":"06-04-2015","objectID":"/posts/development/automate-ssh-login-notification-via-email/:2:0","tags":null,"title":"Automate SSH Login Notification via Email","uri":"/posts/development/automate-ssh-login-notification-via-email/#2-add-the-following-script"},{"categories":["Development"],"collections":null,"content":"3. Save and Exit Save the changes and exit the text editor. In Nano, press Ctrl + O to save the file, then Enter, and finally Ctrl + X to exit. 
","date":"06-04-2015","objectID":"/posts/development/automate-ssh-login-notification-via-email/:3:0","tags":null,"title":"Automate SSH Login Notification via Email","uri":"/posts/development/automate-ssh-login-notification-via-email/#3-save-and-exit"},{"categories":["Development"],"collections":null,"content":"4. Test SSH Login Now, you can test the SSH login notification by connecting to your server via SSH. After successfully logging in, you should receive an email notification at the specified email address. Note: Ensure that your system is properly configured to send emails. You may need to install and configure a mail server or use a third-party SMTP server for this to work. By following these steps, you will receive automated email notifications whenever someone logs in via SSH, helping you monitor your system\u0026rsquo;s security more effectively. ","date":"06-04-2015","objectID":"/posts/development/automate-ssh-login-notification-via-email/:4:0","tags":null,"title":"Automate SSH Login Notification via Email","uri":"/posts/development/automate-ssh-login-notification-via-email/#4-test-ssh-login"},{"categories":["Development"],"collections":null,"content":"When you need to connect to a MySQL server from outside your local network, using an SSH tunnel can provide a secure and convenient method for access. In this guide, we will walk you through the steps to connect to a MySQL server via SSH tunnel on a Linux system. This method can be particularly useful if you are trying to access a remote MySQL server hosted by a service provider like Quintagroup. 
","date":"06-04-2015","objectID":"/posts/development/how-to-connect-to-mysql-via-ssh-tunnel-in-linux/:0:0","tags":null,"title":"How to Connect to MySQL via SSH Tunnel in Linux","uri":"/posts/development/how-to-connect-to-mysql-via-ssh-tunnel-in-linux/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following: SSH login credentials: You should have your SSH login details, including the username and server address. You can typically find these details in your Quintagroup account. MySQL database access details: You\u0026rsquo;ll need the MySQL username, database name, and password. You can obtain these details from your MySQL access information in your Quintagroup account. ","date":"06-04-2015","objectID":"/posts/development/how-to-connect-to-mysql-via-ssh-tunnel-in-linux/:1:0","tags":null,"title":"How to Connect to MySQL via SSH Tunnel in Linux","uri":"/posts/development/how-to-connect-to-mysql-via-ssh-tunnel-in-linux/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Steps to Connect Open a Terminal Window: Begin by opening a terminal window on your Linux system. You\u0026rsquo;ll use this terminal to execute commands and create the SSH tunnel. Build the SSH Tunnel: Use the following command to establish an SSH tunnel. Replace login with your SSH username and server with your server\u0026rsquo;s address (e.g., login.quintagroup.com): ssh login@server -L 3306:127.0.0.1:3306 -N You will be prompted to enter your SSH password. Go ahead and enter it. Background the SSH Tunnel: To allow you to run other commands while keeping the SSH tunnel active, press Ctrl+Z to pause the tunnel process. 
You should see a message like: [1]+ Stopped ssh login@server -L 3306:127.0.0.1:3306 -N To move this process to the background, type bg and press Enter: [1]+ ssh login@server -L 3306:127.0.0.1:3306 -N \u0026amp; Connect to MySQL: Now that the SSH tunnel is established, you can connect to the MySQL server as if it were running on your local system (localhost:3306). Use the following command, replacing user with your MySQL username and database with your database name: mysql -h 127.0.0.1 -p -u user database You will be prompted to enter the password for your MySQL database, which can also be found in your Quintagroup account. You\u0026rsquo;re In!: Congratulations, you are now connected to the remote MySQL server via the SSH tunnel. You can execute SQL commands and queries as needed. Closing the Connection: When you\u0026rsquo;re done, close the application you used to access the MySQL server. If you were using a command-line tool (as mentioned in step 4), you can exit it by typing Ctrl+D. Returning to the SSH Tunnel: To bring the SSH tunnel back to the foreground, type fg. If you need to stop the tunnel, you can do so by pressing Ctrl+C. By following these steps, you can securely connect to a remote MySQL server via an SSH tunnel on your Linux system. This method ensures your data remains encrypted during the transfer and is a reliable way to access your databases remotely. ","date":"06-04-2015","objectID":"/posts/development/how-to-connect-to-mysql-via-ssh-tunnel-in-linux/:2:0","tags":null,"title":"How to Connect to MySQL via SSH Tunnel in Linux","uri":"/posts/development/how-to-connect-to-mysql-via-ssh-tunnel-in-linux/#steps-to-connect"},{"categories":["Development"],"collections":null,"content":"By Frank Wiles Setting up a simple SSH tunnel can be incredibly useful, yet finding a straightforward guide can be surprisingly challenging. In this Quick-Tip, I\u0026rsquo;ll walk you through the process using OpenSSH on a Linux/Unix system. 
With SSH tunneling, you can securely route all your local traffic through a remote server where you have an account. One common use case for SSH tunneling is redirecting outbound email traffic to a personal server. This can help you avoid the hassle of changing SMTP servers, dealing with SMTP-AUTH, and other complications when you\u0026rsquo;re behind firewalls. Hotel firewalls, wireless access points, and various NATing devices you encounter while traveling don\u0026rsquo;t always cooperate. Here\u0026rsquo;s how to do it: ssh -f user@personal-server.com -L 2000:personal-server.com:25 -N Let\u0026rsquo;s break down this command: -f: This flag instructs SSH to go into the background just before executing the command. user@personal-server.com: Replace this with your username and the address of your personal server. -L 2000:personal-server.com:25: This part specifies the tunnel. It\u0026rsquo;s in the format -L local-port:host:remote-port. In this case, it forwards local port 2000 to port 25 on personal-server.com. And yes, it\u0026rsquo;s all encrypted! -N: This tells OpenSSH not to execute a command on the remote system. Now, you can configure your email client to use localhost:2000 as the SMTP server, and your email will be securely tunneled through your personal server. SSH tunneling isn\u0026rsquo;t just for email; it\u0026rsquo;s a versatile tool. You can also use it to bypass restrictive firewall rules. For instance, if you encounter a firewall that doesn\u0026rsquo;t allow outbound Jabber protocol traffic to talk.google.com, you can work around it with this command: ssh -f -L 3000:talk.google.com:5222 home -N Here\u0026rsquo;s what\u0026rsquo;s happening: -f: As before, this puts SSH in the background. -L 3000:talk.google.com:5222: This sets up the tunnel, redirecting traffic from local port 3000 to talk.google.com on port 5222. home: This is just an SSH alias for your server at home. 
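Such an alias is typically defined in ~/.ssh/config; a minimal fragment (the hostname and username below are placeholders) might look like:

```
# ~/.ssh/config -- defines the "home" alias used above
# (hostname and username are placeholders)
Host home
    HostName myserver.example.org
    User myuser
```

With this in place, plain `ssh home` resolves to the full address, so the tunnel command stays short.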
Afterward, configure your Jabber client to use localhost as the server and port 3000, which you\u0026rsquo;ve just configured. Now, your Google Talk traffic will be encrypted and routed through your home server before reaching Google. SSH tunneling can be a lifesaver when dealing with network restrictions, ensuring your data remains secure while passing through various networks. Remember, with these techniques, you have the power to keep your online activities private and unrestricted. ","date":"06-04-2015","objectID":"/posts/development/ssh-tunneling-made-easy/:0:0","tags":null,"title":"SSH Tunneling Made Easy","uri":"/posts/development/ssh-tunneling-made-easy/#"},{"categories":["Development"],"collections":null,"content":"In the past, setting up true two-factor authentication (2FA) for SSH access has been a bit of a challenge. However, with the release of OpenSSH 6.2, full and proper support for 2FA is now available. This article explains how to set up SSH 2FA using Google Authenticator on Ubuntu, which greatly enhances the security of your SSH access. ","date":"06-04-2015","objectID":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/:0:0","tags":null,"title":"SSH Two Factor Authentication with Google Authenticator","uri":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/#"},{"categories":["Development"],"collections":null,"content":"Quick Start To get started, follow these steps: Install the Google Authenticator PAM module by running the following command: sudo apt-get install libpam-google-authenticator Each user who wants to use SSH with 2FA should run the following command: google-authenticator This command interactively helps users create the ~/.google_authenticator file, which contains a shared secret and emergency passcodes. It also provides a QR code for quick loading of the shared secret into a two-factor authentication app (e.g., Google Authenticator) on their mobile device. 
Edit the SSH server configuration file /etc/ssh/sshd_config with a text editor of your choice and make the following changes: ChallengeResponseAuthentication yes PasswordAuthentication no AuthenticationMethods publickey,keyboard-interactive Ensure that the following settings (which are typically defaults on Ubuntu) are also configured correctly: UsePAM yes PubkeyAuthentication yes Reload the SSH service to apply the changes: sudo service ssh reload Edit the PAM configuration file for SSH /etc/pam.d/sshd and replace the line: @include common-auth with: auth required pam_google_authenticator.so ","date":"06-04-2015","objectID":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/:1:0","tags":null,"title":"SSH Two Factor Authentication with Google Authenticator","uri":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/#quick-start"},{"categories":["Development"],"collections":null,"content":"How It Works Traditionally, SSH only verified one method of authentication, such as a password or private key. Multiple methods were allowed, and success with any one method resulted in a successful authentication. SSH key authentication occurred outside the Pluggable Authentication Module (PAM), which made it challenging to use both key-based authentication and PAM for a second factor. With the new feature introduced in OpenSSH 6.2, the AuthenticationMethods directive can be used to specify two methods that are both required. This means you can now require both SSH key authentication and PAM, effectively making PAM serve as the second factor authentication method. 
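Collected in one place, the sshd_config settings described in the Quick Start are:

```
# /etc/ssh/sshd_config -- key + PAM (Google Authenticator) as two required factors
ChallengeResponseAuthentication yes
PasswordAuthentication no
AuthenticationMethods publickey,keyboard-interactive
# Typically already the defaults on Ubuntu:
UsePAM yes
PubkeyAuthentication yes
```

The publickey check runs first, outside PAM; only after it succeeds does sshd fall through to keyboard-interactive, where the Google Authenticator PAM module prompts for the code.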
","date":"06-04-2015","objectID":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/:2:0","tags":null,"title":"SSH Two Factor Authentication with Google Authenticator","uri":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/#how-it-works"},{"categories":["Development"],"collections":null,"content":"Why It\u0026rsquo;s More Secure Enhancing SSH security with this method is advantageous for several reasons: Reduced Attack Surface: By using native SSH and PAM support, you minimize the need for third-party patches or hacks, reducing the attack surface. Security by Design: Both SSH and PAM were designed with security in mind from the beginning, making them trustworthy components for authentication. ","date":"06-04-2015","objectID":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/:3:0","tags":null,"title":"SSH Two Factor Authentication with Google Authenticator","uri":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/#why-its-more-secure"},{"categories":["Development"],"collections":null,"content":"Any Catches? One thing to note is that the Google Authenticator PAM module (libpam-google-authenticator) is in the \u0026ldquo;universe\u0026rdquo; repository, which means it\u0026rsquo;s community-supported for security updates. However, if this method gains popularity in Ubuntu, there\u0026rsquo;s potential for it to be included in the \u0026ldquo;main\u0026rdquo; repository, which would provide more robust support. 
","date":"06-04-2015","objectID":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/:4:0","tags":null,"title":"SSH Two Factor Authentication with Google Authenticator","uri":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/#any-catches"},{"categories":["Development"],"collections":null,"content":"Variations This method is flexible and allows for various authentication setups: If you don\u0026rsquo;t modify /etc/pam.d/sshd, your system will require both a key and the user\u0026rsquo;s password for SSH authentication. To require all three: a key, the password, and the code from the second-factor device, leave the @include common-auth line in place in /etc/pam.d/sshd and insert auth required pam_google_authenticator.so before it. By following these steps, you can easily set up two-factor authentication for SSH on your Ubuntu system, significantly enhancing the security of your remote access. ","date":"06-04-2015","objectID":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/:5:0","tags":null,"title":"SSH Two Factor Authentication with Google Authenticator","uri":"/posts/development/ssh-two-factor-authentication-with-google-authenticator/#variations"},{"categories":["Software"],"collections":null,"content":"The PATH is an environment variable in Unix-like operating systems, including macOS, that specifies a list of directories where the system should look for executable files when a command is entered in the terminal. This allows users to run commands without specifying the full path to the executable every time. In macOS, the PATH configuration is managed through the following files: /etc/paths: This file contains a list of directories that are added to the system-wide PATH. Each directory is listed on a separate line. When the system starts up or a new user session is created, the contents of this file are read, and the listed directories are added to the PATH. 
/etc/paths.d/*: This directory contains individual files, each of which adds one or more directories to the system-wide PATH. The filenames only determine the order in which the files are read; the content is what matters, with one directory path per line. ","date":"28-03-2015","objectID":"/posts/software/understanding-macos-bash-path-configuration/:0:0","tags":["mac"],"title":"Understanding macOS Bash PATH Configuration","uri":"/posts/software/understanding-macos-bash-path-configuration/#"},{"categories":["Software"],"collections":null,"content":"Modifying the PATH Configuration To modify the system-wide PATH configuration on macOS, follow these steps: Editing /etc/paths: You can edit the /etc/paths file directly using a text editor with administrative privileges (such as sudo). Add each directory you want to include on a separate line. Save the changes after editing. Adding custom files to /etc/paths.d/: Create a new file in the /etc/paths.d/ directory for each additional directory you want to add to the PATH. Use a text editor with administrative privileges to create these files. Simply add the directory path to the file and save it. After making changes to either /etc/paths or files in /etc/paths.d/, the updated PATH configuration will take effect when a new terminal session is started or the system is rebooted. Existing terminal sessions won\u0026rsquo;t reflect the changes until they are restarted. ","date":"28-03-2015","objectID":"/posts/software/understanding-macos-bash-path-configuration/:1:0","tags":["mac"],"title":"Understanding macOS Bash PATH Configuration","uri":"/posts/software/understanding-macos-bash-path-configuration/#modifying-the-path-configuration"},{"categories":["Software"],"collections":null,"content":"Importance of PATH Configuration Properly configuring the PATH is important because it ensures that your system can locate and execute the commands and programs you use. 
Without the correct paths set up, the system won\u0026rsquo;t be able to find the executable files associated with the commands you enter in the terminal. By understanding and managing the PATH configuration, you can ensure that your macOS system functions smoothly and allows you to run commands from any directory without needing to specify the full file path. ","date":"28-03-2015","objectID":"/posts/software/understanding-macos-bash-path-configuration/:2:0","tags":["mac"],"title":"Understanding macOS Bash PATH Configuration","uri":"/posts/software/understanding-macos-bash-path-configuration/#importance-of-path-configuration"},{"categories":["Development"],"collections":null,"content":"In Bash, you can create aliases to simplify and automate common commands. The alias you want to create is for the php artisan command, which is often used in Laravel projects. Here are the steps to create a Bash alias for php artisan: Open Your Bash Configuration File: To create a system-wide alias that applies to all users, you should modify the /etc/bashrc or /etc/bash.bashrc file. You\u0026rsquo;ll typically need root or superuser privileges to edit these files. Use a text editor to open the file, for example: sudo nano /etc/bashrc Alternatively, you can use any text editor you prefer, such as vim, emacs, or gedit. Add the Alias: Inside the configuration file, add your alias definition. In this case, you want to create an alias called artisan for the php artisan command. Add the following line: alias artisan=\u0026#39;php artisan\u0026#39; Your /etc/bashrc file should now include this alias. Save and Exit: Save the changes you made to the configuration file and exit the text editor. Apply the Changes: To apply the changes immediately, you can either restart your terminal or run the following command: source /etc/bashrc This will reload the Bash configuration, and your artisan alias will be available for all users on the system. 
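If you only want the alias for your own account rather than system-wide, the same definition can go into ~/.bashrc instead — a minimal per-user sketch:

```shell
# Per-user alternative: append the alias to your own ~/.bashrc
# instead of the system-wide /etc/bashrc (no sudo needed).
echo "alias artisan='php artisan'" >> ~/.bashrc
# Reload the file so the alias is available in the current session:
. ~/.bashrc   # same effect as: source ~/.bashrc
```

This avoids touching system files and cannot affect other users, at the cost of having to repeat the setup per account.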
Now, any user on your system can use the artisan alias to run the php artisan command. For example: artisan migrate This will be equivalent to running: php artisan migrate Remember that modifying system-wide configuration files like /etc/bashrc should be done with caution and proper permissions. Ensure that the alias you define doesn\u0026rsquo;t conflict with existing commands or aliases. ","date":"28-03-2015","objectID":"/posts/development/creating-a-bash-alias-for-php-artisan/:0:0","tags":null,"title":"Creating A Bash Alias For Php Artisan","uri":"/posts/development/creating-a-bash-alias-for-php-artisan/#"},{"categories":["Development"],"collections":null,"content":"In this article, we will walk you through the steps to establish a free HTTPS connection between Cloudflare and Google App Engine (GAE) while making use of AJAX requests. We\u0026rsquo;ll also address the common issue of a redirect loop that can occur when enabling Cloudflare\u0026rsquo;s Full SSL. ","date":"28-03-2015","objectID":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/:0:0","tags":null,"title":"Setting up Free HTTPS Connection with Cloudflare and Google App Engine using AJAX","uri":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before you begin, make sure you have the following: A website hosted on Google App Engine. A Cloudflare account with your domain added. 
","date":"28-03-2015","objectID":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/:1:0","tags":null,"title":"Setting up Free HTTPS Connection with Cloudflare and Google App Engine using AJAX","uri":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Step 1: Force HTTPS on Cloudflare Page Rules To ensure that your website enforces HTTPS, you should set up Page Rules in Cloudflare. Follow these steps: Log in to your Cloudflare account. Go to the Page Rules section. Create two rules to force HTTPS for your domain and its subdomains: example.com/* *.example.com/* Configure these rules to always use HTTPS. ","date":"28-03-2015","objectID":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/:2:0","tags":null,"title":"Setting up Free HTTPS Connection with Cloudflare and Google App Engine using AJAX","uri":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/#step-1-force-https-on-cloudflare-page-rules"},{"categories":["Development"],"collections":null,"content":"Step 2: Disable Force HTTPS in GAE By default, Google App Engine may have settings that force HTTPS. To prevent a redirect loop, disable the force HTTPS setting in your GAE web.xml file. Here\u0026rsquo;s how: Access your Google App Engine project. Locate the web.xml file in your project\u0026rsquo;s configuration. Within the web.xml file, look for any settings related to forcing HTTPS and disable them. This ensures that Cloudflare can fetch resources using both secure and unsecured HTTP. 
","date":"28-03-2015","objectID":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/:3:0","tags":null,"title":"Setting up Free HTTPS Connection with Cloudflare and Google App Engine using AJAX","uri":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/#step-2-disable-force-https-in-gae"},{"categories":["Development"],"collections":null,"content":"Step 3: Ensure HTTPS for AJAX URLs To ensure that your AJAX requests work seamlessly over HTTPS, follow these steps: Make sure all your AJAX URLs use the HTTPS protocol. Ensure that your AJAX requests are directed to the direct *.appspot.com site. This is important because when using Cloudflare\u0026rsquo;s HTTPS, AJAX calls must be valid even when Cloudflare is serving content via HTTPS. By following these steps, you can establish a free HTTPS connection between Cloudflare and Google App Engine while ensuring that your AJAX requests function correctly over HTTPS. Remember to regularly check and update your configurations as needed to maintain a secure and smooth user experience on your website. ","date":"28-03-2015","objectID":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/:4:0","tags":null,"title":"Setting up Free HTTPS Connection with Cloudflare and Google App Engine using AJAX","uri":"/posts/development/setting-up-free-https-connection-with-cloudflare-and-google-app-engine-using-ajax/#step-3-ensure-https-for-ajax-urls"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"When working with Laravel 5 and attempting to run a database seed, you may encounter a \u0026ldquo;Seed Class not found\u0026rdquo; exception. This error typically occurs when Laravel is unable to locate the specified seed class. However, there is a straightforward solution to this problem. 
In this blog post, we will explore the steps to resolve the \u0026ldquo;Seed Class not found\u0026rdquo; exception in Laravel 5. ","date":"25-03-2015","objectID":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/:0:0","tags":["php","laravel"],"title":"Fixing Seed Class not found Exception in Laravel 5","uri":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/#"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 1: Run Composer Dump-Autoload The first step is to run the following command in your project\u0026rsquo;s root directory: composer dump-autoload The dump-autoload command regenerates the list of all classes that need to be included by Composer\u0026rsquo;s autoloader. By executing this command, you ensure that Laravel can find the required seed class. ","date":"25-03-2015","objectID":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/:1:0","tags":["php","laravel"],"title":"Fixing Seed Class not found Exception in Laravel 5","uri":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/#step-1-run-composer-dump-autoload"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 2: Verify Seed Class Namespace and File Location Ensure that your seed class is in the correct location. In Laravel 5, seed classes are stored in the database/seeds directory and, unlike later Laravel versions, they carry no namespace; they are loaded through Composer\u0026rsquo;s classmap, so the class name must match the filename exactly. For example, if your seed class is named DatabaseSeeder, it should live in database/seeds/DatabaseSeeder.php and be defined as follows: use Illuminate\\Database\\Seeder; class DatabaseSeeder extends Seeder { // ... 
} ","date":"25-03-2015","objectID":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/:2:0","tags":["php","laravel"],"title":"Fixing Seed Class not found Exception in Laravel 5","uri":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/#step-2-verify-seed-class-namespace-and-file-location"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Step 3: Run the Database Seed Command After verifying the namespace and file location, run the following command to execute the database seed: php artisan db:seed The db:seed command triggers the execution of all the seeders registered within the DatabaseSeeder class. If you have multiple seed classes, make sure they are properly registered in the run method of the DatabaseSeeder class. ","date":"25-03-2015","objectID":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/:3:0","tags":["php","laravel"],"title":"Fixing Seed Class not found Exception in Laravel 5","uri":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/#step-3-run-the-database-seed-command"},{"categories":["Development","Troubleshooting"],"collections":null,"content":"Conclusion By following the steps outlined above, you should be able to fix the \u0026ldquo;Seed Class not found\u0026rdquo; exception in Laravel 5. Remember to run composer dump-autoload to regenerate the class autoloader, verify the namespace and file location of your seed class, and execute php artisan db:seed to run the seeders. With these actions, you should be able to seed your Laravel 5 database without encountering the class not found exception. 
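Why does composer dump-autoload help here? In a stock Laravel 5 skeleton, the database directory (including database/seeds) is autoloaded via Composer's classmap, so a newly added seed class is only discovered after the classmap is regenerated. The relevant composer.json fragment (assuming a default skeleton) looks like:

```json
{
    "autoload": {
        "classmap": [
            "database"
        ]
    }
}
```

Any time you add a seed or migration class under a classmap-autoloaded directory, a fresh dump-autoload is required before Laravel can see it.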
","date":"25-03-2015","objectID":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/:4:0","tags":["php","laravel"],"title":"Fixing Seed Class not found Exception in Laravel 5","uri":"/posts/development/fixing-seed-class-not-found-exception-in-laravel-5/#conclusion"},{"categories":["Software"],"collections":null,"content":"If you\u0026rsquo;re encountering issues with your Ativ Smart PC Pro not detecting USB devices when trying to boot from them, one potential solution is to disable the Fast BIOS setting in the BIOS Advanced Menu. This guide will walk you through the steps to resolve this issue. ","date":"03-09-2014","objectID":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/:0:0","tags":["bios"],"title":"Troubleshooting Ativ Smart PC Pro USB Detection Issue When Booting to Device","uri":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/#"},{"categories":["Software"],"collections":null,"content":"Background The Fast BIOS setting, also known as Fast Boot or Quick Boot, is designed to reduce the time it takes for your computer to boot into the operating system by skipping certain hardware checks and initialization processes. While this can speed up the boot process, it may also cause compatibility issues with certain USB devices when trying to boot from them. ","date":"03-09-2014","objectID":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/:1:0","tags":["bios"],"title":"Troubleshooting Ativ Smart PC Pro USB Detection Issue When Booting to Device","uri":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/#background"},{"categories":["Software"],"collections":null,"content":"Steps to Disable Fast BIOS Access BIOS/UEFI Settings: Turn off your Ativ Smart PC Pro. Turn it back on and immediately start pressing the appropriate key to access the BIOS or UEFI settings. 
The specific key may vary depending on your computer\u0026rsquo;s manufacturer but is often one of the following: F2, F12, ESC, DEL, or F10. You may need to consult your computer\u0026rsquo;s manual or look for on-screen prompts during startup to determine the correct key. Navigate to the Advanced Menu: Once you are in the BIOS or UEFI settings, use your keyboard\u0026rsquo;s arrow keys to navigate. Look for an \u0026ldquo;Advanced\u0026rdquo; or \u0026ldquo;Boot\u0026rdquo; menu. The exact location and naming may differ between BIOS versions. Locate the Fast BIOS (Fast Boot) Setting: Within the Advanced or Boot menu, search for an option related to Fast BIOS, Fast Boot, or Quick Boot. It may be labeled differently depending on your computer\u0026rsquo;s BIOS version. Disable Fast BIOS: Select the Fast BIOS setting and change it from \u0026ldquo;Enabled\u0026rdquo; to \u0026ldquo;Disabled\u0026rdquo; using the appropriate keyboard keys. Usually, you\u0026rsquo;ll need to press the \u0026ldquo;+\u0026rdquo; or \u0026ldquo;-\u0026rdquo; keys to toggle between options. Save and Exit: After disabling Fast BIOS, locate the option to save your changes and exit the BIOS/UEFI settings. This is often labeled as \u0026ldquo;Save and Exit\u0026rdquo; or \u0026ldquo;Exit and Save Changes.\u0026rdquo; Reboot Your Computer: Let your computer restart and try booting from the USB device again. It should now detect the USB device and allow you to proceed with your desired boot operation. 
","date":"03-09-2014","objectID":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/:2:0","tags":["bios"],"title":"Troubleshooting Ativ Smart PC Pro USB Detection Issue When Booting to Device","uri":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/#steps-to-disable-fast-bios"},{"categories":["Software"],"collections":null,"content":"Conclusion Disabling the Fast BIOS setting in your Ativ Smart PC Pro\u0026rsquo;s BIOS/UEFI settings should resolve the issue of USB devices not being detected when attempting to boot from them. By doing so, you ensure that the necessary hardware checks are performed during boot, allowing your computer to recognize and boot from the USB device successfully. If you continue to encounter problems or if this solution does not work, consider checking for BIOS updates or consulting the manufacturer\u0026rsquo;s support resources for further assistance. ","date":"03-09-2014","objectID":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/:3:0","tags":["bios"],"title":"Troubleshooting Ativ Smart PC Pro USB Detection Issue When Booting to Device","uri":"/posts/software/troubleshooting-ativ-smart-pc-pro-usb-detection-issue-when-booting-to-device/#conclusion"},{"categories":["Development"],"collections":null,"content":"To build an Android Jar library, you can follow these steps: Create a build.xml file with the following content: \u0026lt;project name=\u0026#34;AndroidUtils\u0026#34; default=\u0026#34;dist\u0026#34; basedir=\u0026#34;.\u0026#34;\u0026gt; \u0026lt;description\u0026gt;Android Sample Library\u0026lt;/description\u0026gt; \u0026lt;!-- Setting global properties for this build --\u0026gt; \u0026lt;property name=\u0026#34;src\u0026#34; location=\u0026#34;src\u0026#34; /\u0026gt; \u0026lt;property name=\u0026#34;bin\u0026#34; location=\u0026#34;bin\u0026#34; /\u0026gt; \u0026lt;target 
name=\u0026#34;dist\u0026#34;\u0026gt; \u0026lt;jar destfile=\u0026#34;android-utilities-v1.jar\u0026#34; basedir=\u0026#34;bin/classes\u0026#34;\u0026gt; \u0026lt;!-- Use ** to include the directory recursively --\u0026gt; \u0026lt;include name=\u0026#34;android/**\u0026#34; /\u0026gt; \u0026lt;exclude name=\u0026#34;android/utilities/v1/R.class\u0026#34; /\u0026gt; \u0026lt;exclude name=\u0026#34;android/utilities/v1/R$*.class\u0026#34; /\u0026gt; \u0026lt;/jar\u0026gt; \u0026lt;/target\u0026gt; \u0026lt;/project\u0026gt; Save the build.xml file in the root directory of your Android library project. Install Apache Ant on your system if you haven\u0026rsquo;t already. Ant is a build tool that can compile and package your Java code into a Jar file. You can download Ant from the Apache Ant website. Open a command prompt or terminal and navigate to the root directory of your Android library project. Run the following command to build the Jar file using Ant: ant dist After the build process completes successfully, you will find the generated Jar file named android-utilities-v1.jar in the bin directory of your project. ","date":"29-11-2012","objectID":"/posts/development/how-to-build-an-android-jar-library/:0:0","tags":["android"],"title":"How to Build an Android Jar Library","uri":"/posts/development/how-to-build-an-android-jar-library/#"},{"categories":["Development"],"collections":null,"content":"Obfuscating an Android Jar If you want to obfuscate your Android Jar to protect your code and reduce its size, you can use ProGuard, which is a popular Java optimization and obfuscation tool. Follow these steps to obfuscate your Android Jar: Download ProGuard from the official ProGuard website. Extract the downloaded ProGuard package to a directory on your system. Open the ProGuard GUI by executing the appropriate command based on your operating system: Windows: Run proguardgui.bat. Linux/macOS: Run ./proguardgui.sh. 
In the ProGuard GUI, go to the \u0026ldquo;Input/Output\u0026rdquo; tab. Add your input Jar file (e.g., android-utilities-v1.jar) to the \u0026ldquo;Input\u0026rdquo; section. Specify the output Jar file (e.g., android-utilities-v1-obfuscated.jar) in the \u0026ldquo;Output\u0026rdquo; section. In the \u0026ldquo;Library\u0026rdquo; section, add android.jar. This supplies the Android framework classes that ProGuard needs to resolve references during obfuscation. Switch to the \u0026ldquo;Shrinking\u0026rdquo; tab. In the \u0026ldquo;Keep\u0026rdquo; section, uncheck \u0026ldquo;Application\u0026rdquo; and check \u0026ldquo;Library\u0026rdquo;. This configuration ensures that only the necessary code is retained and the rest can be obfuscated. Click the \u0026ldquo;Process\u0026rdquo; button to start the obfuscation process. After the process completes, you will find the obfuscated Jar file (e.g., android-utilities-v1-obfuscated.jar) in the specified output location. That\u0026rsquo;s it! You have successfully built an Android Jar library and obfuscated it using ProGuard. ","date":"29-11-2012","objectID":"/posts/development/how-to-build-an-android-jar-library/:1:0","tags":["android"],"title":"How to Build an Android Jar Library","uri":"/posts/development/how-to-build-an-android-jar-library/#obfuscating-an-android-jar"},{"categories":["Development"],"collections":null,"content":"When merging Git branches, it is sometimes desirable to combine the changes from one branch into another without preserving the commit history of the merged branch. Additionally, conflicts may arise during the merge process that need to be resolved using the \u0026ldquo;theirs\u0026rdquo; strategy, which means accepting the changes from the branch being merged in. 
Here\u0026rsquo;s a step-by-step guide on how to perform such a merge: ","date":"29-11-2012","objectID":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/:0:0","tags":["git"],"title":"Merging Git Without History and Resolving Conflicts Using Theirs","uri":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/#"},{"categories":["Development"],"collections":null,"content":"Step 1: Perform the Merge with Squash To merge the changes from one branch into another without preserving the commit history, you can use the --squash option with the git merge command. The --squash option condenses all the commits from the merged branch into a single commit in the target branch. git merge --squash \u0026lt;Branch Name\u0026gt; Replace \u0026lt;Branch Name\u0026gt; with the name of the branch you want to merge into the current branch. ","date":"29-11-2012","objectID":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/:1:0","tags":["git"],"title":"Merging Git Without History and Resolving Conflicts Using Theirs","uri":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/#step-1-perform-the-merge-with-squash"},{"categories":["Development"],"collections":null,"content":"Step 2: Resolve Conflicts with \u0026ldquo;Theirs\u0026rdquo; During the merge process, conflicts may arise if the same lines of code were modified in both branches. To resolve conflicts using the \u0026ldquo;theirs\u0026rdquo; strategy, which means accepting the changes from the branch being merged in, you can use the -Xtheirs option with the git merge command. git merge --squash \u0026lt;Branch Name\u0026gt; -Xtheirs Adding the -Xtheirs option ensures that, in case of conflicts, Git automatically resolves them by accepting the changes from the branch being merged in. 
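Since --squash only stages the result, a final git commit is still required to record it. As a self-contained sketch (throwaway repository; branch and file names are illustrative), the whole flow looks like this:

```shell
# Throwaway demo: squash-merge a branch without its history,
# resolving conflicting lines in favour of the merged-in branch.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt
git commit -q -m "base"

git checkout -q -b feature            # the branch to merge in
echo theirs > file.txt
git commit -q -am "feature change"

git checkout -q -                     # back to the original branch
echo ours > file.txt
git commit -q -am "local change"      # now both branches changed file.txt

git merge --squash -Xtheirs feature   # stage feature, keep its version on conflict
git commit -q -m "Merge feature (squashed, no history)"
cat file.txt                          # prints: theirs
```

Because the feature commits are squashed, git log on the target branch shows only the single squash commit, not the history of the merged branch.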
","date":"29-11-2012","objectID":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/:2:0","tags":["git"],"title":"Merging Git Without History and Resolving Conflicts Using Theirs","uri":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/#step-2-resolve-conflicts-with-theirs"},{"categories":["Development"],"collections":null,"content":"Step 3: Remove Deleted Files If any files were deleted in the branch being merged in and you want to remove them from the current branch as well, you can use the git rm command. Replace {DELETED-FILE-NAME} with the name of the file that was deleted. git rm {DELETED-FILE-NAME} By removing the deleted files, you ensure that they do not appear in the final merged commit. By following the above steps, you can merge Git branches without preserving the commit history and resolve conflicts using the \u0026ldquo;theirs\u0026rdquo; strategy. Remember to replace \u0026lt;Branch Name\u0026gt; with the actual branch name and {DELETED-FILE-NAME} with the name of any deleted files, as applicable. Please note that squashing and discarding commit history should be done with caution, as it eliminates valuable information about the development process. ","date":"29-11-2012","objectID":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/:3:0","tags":["git"],"title":"Merging Git Without History and Resolving Conflicts Using Theirs","uri":"/posts/development/merging-git-without-history-and-resolving-conflicts-using-theirs/#step-3-remove-deleted-files"},{"categories":["Development"],"collections":null,"content":"When working with the Android emulator, you might need to access the localhost running on your host computer from within the emulator. The default IP address to reach the host computer from the emulator is 10.0.2.2. In this article, we\u0026rsquo;ll explore how to access localhost on the Android emulator using this IP address. 
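As a quick host-side sanity check (a sketch assuming Python 3 is available and port 8080 is free), you can serve a throwaway directory and confirm it answers on localhost; from inside the emulator the same server is then reached at http://10.0.2.2:8080 instead:

```shell
# Serve an empty throwaway directory on the host (port 8080 is arbitrary).
dir=$(mktemp -d)
cd "$dir"
python3 -m http.server 8080 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# From the host this is http://localhost:8080; from inside the emulator
# the same server is reachable at http://10.0.2.2:8080.
status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/)
echo "$status"        # 200 when the server came up on a free port
kill "$server_pid"
```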
","date":"29-11-2012","objectID":"/posts/development/accessing-localhost-on-android-emulator/:0:0","tags":["android"],"title":"Accessing Localhost on Android Emulator","uri":"/posts/development/accessing-localhost-on-android-emulator/#"},{"categories":["Development"],"collections":null,"content":"Prerequisites Before proceeding, ensure that you have the following: Android Studio or the Android SDK installed on your development machine. An Android emulator running on your machine. A web server or any application running on localhost that you want to access from the emulator. ","date":"29-11-2012","objectID":"/posts/development/accessing-localhost-on-android-emulator/:1:0","tags":["android"],"title":"Accessing Localhost on Android Emulator","uri":"/posts/development/accessing-localhost-on-android-emulator/#prerequisites"},{"categories":["Development"],"collections":null,"content":"Steps to Access Localhost Determine the address of your host computer: The emulator maps the fixed address 10.0.2.2 to your host computer\u0026rsquo;s loopback interface (127.0.0.1), so no lookup is needed to reach localhost. If you instead want the host\u0026rsquo;s LAN address, run ipconfig (Windows) or ifconfig (macOS/Linux) and note the network adapter\u0026rsquo;s IP address. Start your Android emulator: Launch the Android emulator from Android Studio or using the command line. Wait for the emulator to fully load and become operational. Access localhost from the emulator: Within the emulator, open a web browser or any application that needs to access localhost. Instead of using localhost or 127.0.0.1, enter 10.0.2.2 in the address bar or relevant configuration field. For example, if your web server is running on port 8080, you can access it from the emulator by entering http://10.0.2.2:8080 in the browser\u0026rsquo;s address bar. 
Verify the connection: If everything is set up correctly, you should be able to access the content or services hosted on your localhost from the Android emulator. Ensure that the desired web page or application functions as expected. ","date":"29-11-2012","objectID":"/posts/development/accessing-localhost-on-android-emulator/:2:0","tags":["android"],"title":"Accessing Localhost on Android Emulator","uri":"/posts/development/accessing-localhost-on-android-emulator/#steps-to-access-localhost"},{"categories":["Development"],"collections":null,"content":"Conclusion By utilizing the default IP address 10.0.2.2, it is possible to access the localhost running on your host computer from within the Android emulator. This capability is helpful for testing and debugging purposes, allowing you to interact with web servers and other applications seamlessly. ","date":"29-11-2012","objectID":"/posts/development/accessing-localhost-on-android-emulator/:3:0","tags":["android"],"title":"Accessing Localhost on Android Emulator","uri":"/posts/development/accessing-localhost-on-android-emulator/#conclusion"},{"categories":["Development"],"collections":null,"content":"When developing Android applications, it is often necessary to work with databases and persistence data. Android provides a powerful database management system called SQLite, which allows developers to store and retrieve structured data efficiently. In addition, Android applications may also use XML files to store configuration data or other types of persistent information. To facilitate the development process and enable easy access to these databases and XML files, a plugin called CellObject SQLite \u0026amp; XML Reader is available for the Eclipse Integrated Development Environment (IDE). This plugin enhances the capabilities of Eclipse by providing features for reading and manipulating SQLite databases and XML files directly within the IDE. 
In this article, we will explore the features of the CellObject SQLite \u0026amp; XML Reader plugin and discuss how it can be used to interact with Android emulator databases and persistence data. We will also provide examples and demonstrations to help you understand the plugin\u0026rsquo;s functionality. ","date":"29-11-2012","objectID":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/:0:0","tags":["android","sql"],"title":"Android Emulator Database and Persistence Data: CellObject SQLite \u0026 XML Reader Plugin for Eclipse","uri":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/#"},{"categories":["Development"],"collections":null,"content":"Installing the CellObject SQLite \u0026amp; XML Reader Plugin Before we dive into the details of using the CellObject SQLite \u0026amp; XML Reader plugin, let\u0026rsquo;s first go through the installation process. Follow the steps below to install the plugin in your Eclipse IDE: Launch Eclipse and go to the Help menu. Select Eclipse Marketplace from the dropdown menu. In the search bar, type \u0026ldquo;CellObject SQLite \u0026amp; XML Reader\u0026rdquo; and press Enter. Locate the plugin in the search results and click the Go to Marketplace button. On the plugin\u0026rsquo;s marketplace page, click the Install button. Follow the on-screen instructions to complete the installation process. Restart Eclipse to apply the changes. Once the plugin is successfully installed, you can proceed to use its features for working with Android emulator databases and persistence data. 
","date":"29-11-2012","objectID":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/:1:0","tags":["android","sql"],"title":"Android Emulator Database and Persistence Data: CellObject SQLite \u0026 XML Reader Plugin for Eclipse","uri":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/#installing-the-cellobject-sqlite--xml-reader-plugin"},{"categories":["Development"],"collections":null,"content":"Reading SQLite Databases with CellObject SQLite \u0026amp; XML Reader The CellObject SQLite \u0026amp; XML Reader plugin provides a user-friendly interface for browsing and querying SQLite databases within Eclipse. Follow the steps below to read an SQLite database using the plugin: Open Eclipse and navigate to the Database Explorer perspective. If the perspective is not visible, you can enable it by going to Window \u0026gt; Perspective \u0026gt; Open Perspective \u0026gt; Other and selecting Database Explorer. In the Database Connections view, right-click and select New \u0026gt; Connection Profile. Choose the appropriate database connection type (e.g., SQLite) and provide the necessary details, such as the database file path. Click Finish to create the connection profile. Expand the connection profile in the Database Connections view to reveal the connected database. Right-click on the database and select Connect to establish a connection. Once connected, you can browse the tables, execute SQL queries, and view the results within Eclipse. The CellObject SQLite \u0026amp; XML Reader plugin also offers additional features, such as exporting query results as CSV files, importing data into the database, and executing batch scripts. These features can greatly streamline the process of working with SQLite databases during Android application development. 
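Outside Eclipse, the sqlite3 command-line shell is a handy stand-in for the same browsing and querying workflow; this sketch (assuming the sqlite3 CLI is installed; the notes table is illustrative) creates a throwaway database and queries it:

```shell
# Create and query a throwaway SQLite database with the sqlite3 CLI.
db=$(mktemp -u).db
sqlite3 "$db" <<'SQL'
CREATE TABLE notes (id INTEGER PRIMARY KEY, title TEXT);
INSERT INTO notes (title) VALUES ('first note'), ('second note');
SQL
sqlite3 "$db" ".tables"                       # lists the notes table
sqlite3 "$db" "SELECT count(*) FROM notes;"   # prints: 2
```

The same SQL can be pasted into the plugin SQL editor once a connection profile to the database file is established.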
","date":"29-11-2012","objectID":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/:2:0","tags":["android","sql"],"title":"Android Emulator Database and Persistence Data: CellObject SQLite \u0026 XML Reader Plugin for Eclipse","uri":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/#reading-sqlite-databases-with-cellobject-sqlite--xml-reader"},{"categories":["Development"],"collections":null,"content":"Working with XML Files using CellObject SQLite \u0026amp; XML Reader In addition to SQLite databases, the CellObject SQLite \u0026amp; XML Reader plugin allows developers to read and manipulate XML files within Eclipse. This can be particularly useful when dealing with persistent XML-based configurations or data. To read an XML file using the plugin, follow these steps: Open Eclipse and navigate to the XML Explorer perspective. If the perspective is not visible, you can enable it by going to Window \u0026gt; Perspective \u0026gt; Open Perspective \u0026gt; Other and selecting XML Explorer. In the XML Files view, right-click and select New \u0026gt; XML File ","date":"29-11-2012","objectID":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/:3:0","tags":["android","sql"],"title":"Android Emulator Database and Persistence Data: CellObject SQLite \u0026 XML Reader Plugin for Eclipse","uri":"/posts/development/android-emulator-database-and-persistence-data-cellobject-sqlite-xml-reader-plugin-for-eclipse/#working-with-xml-files-using-cellobject-sqlite--xml-reader"}]