Sunday, May 8, 2016

Be the master: Creating a sudo look-alike command for Windows

It's been quite some time since I wrote a blog post about Windows or related technologies. Though most of my time these days is spent on Linux, Fedora to be specific, it's not that I have joined the Windows haters club. I love technology in all shapes and sizes, platform no bar.

But being a Linux person at heart now, whenever I pay a visit to my Windows 10 Pro installation, some things just hit me hard in the face. There are commands in Linux that don't exist in Windows, and which I rely on so much in my day-to-day work that I find it hard to live without them. Of course, some of these, like vim, emacs, nano etc., can be installed, and with the Cygwin environment plus talk of Ubuntu on Windows, we will have enough of Linux within Windows pretty soon. But still, Windows is in itself a quite powerful OS, and many Unix-like functionalities are hidden beneath its surface (that actually is the problem: hidden beneath, out of the normal user's reach).

Anyway, one such important command that I miss is sudo. Sudo is the command used in Linux to temporarily elevate the authorization level of a normal user to administrator for running a command. An equivalent command exists in Windows (well, kind of) which can do the same task but has a somewhat cryptic syntax. That command is runas, which allows an executable to run with different user credentials. The below syntax makes this command work almost like sudo:

runas /user:Administrator /savecred /env "%*"

This runs the command passed to it with administrative privileges while preserving the current user's environment variables. It also saves the credentials, so the command will not ask for the admin password again until the command window is closed. If you want to run the command with the Administrator user's own environment instead (i.e. like the "su -" command on Linux), just remove the /env flag. The %* at the end passes all the parameters given to sudo.bat on to runas.
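Putting it together, the whole sudo.bat is only a few lines. This is a sketch: the @echo off line and the comments are my additions, the runas line is the one from above.

```bat
@echo off
rem sudo.bat - run the given command as Administrator while keeping
rem the current user's environment. Requires the built-in
rem Administrator account to be enabled and to have a password set.
runas /user:Administrator /savecred /env "%*"
```

Once it's in place, something like `sudo ipconfig /flushdns` should prompt for the Administrator password once and then run elevated.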

To make it easier to use, copy and paste this command into Notepad and save it as sudo.bat or sudo.cmd somewhere. Then, using File Explorer, move it to your %WINDIR% (usually C:\Windows) directory. On Windows 10, there is one additional step: the Administrator account is disabled by default, and for this command to work, you need it enabled. In order to do that:
  • Go to Start>Run. Type lusrmgr.msc and press enter to run Local Users and Groups management console.
  • Go to Users. Right Click Administrator and click properties. Clear the checkbox "Account is disabled". Click OK to close the properties dialog.
  • Again right click Administrator > Set Password... and set a password for the user. Please use a strong password, otherwise the security of your system can be compromised. This is a no-questions-asked kind of admin account, so handle it with care.

After this, you can run sudo <command> from any cmd or PowerShell window to elevate the privileges of the command you are running.

Happy computing!

Wednesday, June 3, 2015

Solution: Distorted / Garbled Graphics in Fedora 22 with Pre-SandyBridge Intel Graphics Chips

Fedora 22 was released recently, on 26th May 2015, and continuing the trend since version 20 (at least for me), it has been a rock solid release for the most part. It features Linux kernel 4.0, Gnome 3.16, KDE Frameworks 5.3 with Plasma workspaces, XFCE 4.12, MATE 1.10 etc., along with all the usual developer goodies, like the latest versions of compilers/interpreters including GCC 5, Python 2 and 3, OpenJDK 8 etc. The default package manager has been changed from Yum to DNF in this release, and it is extremely fast at dependency resolution because of the underlying libsolv library from the openSUSE gang. I've been using this release in VirtualBox since its Alpha came out, and on bare metal since RC1, as is my usual practice.

Some folks at Fedora Forum, which by the way is a great community for Fedora users, enthusiasts, evangelists and developers alike, have reported garbled / distorted graphics on some of the Intel integrated graphics chips. On analyzing further, I noticed that these chips are mostly from the Intel 4 series of chipsets (for example, the G41 chipset with GMA X4500 graphics) or, in other words, chipsets from the pre-Sandy Bridge era. Have a look at the screenshot below to understand what I am talking about:

Now this is pretty bad, almost unusable, and I could sense the agony of the users upgrading or trying to switch to Fedora 22. So I thought maybe I could be of some help and researched it a little, for the sake of the great Fedora community's growth and well-being. It has done so much for me and I would be glad if I could give anything back at all.

My first suspect was the Intel driver itself, xorg-x11-drv-intel, but the strange fact was that the same version of the driver and the same version of the kernel were working just fine on Fedora 21 on these machines. So this issue is confined to Fedora 22 and its changes, that's for sure. I think it could be the result of one of two changes introduced in Fedora 22:
  1. As part of the changes for Wayland and other stuff, the X server now runs in a rootless mode. This can have implications for applications that access the graphics stack at lower levels.
  2. If I am not wrong, GCC 5 introduced ABI (Application Binary Interface) changes in its C++ standard library (the new C++11-conforming std::string and std::list), and many components, including the Intel driver itself, have been recompiled with GCC 5 to maintain binary compatibility. So even though the version of the driver is the same, it's not binary compatible with the driver present in Fedora 21.
Anyway, this is something still to be discussed with the Fedora/Intel teams, but for now there is a solution you can use. I am not sure if it can be used with Live CDs written to optical disks, but I think it can be used on Live CDs written to USB drives and on installed systems.

Create a new text file called /etc/X11/xorg.conf.d/20-intel.conf with your favorite text editor (or edit it if it is already present) and enter the text below in it:
Section "Device"
        Identifier  "card0"
        Driver      "intel"
        Option      "AccelMethod"  "uxa"
EndSection
If a Device section for the Intel card is already present, just add the Option "AccelMethod" "uxa" line to it. Save the file and reboot. Please note that since this file resides in the /etc folder and not your home directory, you will need root / sudo access to create or edit it.
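If you prefer doing this from a terminal, the snippet can be staged in a temporary file and then copied into place. A sketch; the final copy needs root privileges:

```shell
# Stage the xorg.conf.d snippet in a temp file first
conf="$(mktemp)"
cat > "$conf" <<'EOF'
Section "Device"
        Identifier  "card0"
        Driver      "intel"
        Option      "AccelMethod"  "uxa"
EndSection
EOF
# Sanity check: the option line made it in (prints 1)
grep -c 'AccelMethod' "$conf"
# Then, with root privileges:
# sudo cp "$conf" /etc/X11/xorg.conf.d/20-intel.conf
```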

At least five users at Fedora Forum have confirmed that this "fix" resolved their issue. What it essentially does is ask the Intel driver to use UXA (Unified Acceleration Architecture) for graphics acceleration instead of the SNA (Sandybridge New Acceleration) architecture. SNA is the new and preferred way of doing accelerated graphics on Intel chips and it's supposed to be backward compatible with older cards as well (and it was, till F21); however, some regression was introduced in Fedora 22, probably because of the changes mentioned above.

Here is the link to the original forum thread, for those who want to check out the whole discussion that took place:

So that's that, all's well and no harm done. Enjoy the awesome Fedora 22 release and do upgrade your hardware whenever you can :)


Saturday, December 27, 2014

Fedora 21 Release Party in Mumbai - A gathering of like minded FOSS evangelists

 Fedora Project

It was a fine Sunday afternoon on 21st December when I, along with some of the brightest minds from the FOSS community of Mumbai, gathered at the Homi Bhabha Institute of Science Education to celebrate the release of the Fedora 21 operating system, one of the most respected and loved open source projects for its true support of and adherence to the FOSS philosophy.

Fedora 21 is the first release of the OS that is part of the Fedora.Next initiative, which fundamentally changes the way the OS is packaged and released. In its 11-year history, Fedora had been shipped as a monolithic OS image catering to all of its users without focusing on any particular use-case scenarios. That changed with Fedora 21. The distribution now produces three distinct products based on its target audience. These are:
  1. Fedora Workstation: The product featuring the Gnome 3 environment, focusing on desktop and workstation users, including developers and designers as well as casual users who use computers for learning, work and fun.
  2. Fedora Server: Fedora Server focuses on, well, server use cases. This product basically has the capabilities to set up any kind of server scenario, be it a DNS server, file server, FTP server or mail server. It doesn't feature a desktop environment by default, because who needs that on a server anyway? Cockpit and FreeIPA, two fabulous tools for server and identity management that I simply love, are included by default. This product does a good job of reducing the attack surface (believe me, even on Linux we have such things :) ) by adhering to a less-by-default philosophy and allowing admins to set up specific functionality by choice, during setup as well as afterwards.
  3. Fedora Cloud: My understanding of cloud environments is limited; however, I can tell you that Fedora Cloud is an image of the Fedora OS that is ready to be deployed directly to many of the popular cloud service providers, including Amazon, OpenStack etc., and has support for Docker and Project Atomic Host. The kernel packages in Fedora 21 have been separated into core and modules, and since the cloud image never runs on bare metal, the modules part is not included, keeping the image size ideally small.
  4. The Spins: Yes, yes, I said three products, but I will give you one more anyway :) Although not officially publicized that much, the Fedora project also features Spins, DVD images specifically tailored for various desktop environments other than Gnome. The list of spins includes the KDE spin, MATE spin, XFCE spin, LXDE spin and Sugar Desktop Environment spin, to name a few. So you are not locked into Gnome; go have fun with whatever you like. It's an open source OS, remember?

The Speakers

The event featured some brilliant speakers who gave powerful informative sessions throughout the day. The list includes:
  • Pravin Satpute: Pravin opened the event with a warm welcome and told us about the history of Fedora, how it all started at Red Hat, and the Fedora.Next initiative. It was a nice ride down memory lane for us oldies and a nice primer on the "Fedora Way" for newcomers.
  • Anish Patil: Anish's talk focused on the new features in Fedora Workstation product including the shiny new Gnome 3.14, new applications introduced in it, the powerful developer-centric features like dev assist and the awesome Gnome-Software application management tool. He also introduced new people to the Gnome UX and cleared some misconceptions or “Myths” that are common about Fedora. We were all impressed!
  • Praveen Kumar: One of my favorites, Praveen took the stage next and satisfied my thirst for information about cloud computing and how Fedora Cloud product fits in. He described the basic functioning of a cloud infrastructure as well as Project Atomic and OpenStack integration in Fedora. I was jumping with joy when he described the Docker-IO feature integration and Kubernetes and how it can help application programmers in deploying and delivering their solutions effectively. Awesome!
  • Rahul Bhalerao: Rahul is a really nice, soft-spoken, mild guy, well suited for the next item on the agenda: how to contribute to the Fedora Project. This topic was people-centric and required him to motivate people enough to contribute to the project in their spare time, for free. Well, they do get other people's work in Fedora in return, so I will not say it's free; it's actually a giving-back-to-the-community program. Needless to say, Rahul's talk was very effective and the audience was quite excited in the end. He also walked through the step-by-step method of getting involved and the dos and don'ts of working on a community-based project.

Post the talks, we had an open session where we poured our hearts out and had a healthy, knowledge-packed discussion with all the attendees. The event was also graced by the presence of Professor Nagarjun G of the Tata Institute of Fundamental Research, who is a well-known figure in the open source world and a contributor to, and the magic mind behind, many great projects. Professor Nagarjun gave us a great deal of knowledge about the FOSS philosophy and the difference between FOSS and OSS, and discussed in detail the problems of monolithic, centralized projects owned by single companies and how distributed FOSS projects can help. It was a wonderful experience meeting this brilliant mind :)

Well, yes, there were tasty snacks in between, with tea sessions, and I think everybody had fun at that, along with some college students who, I guess, were not there for the sessions :D Everybody was heartily welcome!

This awesome cake was cut to give this event a true celebration mood, and believe me guys it was TASTY! I would just say this, if you like FOSS, don't miss these events ever, even if you are in just for the cake. It's worth it!

Here we are, all happy and cheerful folks after the event! And why not, apart from knowledge and tasty food, we got DVDs of Workstation and Server, awesome stickers and some other goodies as well.


Wednesday, December 4, 2013

When it comes to MTP, Dolphin is a bitch! Konqueror to the rescue

KDE is a great desktop environment and I have used it every day since I was introduced to it in 2001. It's free, it's open source, it's highly configurable and it's really fun to use. Since version 4.10, KDE has had built-in support for the MTP protocol, which most smartphones on the market nowadays use for connecting to a PC. Mass Storage has become a thing of the past, with MTP offering easier access, built-in support in Windows and Linux, and transfer speeds comparable to mass storage mode.

KDE on Linux offers MTP support using its robust, feature-rich and pluggable I/O layer, KIO. It uses a kio-slave called kio-mtp, which is now pretty standard in almost all current distributions. kio-mtp is supposed to provide complete and seamless integration of the MTP protocol in KDE with whatever file management tool you use. Dolphin is the default file manager these days on all distributions and it's a great piece of software for the most part.

Until it comes to MTP. Unfortunately, in my experience, the MTP implementation in Dolphin is broken and has a multitude of issues. I mostly use distributions based on Ubuntu (Kubuntu, Linux Mint etc.), so this holds true only for those; I have no idea about distros like Fedora and openSUSE. Some of the issues that I have faced are:

  • Not able to connect at all.
  • Can connect but waiting forever to show the contents of the device.
  • While copying files getting stuck for a long time and then giving error 'MTP Protocol has died unexpectedly'.
  • and some more...
If you go searching, you will find a lot of posts of people complaining about these issues, and the suggestions given to them mostly include using a different version of libmtp or using a different access driver altogether. But it seems there is no luck if you want to use the plain KIO-based integration.

Then today, I accidentally stumbled upon Konqueror and, just out of curiosity, switched it to the File Manager profile. I tried accessing my MTP devices and voila! Everything worked as it should. I am able to copy to and from the device, both single files/folders and bunches of them.

So I think the issue lies in the way Dolphin accesses MTP devices and not in the KIO layer itself. I have marked this as a solution for the time being for myself. Just create a shortcut for Konqueror with the command line:

konqueror ~

You can replace ~ with any other path. Use this shortcut whenever you want to access MTP devices. If you would like to set Konqueror as your default file manager, you can do so from System Settings > Default Applications > File Manager, choosing Konqueror in the list.
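If you want a dedicated launcher rather than typing the command, a minimal desktop entry works too. This is an illustrative sketch: the file name, Name and Icon values are my own choices, and the Exec line is the same command as above.

```
# ~/.local/share/applications/konqueror-files.desktop
[Desktop Entry]
Type=Application
Name=Konqueror (File Manager)
Exec=konqueror ~
Icon=konqueror
```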

One last thing to keep in mind for Android device owners: The KIO based MTP implementation is a bit quirky about USB Debugging so turn that option off on your device when you are connecting for file management tasks.

Let me know in comments whether these worked out for you or not.


Sunday, November 24, 2013

Solution to Linux NTFS performance woes: Bad performance / 100% CPU usage when using VirtualBox / VMWare and in general

Hey everybody!

On my dual boot system, apart from the system partition for Windows and the root+swap partitions for Linux, I have one large partition of about 400 GB dedicated to storing my data, which includes my shared Thunderbird email data folders, my documents, my videos, my music and what not. For obvious reasons, this partition is formatted with the NTFS file system, for easy data sharing between Windows and Linux (BTW, if you really wanna know, I use Windows 8.1 Pro and Kubuntu 13.10, Kubuntu being my primary OS as of now :-) ). I was quite happy with my setup except for two very annoying problems (which used to happen in Linux Mint too, which I was using previously):

1. The Thunderbird installation used to lag too much on Linux, and sometimes on Windows too, while starting up, reading multiple mails or accessing folders.

2. While using any VirtualBox or VMWare VM under Linux, I was getting pathetic performance and my host system used to hang a lot while reading or writing files inside the VMs. In the system monitor I could see a lot of processor time going to the process mount.ntfs. Also, in VirtualBox, whenever I tried merging a snapshot with the base image, the process used to hang and never complete.

High CPU usage by mount.ntfs

The Thunderbird problem was not that severe, so my attention was solely on the VM problem, and for some time I thought this might be an issue with VMWare and VirtualBox. But even after upgrading both pieces of software to their newest versions multiple times, the problem never went away. And besides, on giving it a well-deserved thought, I realized that the process mount.ntfs is not specific to VirtualBox or VMWare.

So basically, this seemed to me to be an issue with the file system driver itself, namely NTFS-3G. I searched the net a lot for a solution but didn't find any; there were only the same questions that I was also asking. Frustrated, I decided to look into the official specifications and FAQ section of the driver developer's website, and voila! I found the answers to all my issues with the NTFS file system on Linux. Below are the exact steps with which you too can get great performance on NTFS drives under Linux:

1. Keep it Uncompressed! Period.

NTFS is a closed source file system and the NTFS-3G driver was created using some very sophisticated reverse-engineering techniques. All the code revisions over the past few years have made it speedy and largely bug-free, but there are still some grey areas where it cannot compete with the native driver as far as performance is concerned (and it is not expected to; remember, NTFS is not a preferred FS under Linux, it's there for compatibility with the Windows world).

Transparent compression is one such feature. Under Windows, you can compress a particular folder or even a whole drive using the native file system compression feature and it works great. The files are compressed and decompressed on the fly when you use them and you don't notice a thing. Performance optimizations have also been done by Microsoft to make it work seamlessly. But when working in Linux, all is not so hunky dory. While decompressing and compressing files, the NTFS-3G driver takes way too much CPU power and, being a file system driver with more privileges, it hogs the system resources like a monster, uninterrupted for the most part. So the basic thing you can do to get about 10 times better performance is to decompress the drives that you share between Windows and Linux. To do that, just right click the drive in Windows Explorer, select Properties and uncheck the option "Compress drive to save disk space", then click Apply. In the next dialog that appears, choose "Apply changes to :\, subfolders and files" and click OK.

Remove drive NTFS compression under Windows

If you have lots of files on the drive, this process can take some time, so have some tea and snacks. This procedure decompresses the whole drive. If you don't want that, then at least decompress the performance-critical folders on your drive, like the ones where you keep your VM virtual hard disks (VDI, VMDK or VHD files). You can do that by right clicking the folder, clicking the Advanced... button and unchecking the checkbox that says "Compress contents to save disk space". This will improve performance a lot and you will notice the difference immediately, as soon as you boot into Linux.

Remove NTFS folder compression under Windows

2. Enable Big Write Mode and Disable Last Access Timestamp Updation:

The NTFS-3G driver supports a flag called big_writes, used while configuring your file system in /etc/fstab or while mounting with the mount command. What this essentially does is instruct the driver to write data to disk in larger chunks instead of on every single write instruction it receives. This helps a lot with throughput while writing/copying/moving large files and is in general good for small files too.

Similarly, NTFS has a feature of recording the last access time of a file, and this is done every time the file is accessed, which adds to the total time it takes to read from or write to the file. This can be safely turned off without causing any harm to the data.

To configure these options, below are the settings I use in my /etc/fstab file. You can use the same flags as in the screenshot; the other details will vary from system to system depending on how many drives you have and how you configure them. Basically, the highlighted items are the ones you wanna change in your config.

Configure big_writes and noatime mode in /etc/fstab
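For reference, an fstab entry with these flags looks roughly like this. The UUID and mount point below are placeholders, not values from my system; only the options field matters here:

```
# /etc/fstab - illustrative entry; replace the UUID and mount point with your own
UUID=XXXXXXXXXXXXXXXX  /mnt/data  ntfs-3g  defaults,big_writes,noatime  0  0

# Equivalent one-off mount command (device and mount point are placeholders too):
#   sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdXn /mnt/data
```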

3. Disable mlocate/locate indexing of NTFS drives

mlocate, or locate, is a standard program under Linux which can be used to quickly search the file system for a file or directory. It uses a high performance index of the file system, generated and updated every day by the updatedb command. Usually, this is a scheduled activity by default on most systems.

The updatedb utility has some issues with the NTFS file system: even if a single file or folder is changed on the file system, it considers all files and folders as changed and re-indexes everything on the drive. This obviously takes CPU resources, and if the drive is compressed, the situation becomes even more problematic because of the high CPU utilization of the compression/decompression routines. This doesn't seem to happen too much nowadays, probably due to updated versions of these two commands, but still, changing a small configuration option for these commands can give you much better results.

The trick is to disable the index generation on NTFS file systems altogether. Usually, indexing is not required on NTFS and you can always go and search items using your GUI file managers if you need to. To disable it, edit the file /etc/updatedb.conf and add the entries ntfs and ntfs-3g to the "PRUNEFS=" line, like in the screenshot below. I am not sure whether ntfs-3g is needed or not but there is no harm in adding it so I add it nevertheless.
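The edited line ends up looking something like this. The exact default list varies by distribution, so keep whatever entries are already there and just append the two new ones:

```
# /etc/updatedb.conf - append ntfs and ntfs-3g to the pruned file systems
PRUNEFS="NFS nfs nfs4 rpc_pipefs proc sysfs tmpfs usbfs devpts ntfs ntfs-3g"
```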

After applying all these tricks, my system has become so fast and responsive that I can finally use it without a hitch as my production machine for all purposes. Try these and let me know in the comments how it worked out for you.


Tuesday, May 21, 2013

Truth About Internal Memory in Samsung Android Devices

Hey Everybody!

On 15th this month, I got my shiny new Samsung Galaxy Note II, GT-N7100, the international version or to be precise, the Indian version, in the color of my choice, Titanium Grey. I moved on to it from a 32 GB Galaxy S3,  basically for a bigger screen, much better battery, and the mighty S-Pen :) Needless to say, I am on cloud nine since then :D

But as always, you don't get everything, and this phone is no exception. My biggest gripe is the 16 GB of memory touted by Samsung. It is not actually 16 GB. You only get 10.45 GB out of the box, and the rest is taken by:

  1. The calculation fiasco that virtually every company making storage devices on earth indulges in, where they calculate 1 GB = 1 billion bytes instead of 1073741824 bytes, and
  2. The memory eaten away by Android OS and pre-installed software, basically your phone ROM. 
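The first item alone accounts for more than a gigabyte. A quick back-of-the-envelope check (nothing here is device-specific):

```shell
# 16 "marketing" GB = 16 * 10^9 bytes, but the OS reports in GiB (2^30 bytes)
awk 'BEGIN { printf "%.2f GiB\n", 16 * 10^9 / 2^30 }'   # prints 14.90 GiB
```

So of the advertised 16 GB, only about 14.90 GiB physically exists as far as the OS is concerned; the rest of the gap down to 10.45 GB is the ROM and first-boot files.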

In addition, when you first boot your device, the initialization process also creates a few sqlite databases and other files on this memory reducing it even further.

In newer Android devices, I think Samsung has stopped making different physical partitions (or memory chips) for the ROM and internal memory. Instead, what they do is divide the same physical memory chip into two or more logical partitions and then mount the partition with the "ROM" contents as read only. If you are aware of the process of flashing a Samsung phone with ODIN, you might have come across the term PIT file quite often. The PIT file actually defines the partition layout. That's why it is said that while doing normal ROM flashing, the re-partition checkbox should not be checked; it can wreak havoc if not done properly.

This layout provides them the flexibility that if the size of the ROM contents decreases or increases, they can just re-partition the memory to adjust for it. This was a problem in previous devices, like the Galaxy S, which only got the Value Pack instead of an upgrade to Android 4.0 ICS, as its isolated ROM memory chip did not have enough space. Last I checked, around 60-70 MB was free in /system on my Galaxy S i9003.

Samsung often doesn't release the 32/64 GB versions of its phones in countries like India. For example, in the Philippines the 64 GB version of the Note 2 is readily available, but in India only the 16 GB one is sold. That's one of the reasons why people resort to rooting. Rooting actually provides a way around this limitation via a technique called directory binding, which I will talk about in my upcoming posts. But this is one thing that I don't like about you, Samsung! I hope you are listening!

Friday, May 3, 2013

Solution: Bluetooth not working after upgrading to Ubuntu 13.04 (Raring Ringtail)

If you are like me, there are good chances that you love Ubuntu as an OS, and there are even better chances that you have already upgraded to its latest and greatest flavor, version 13.04, code named Raring Ringtail. Upgrading an OS is not the same as installing a fresh copy; well, we computer veterans all know that.

Upgrading almost always brings with it its fair share of problems. One such problem I recently had was non-working Bluetooth. In Ubuntu 13.04, the Bluetooth applet has been changed a little and looks more sophisticated now. But somehow, on some computers, when you upgrade from 12.04 or 12.10, Bluetooth stops working. Whenever I tried to send any file to my Samsung Galaxy S3 smartphone, I was getting the following error:

Error: GDBus.Error:org.openobex.Error.Failed: Unable to request session

I researched a lot and couldn't find any proper solution for it. In the process, I came across a Bugzilla entry, but there too the bug is only listed; no official solution has been posted as of now. Going through the user comments, though, I found that a few people had suggested a workaround: use a piece of software called Blueman, a.k.a. Bluetooth Manager. It is available in the official repositories, so you can install it by simply issuing the following command:

sudo apt-get install blueman

I installed it and started it. Immediately, I saw another Bluetooth icon in my notification area, with options somewhat similar to the "official" icon's. So I thought that was it and tried to send a file from this new icon's Send Files option. Turns out, it was able to pair with the phone but not able to send anything. It repeatedly got stuck and then reported that there was an error sending the file, without any more helpful message. BAM!

This was testing my patience now and I cursed the Ubuntu team a little :) Then, out of the clear blue sky, I got an idea and tried the Send File option of the original Bluetooth icon. It worked!! But when I exited Blueman, the situation reverted to its previous state.

So basically the solution was to keep Blueman running and use the original Bluetooth icon. It was less than perfect, as I now had two icons in my notification area, but it worked.

But we tech guys do not stop until we get what we want, at least as far as computers are concerned. So I dug further and finally found an option to turn off the Blueman icon while still keeping it running. In the Blueman icon's menu, there is an option to access its Plugins. Use that and disable the tray icon plugin. That will get rid of the extra icon. However, if you ever need to get the icon back, you will have to re-install the utility; I personally don't know how to re-enable it without that. If anybody has some idea, let me know in the comments.

Hope this will help some poor soul. :)

Happy Computing!