Vital Signs Monitor

When I was a little girl, I practically lived in the hospital, and my friends were mostly a bunch of adults who called themselves cardiologists. I had chronic bronchitis as well as a loose valve in my left ventricle, and I had to be closely monitored all the time, especially when I contracted a fever, for fear of infection in both my lungs and my heart.

It wasn’t exactly fun to practically grow up in the hospital. They pricked me with needles for blood samples and IV drips more often than they fed me, and I watched vital signs monitors more than I watched the idiot box when I was a little girl.

Do you know what a vital signs monitor is? Well, it’s a machine that measures and displays heart rate, breathing rate and blood pressure on a screen. Heart rate, breathing rate and blood pressure are all vital signs in a living human body. If these vital signs become abnormal, an alarm usually sounds. One of the most famous and well-known vital signs monitors is the Welch Allyn vital signs monitor.

These machines are also used during conscious sedation procedures… just to make sure that nothing goes wrong with the patient. I’ve always hated these machines. They make me think of death. You know… when someone dies or slips into a coma, these vital signs machines are the first in line to alert the medical officers in charge that you’re dead or that there’s something wrong with you, because they’ll go ‘BEEP’ very loudly.

When I was forced to stay overnight in the hospital for monitoring, I always got terrified… terrified of the beeping sounds that come out of those machines, that is. I heard it pretty often in the ward that housed cancer patients. The beeping sound is really traumatizing. Well, at least to me it is, because it tells me that someone I might know is gone, and I’ll never see them again.

Anyway, it is a good thing that I was a sickly child and faced near-death experiences more than anyone else. It made me appreciate life more, and it also helps me understand a lot of illnesses and be more empathetic towards patients and their families.

Cleffairy: If someone tells me they were sick and were wheeled into the operating room, I can tell whether they’re telling the truth or fibbing just by listening to their description of the procedures that were performed on them. So… if you happen to be my friend or my student, don’t dream of bluffing me if you’re not sick, because I will know if you lie!


Antitrust

Is it possible to gain understanding of a certain matter a few years late? I am not sure how knowledge works for other people, but I think I am quite a slow learner. I need to understand how everything works before I absorb certain information into my brain. I guess my brain is pretty selective in storing information.

I have come to realize, as of late, that I started to understand more about physics and its theories eight years after I left school, when I’m no longer required to understand them. It is amazing how the human brain works… it’s really a wonder. I failed physics in my SPM examination. Yes, FAILED. Because I could not understand the formulas and how the theories ought to be applied in real life.

But amazingly… eight years later, I could grasp the concepts of quite a few physics theories… such as Newton’s laws, momentum, inertia, quantum physics and many more. I have come to wonder… is my brain taking me in a different direction by allowing me to finally understand and work with the theories correctly? Is this what they call wisdom? If it’s not wisdom, then what exactly do you call this? The ability to grasp certain knowledge and relate it to real life?

God, my father would really be proud of me if I told him I finally understand certain concepts in physics and actually get the calculations right. Is it because I exercise my brain more now than before, making it more active than when I was studying? Is that how our brains actually work? In order to really use your brain and gain understanding, do you need to continuously exercise it and give it constant stimulation?

Anyway, ditch physics and my questions. What I want to talk about today is… understanding and coming to terms with certain matters after a few years. You see, when I was a little girl… I read a lot of classics. Authors like Charles Dickens, Jane Austen and many more were among my regular reads. No… I did not read the children’s versions of their stories. I read the unabridged versions instead, and more often than not, I struggled to understand the entire thing. I believe my mind wasn’t sharp enough, and I lacked the experience to fully understand what the books were trying to deliver.

Same goes with movies. You see… my husband is a rather technical person… while I am the direct opposite. I am the creative person in the household. Like any other typical man, my husband tends to gravitate towards movies that revolve around… technology and science. I don’t quite watch science fiction unless I can relate it to real life. And instead of being fascinated by certain things like other people are… I show my interest and understanding by writing them down instead.

As of late, I have realized that I really am quite a slow learner. I understood one movie in particular about six years late. The movie in question is ‘Antitrust’.

Antitrust is actually a movie targeted at people who believe that human knowledge belongs to all, and people who support Open Source. In order to understand what this movie is really about and what message it’s trying to send across, one would have to understand the Open Source concept and the Microsoft antitrust case that made headlines in courts all over the world back then.

Antitrust is about Milo Hoffman. The story starts off with him working with his three friends at their new software development company, known as Skullbocks. Things start to get murky when Milo Hoffman is contacted by Gary Winston, CEO of NURV (Never Underestimate Radical Vision), for a very attractive programming position: a fat paycheck and an almost-unrestrained working environment. Milo accepts Winston’s offer, and he and his girlfriend, Alice Poulson, move to NURV headquarters in Portland, Oregon.

Despite development of the flagship product (Synapse, a worldwide media distribution network) being well on schedule, Hoffman soon becomes suspicious of the excellent source code Winston personally provides to him, seemingly when needed most, while refusing to divulge the code’s origin.

After his best friend, Teddy Chin, is murdered, Hoffman discovers that NURV is stealing the code they need from programmers around the world — including Teddy Chin — and then killing them to cover their tracks. Hoffman learns that not only does NURV employ an extensive surveillance system to observe and steal code, the company has infiltrated the law and most of the mainstream media. Even his girlfriend is a spy, an ex-con hired by the company to manipulate him into doing their deeds.

While searching through a secret NURV database containing surveillance dossiers on employees, he finds that the company has information of a very personal nature about a friend and co-worker, Lisa Calighan. When he reveals to her that the company has this information, she agrees to help him expose NURV’s crimes to the world. Coordinating with one of Hoffman’s friends from his old startup, they plan to use a local cable access station to hijack Synapse and broadcast their charges against NURV to the world. However, Lisa Calighan turns out to be a double agent, foils Hoffman’s plan, and turns him over to Winston.

Hoffman had already confronted Poulson and convinced her to side with him against Winston and NURV. When it becomes clear that Hoffman has not succeeded, a backup plan is put into motion by Poulson, the fourth member of Skullbocks, and the incorruptible internal security firm hired by NURV. As Winston prepares to kill Hoffman, the second team successfully takes over one of NURV’s own work centers, “Building 21”, and transmits the incriminating evidence as well as the Synapse code. Winston and his entourage are publicly arrested for their crimes. After parting ways with the redeemed Poulson, Hoffman rejoins Skullbocks.

Okay… after re-watching this movie almost seven years after I first watched it, it is no longer a nonsense movie that could put me to sleep. I can now understand what the movie is all about, having become an Open Source supporter myself. What the characters do in the story is also no longer gibberish to me, and much to my amusement, Antitrust actually has loads of messages to tell, and honestly… this movie… is pro-Open Source and rather anti-Microsoft.

It is amazing how Antitrust seems to send you into a parallel world of Internet and software development technology and the dirty tricks that come with it. What amazes me is that the movie itself alludes that the antagonist is Bill Gates of Microsoft himself. I’m really surprised that the filmmakers were not sued for libel.

Anyway, I learned a lot from this movie too… albeit a few years late. This is what I learned:

  • Knowledge is power, but it could also destroy and corrupt.
  • Never ever try to dominate the business world in a dirty way, or one day, it might backfire.
  • Behind good software, there are always good programmers, and one should not just stick to one brand in regards to technology. If there’s good, free software out there that could benefit you, consider using it instead of the expensive proprietary alternative.
  • One should not be extreme… it doesn’t matter if you support Microsoft or Open Source; as long as you open your mind to everything, there are limitless possibilities.
  • Human knowledge belongs to the world and not just one person.
  • Always anticipate your opponent’s move. Life is like playing a game of chess. One must anticipate the opponent’s move, or one will lose.
  • Sometimes, even your loved ones can be your enemy, so beware… don’t trust anyone but yourself 100%.
  • Always have a backup plan.
  • Things are not always what they seem to be.
  • And last but not least… what seems to be harmless, cute and pretty childish on the outside may hold many dark secrets and could possibly be the cause of destruction on the inside. (Referring to the servers at NURV that contain CCTV surveillance of programmers all around the world… they were disguised as children’s PCs in a daycare at NURV.)

This movie… is something to WOW over. But only if you understand how Microsoft, Open Source and the programming world work in reality. Thumbs up for Antitrust.


Cleffairy: The Internet should be public, open and accessible.


All About Service Provider

Sometimes, men are just like Internet service providers. They provide a roof over your head, decent food on the table, occasional sex when they feel like it, and they get you somewhere by driving you around and accompanying you when you need a chauffeur. They also serve as a good distraction when your mind is about to blow up from being around a bunch of misbehaved children who can’t seem to just listen to you when you make the effort to talk to them.

Like many Internet service providers, no matter how good they are, men are not faultless. They are not perfect, though most of them would love to believe otherwise. Some men cannot accept the truth. They want to be seen as perfect, and that becomes an excuse for them to find solace outside of marriage.

Despite the good service, there’s always noise (read: a communication term) as well as interference caused by many factors. These service disruptions are often caused by a decrease in revenue, political issues, hardware malfunctions, failing business strategies, and many more.

More often than not, unavoidable problems like these will cause you to be disconnected, and when you call up their toll-free number to complain, which they always claim to be at your service at all times, regardless of day or night, you will be put on hold for countless minutes that can stretch into hours.

Talking to these… ‘operators’ can really irk you. They never fail to annoy and frustrate you. And instead of getting your problems solved, more problems arise, because they don’t actually solve your problems. They delegate them elsewhere… either that, or your complaints will be disregarded or treated as just another occupational hazard.

It is intolerable, but what choice do you actually have when you’re placed in such predicament?

Not many, are there? Not when you’re bound by a legal contract that cannot be broken unless you’re willing to compensate and bear the consequences of terminating it.

There’s always… damage to your pocket and your way of life when you decide to terminate a contract. This is pretty common when you’re dealing with Internet service providers.

And most people would be unwilling to terminate the contract because they are either not willing to deal with the loss or not willing to go through the hassle of starting over with a new service provider. I have to say that sometimes, a comfort zone can really cost a fortune. It stops us from venturing out and trying new things and new products, and indirectly, it stops us from advancing as well.

Would it be fair to say that even though we’re unsatisfied with a certain service provider, we tend to stick to the one that is monopolizing the market because we’re weary of such nonsense and refuse to deal with the same problems all over again with a new service provider?

Cleffairy: When your ISP fails you or annoys you, you actually have the option of not going online for a while. After all, only fools build their social lives online, and online alone. There’s much more to life than just Facebook, blogs and Twitter. Only the blind, the dumb and the deaf would put such value on it.


Windows XP expiry date…

Apart from knowing that Windows is a shitty Operating System with awful security, this is another reason why I DON’T USE Windows.

Windows usually gets outdated pretty fast, and Microsoft seems to think that every single being in this whole wide world can afford a PC with high specs. And unlike with Linux, you’re gonna have to pay for the goddamn license, whether you like it or not.

Well, personally, I wouldn’t mind paying for something good that has high security, but Windows is complete crap. This is just daylight robbery, forcing people to upgrade their PCs. Do take note that with each new release of Windows, the requirements get higher. More often than not, newer versions of Windows are not compatible with older PCs with lower specs.

Cleffairy: I support Open Source.


How Linux Works

Those who know me and have been following my blog would have a clue by now that I am not using Windows. I’m using Ubuntu, a Linux-based operating system. Most of you may wonder how Linux works.

I found an article that explains it. 😀 And I thought I’d share it with you techie geeks out there, and the non-techie geeks who are interested in this operating system. Read on, please. 😀

The main problem you face when you’re attempting to lift the lid on what makes Linux tick is knowing where to start. It’s a complicated stack of software that’s been developed by thousands of people.

Following the boot sequence would be a reasonable approach, explaining what Grub actually does, before jumping into the initiation of a RAM disk and the loading of the kernel. But the problem with this is obvious. Mention Grub too early in any article and you’re likely to scare many readers away. We’d have the same problem explaining the kernel if we took a chronological approach.

Instead, we’ve opted for a top-down view, tackling each stratum of Linux technology from the desktop to the kernel as it appears to the average user. This way, you can descend from your desktop comfort zone into the underworld of Linux archaeology, where we’ll find plenty of relics from the bygone era of multi-user systems, dumb terminals, remote connections and geeks gone by.

This is one of the things that makes Linux so interesting: you can see exactly what has happened, why and when. This enables us to dissect the operating system in a way we couldn’t attempt with some alternatives, while at the same time, you learn something about why things work the way they do on the surface.

Level 1: Userspace

Before we delve into the Linux underworld, there’s one idea that’s important to understand. It’s a concept that links userspace, privileges and groups, and it governs how the whole Linux system works and how you, as a user, interact with it.

It’s based on the premise that a normal desktop user shouldn’t be able to make important system changes without proving that they have the correct administrator’s privileges to do so. This is why you’re asked for a password when you install new packages or open your distribution’s configuration panels, and it’s why a normal user can’t see the contents of the /root directory or make changes to specific files.

Your distribution will use either sudo or an administrator account to grant access to the system-wide configurable parts of your system. The former typically works only for a single session or command, and is used as an ad-hoc solution for normal day-to-day use, much like the way both Windows 7 and OS X handle privileges.

Linux exposed - privileges

USER CONTROL: Groups make it possible to enable and disable certain services on a per-user basis

With a full-blown system administrator’s account, on the other hand, it’s sometimes far too easy to stay logged in for too long (and thus more likely that you’ll make an irreversible mistake or change). But the reason for both methods is security.

Linux uses a system of users, groups and privileges to keep your system as secure as possible. The idea is that you can mess around with your own files as much as you like, but you can’t mess with the integrity of the whole system without at least entering a password. It might seem slightly redundant when you are the only user of your system, but as we’ll see with many other parts of Linux, this concept is a throwback to a time when the average system had many users and only an administrator or two.
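To make the user-and-group idea concrete, here is a small sketch you can try in any terminal. The `id` command reports the identity that every permission check is made against (the exact names and numbers will of course differ from machine to machine):

```shell
# Show the identity Linux checks permissions against.
id -un   # the user name you are logged in as
id -u    # the numeric user id behind that name
id -Gn   # every group this user belongs to
```

The groups listed by `id -Gn` are exactly what group-based sharing (like the ‘photos’ example later in the article) relies on.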

Linux is a variant of the Unix operating system, which has been one of the most common multi-user systems for decades. This means that multi-user functionality is difficult to avoid in Linux, but it’s also one of the reasons why Linux is so popular – multi-user systems have to be secure, and Linux has inherited many of the advantages of these early systems.

A user account on Linux is still self-contained, for example. All of your personal files are held within your own home directory, and it’s the same for other users of the system. You can usually see their names by looking at the contents of /home with your file manager, and depending on their permissions, even look inside other people’s home folders.

But who can and can’t read their contents is governed by the user who owns the files, and that’s down to permissions.

Permissions

Every file and directory on the Linux filesystem has nine attributes that are used to define how they can be accessed. These attributes correspond to whether a user, a group or anyone can read, write and execute the file.

You might want to share a collection of photos with other users of your system, for example, and if you create a group called ‘photos’, add all the users who you’d like access to the group and set the group permissions for the photos folder, you’ll be able to limit who has access to your images.
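As a sketch of those nine permission attributes in action (the directory path below is invented for the demo, and the ‘photos’ group name is just the article’s example):

```shell
# Create a demo directory and restrict it: the owner gets rwx,
# the group gets r-x, and everyone else gets nothing.
mkdir -p /tmp/photos_demo
chmod 750 /tmp/photos_demo

# GNU stat shows the permission bits both numerically and symbolically.
stat -c '%a %A' /tmp/photos_demo   # prints: 750 drwxr-x---
```

In a real setup you would also run `chgrp photos /tmp/photos_demo`, which assumes the ‘photos’ group has already been created (for example with `groupadd`).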

Any modern file manager will be able to perform this task, usually by selecting a file and choosing its properties to change its permissions. This is also how your desktop will store configuration information for your applications, tools and utilities.

Hidden directories (those that start with a full stop) are often created within your home directory, and within these you’ll find text files that your desktop and applications use to store your setup.

No one else can see them, and it’s one of the reasons why porting your current home directory to a new distribution can be such a good idea – you’ll keep all your settings, despite the entire operating system changing.
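A minimal sketch of the dot-file convention described above (the file names are made up for the demo):

```shell
# Files whose names start with a full stop are 'hidden':
# plain ls skips them, while ls -A lists them too.
mkdir -p /tmp/home_demo
touch /tmp/home_demo/.config_demo /tmp/home_demo/notes.txt

ls /tmp/home_demo     # shows only notes.txt
ls -A /tmp/home_demo  # shows .config_demo as well
```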

If you come to Linux from Windows or OS X rather than through the server room, the idea that there’s something called a desktop is quite a strange one. It’s like trying to explain that Microsoft Windows is an operating system to someone who just thinks it’s ‘the computer’.

The desktop is really just a special kind of application that has been designed to aid communication between the user and any other applications you may run.

This communication part is important, because the desktop always needs to know what’s happening and where. It’s only then it can do clever things like offer virtual desktops, minimise applications, or divide windows into different activities.

There are two ways that a desktop helps this to happen. The first is through something called its API, which is the Application Programming Interface. When a programmer develops an application using a desktop’s API, they’re able to take advantage of lots of things the desktop offers. It could be spell checking, for example, or it could be the list of contacts you keep in another app that uses the same API.

Linux exposed - moblin

MOBLIN: Moblin and UNR make good use of the Clutter framework to offer accelerated and smooth scrolling graphics

When lots of applications use the same API, it creates a much more homogeneous and refined experience, and that’s exactly what we’ve come to expect of both Gnome and KDE desktops.

The reason why K3b works so well with your music files is because it’s using the same KDE API that your music player uses, and it’s the same with many Gnome apps too.

Toolkits

But applications designed for a specific desktop environment don’t have to use any one API exclusively. There are probably more APIs than there are Linux distributions, and they can do anything from complex mathematics to hardware interfacing.

This is where you’ll hear terms like Clutter and Cairo bandied around, as these are additional toolkits that can help a programmer build more unified-looking applications.

Clutter, for example, is used by both Ubuntu Netbook Remix and Moblin to create hardware-accelerated, smoothly animated GUIs for low-power devices.

It’s Clutter that scrolls the top bar down in Moblin, for instance, and provides the fade-in effects of the launch menu in UNR. Cairo helps programmers create vector graphics easily, and is the default rendering engine in GTK, the toolkit behind Gnome, for many of its icons.

Rather than locking an image to a specific resolution, vector-based images can be infinitely scaled, making them perfect for images that are going to be used in a variety of resolutions.

Inter-process communication

The second way the desktop helps is by using something called ‘inter-process communication’.

As you might expect from its name, this helps one process talk to another, which in the case of a desktop, is usually one application talking to another. This is important because it helps a desktop feel cohesive: your music player might want to know when an MP3 player has been connected, for example, or your wireless networking software may want to use the system-wide notification system to let you know it’s found an open network.

In general terms, inter-process communication is the reason why GTK apps perform better on the Gnome desktop, and KDE apps work well with KDE, but the great thing about both desktops is that they use the same compatible method for inter-process communication – a system called D-BUS.

So why do Gnome and KDE feel so different from each other? Well, it’s because they use different window managers.

The idea of a window manager stretches right back to the time when Unix systems first crawled out of the primordial soup of the command line, and started to display a terminal within a window. You could drag this single window across the cross-hatched background, and open other terminals that you could also manipulate thanks to something called TWM, an acronym that reputedly stood for Tom’s Window Manager.

It didn’t do much, but it did free the user from pages of text. You could move windows freely around the display, resize them, maximize them and let them overlap one another. And this is exactly what Gnome and KDE’s window managers are still doing today.

KDE’s window manager, dubbed KWin, augments the moving and management components of TWM with some advanced features, such as its new-found abilities to embed any window within a tabbed border, snap applications to an area of the screen or move specific applications to preset virtual activities on their own desktops.

KWin also recreates plenty of compositing effects, such as window wobble, drop shadows and reflections, an idea pioneered by Compiz. This is yet another window manager, but rather than adding functionality, it was created specifically to add eye-candy to the previously static world of window management.

Compiz is still the default replacement for Gnome’s window manager (Metacity), and you can get it on your Gnome machine if you enable the advanced effects in the Visual Effects panel. You’ll find that it seamlessly replaces the default drawing routines with hardware-accelerated compositing.

Dependencies

One of the biggest hurdles for people when they switch to Linux is the idea that you can’t simply download an executable from the internet and expect it to run.

When a new version of Firefox is released, for example, you can’t just grab a file from www.mozilla.org, save it to your desktop and double-click on the file to install the new version. A few distributions are getting close to this ideal, but that’s the problem.

It’s distribution-dependent, and we’re no closer to a single solution for application installation than we were 10 years ago. The problem is down to dependencies and the different ways distributions try to tame them.

A dependency is simply a package that an application needs if it’s to work properly. These are normally the APIs that the developers have used to help them build the application, and they need to be included because the application uses parts of their functionality.

When they’re bundled in this way they’re known as libraries, because an app will borrow one or two components from a library to add to its own functionality.

Clutter is a dependency for both Moblin and UNR, for instance, and it would need to be installed for both desktops to work. And while Firefox may seem relatively self-contained on the surface, it has a considerable list of dependencies, including Cairo, a selection of TrueType fonts and even an audio engine.

Other operating systems solve this problem by statically linking applications to the resources they require. This means that they bundle everything that an app needs in one file.

All dependencies are hidden within the setup.msi file on Windows, for example, or the DMG file on OS X, giving the application or utility everything it needs to be able to run without any further additions.

The main disadvantage with this approach is that you’ll typically end up with several different versions of the same library on your system. This takes up more space, and if a security flaw is found, you’ll have to update all the applications rather than just the single library.
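You can see this dynamic linking for yourself on most Linux systems with `ldd`, which prints the shared libraries a binary depends on. (`/bin/ls` is used here only as a familiar example; the exact list varies by distribution.)

```shell
# List the shared libraries (dependencies) that ls is dynamically
# linked against; each line is a library resolved at run time.
ldd /bin/ls
```

Every library that appears in that list is a dependency in exactly the sense the article describes: remove one and the program will no longer start.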

X is a stupid name for the system responsible for drawing the windows on your screen, and for managing your mouse and keyboard, but that’s the name we’re stuck with. As with the glut of programming languages called B, C, C++ and C#, X got its name because it’s the successor to a windowing system called W, which at least makes a little more sense.

X has been one of the most important components in the Linux operating system almost from its inception. It’s often criticised for its complexity and size, but there can’t be many pieces of software that have lasted almost 20 years, especially when graphics and GUIs have changed so much.

But there’s something even more confusing about X than its name, and that is its use of the terms ‘client’ and ‘server’. This relationship hails back to a time before Linux, when X was developed to work on dumb, cheap screens and keyboards connected to a powerful Unix mainframe system.

Linux exposed - xterm

XTERM: The original XTerm is still the default failsafe terminal for many distributions, including Ubuntu

The mainframe would do all the hard work, calculating the contents of windows and the shape of the GUI, while all the screen had to do was handle the interaction and display the data. To ensure that this connectivity wasn’t tied to any single vendor, an open protocol was created to shuffle the data between the various devices, and the result was X.

Client–server confusion

What is counter-intuitive is that the server in this equation is the terminal – the bit with the screen and keyboard. The client is the machine with all the CPU horsepower.

Normally, in client–server environments, it’s the other way around, with the more powerful machine being called the server. X swaps this around because it’s the terminal that serves resources to the user, while the applications use these resources as clients.

Now that both the client and the server run on the same machine, these complications aren’t an issue. Configuration is almost automatic these days, but you can still exploit X’s client–server architecture. It’s the reason why you can have more than one graphical session on one machine, for example, and why Linux is so good for remote desktops.
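The client–server split is still visible in the DISPLAY environment variable, which tells an X client (an application) which X server to draw on. The value `:0` below assumes a single local server is running; a remote form such as `somehost:0` would point the client at a server elsewhere on the network:

```shell
# An X client finds its server through the DISPLAY variable.
export DISPLAY=:0   # display 0 on the local machine (assumed here)
echo "Clients started from this shell would draw on display $DISPLAY"
```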

The system that handles authentication when you log into your system is called PAM (Pluggable Authentication Modules), which, as its name suggests, is able to implement many different types of security systems through the use of modules.

Authentication, in this sense, is a way of securing your login details and making sure they match those in your configuration files without the data being snooped or copied in the process. If a PAM module fails the authentication process, then it can’t be trusted.

Installed modules can be found in the /etc/pam.d/ directory on most distributions. If you use Gnome, there’s one to authenticate your login at the Gdm screen, as well as to enable the auto-login feature. There are common modules for handling the standard login prompt on the command line, as well as popular commands like passwd, cvs and sudo.

Each will use PAM to make sure you are who you say you are, and because it’s pluggable, the authentication modules don’t always have to be password-based. There are modules you can configure to use biometric information, like a fingerprint, or an encrypted key held on a USB thumb drive.

The great thing about PAM is that these methods are disconnected from whatever it is you’re authenticating, which means you can freely configure your system to mix and match.
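As a sketch of what such a module stack looks like, here is the general shape of a file in /etc/pam.d/ (the layout is real, but the service and module options are illustrative; files differ between distributions):

```
# /etc/pam.d/<some-service>  -- one rule per line:
#   <type>     <control>   <module>       [options]
auth       required    pam_unix.so
account    required    pam_unix.so
password   required    pam_unix.so    sha512
session    required    pam_unix.so
```

Swapping the module on the `auth` line for, say, a fingerprint module is how the password-free methods mentioned above get wired in, without the service itself ever changing.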

Command-line shells

The thing that controls the inner workings of your computer is known as a shell, and shells can be either graphical or text based.

Before graphical displays were used to provide interactive environments to people over a network, text-based displays were the norm, and this layer is still a vitally important part of Linux. They hide beneath your GUI, and often protrude through the GUI level when you need to accomplish a specific task that no GUI design has yet been able to contain.

There are many graphical applications that can open a window on the world of the command line, with Gnome’s Terminal and KDE’s Konsole being two of the most common. But the best thing about the shell is that you don’t need a GUI at all.

You may have seen what are known as virtual consoles, for example. These are the login prompts that appear when you hold the Alt key and press F1–F6. If you log in with your username and password through one of these, you’ll find a fully functional terminal, which can be particularly handy if your X session crashed and you need to restart it.

Consoles like these are still used by many system administrators and normal desktop users today. It takes less bandwidth to send text information over a network and it’s easier to reconstruct than its graphical counterpart, which makes it ideal for remote administration.

This also means that the command line interface is more capable than a graphical environment, if you can cope with the learning curve.

By default, if you don’t install the X Window System, most distributions will fall back to what’s known as the Bourne Again Shell – Bash for short.

Linux exposed - terminals

THE TERMINAL: Most Linux installations offer more than one way of accessing a terminal, and more than one terminal

Bash is the command line that most of us use, and it enables you to execute scripts and applications from anywhere on your system. If you don’t mind the terse user interface of text-based systems like this, you can accomplish almost anything with the command line.
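As a small flavour of what Bash scripting looks like, here's a minimal sketch of a shell function called from a loop (the function and names are invented for illustration):

```shell
#!/bin/bash
# Define a tiny function, then call it in a loop;
# functions and loops like these are the building
# blocks of most shell scripts.
greet() {
    echo "Hello, $1"
}

for name in alice bob; do
    greet "$name"    # prints "Hello, alice" then "Hello, bob"
done
```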

There are many different shells, and each is tailored for a specific type of user. You might want a programming-like interface (C-Shell), for example, or a super-powerful do-everything shell (Z Shell), but they all offer the same basic functionality, and to get the best out of them, you need to understand something about the Linux filesystem.

We’re moving into the lower levels of the Linux operating system, leaving behind the realm of user interaction, GUIs, command lines and relative simplicity.

The best way of explaining what goes on at this level is to walk through the boot process, from the first thing you see when you turn your machine on up to the point where you can choose either a graphical session or the command line.

The init process is used by many distributions, including Debian and Fedora, to launch everything your operating system needs to function from the moment it leaves the safety of Grub. It’s got a long history – the version used by Linux is often written as sysvinit, which shows its Unix System V heritage.

Everything from Samba to SSH will need to be started at some point, and init does this by trawling through a script for each process in a specific order, which is defined by a number at the beginning of the script’s name. Which scripts are executed is dependent on something called the runlevel of your system, and this is different from one distribution to another, and especially between distros based on Fedora and Debian.
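Because the ordering comes from the leading number in each script's name, plain lexical sorting is all init needs. A sketch with invented script names shows the idea:

```shell
# Hypothetical start scripts as they might appear in a runlevel
# directory such as /etc/rc2.d (S = start, the number = order)
scripts="S20ssh S99local S10network S05udev"

# Sorting the names reproduces the order init would run them in
printf '%s\n' $scripts | sort    # S05udev, S10network, S20ssh, S99local
```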

Linux exposed - gufw

GUFW: You don’t have to mess around with Iptables manually if you don’t want to. There are many GUIs, like GUFW, that make the job much easier to manage

You can see this in action by using the init command to switch runlevels manually. On Debian-based systems, type init 1 for single-user mode, and init 5 for a full graphical environment. Older versions of Fedora, on the other hand, offer a non-networking console login at runlevel 2, network functionality at level 3, and a full blown GUI at level 5, and each process will be run in turn as your system boots. This can create a bottleneck, especially when one process is waiting for network services to be enabled.

Each script needs to wait for the previous one to complete before it can run, regardless of how many other system resources sit under-utilised.

If you think the init system seems fairly antiquated, you’re not alone. Many people feel the same way, and several distributions are considering a switch from init to an alternative called upstart. Most notably, the distribution that currently sponsors its development, Ubuntu, now uses upstart as its default booting daemon, as does Fedora, and the Debian maintainers have announced their intention to switch for the next release of their distribution.

Upstart’s great advantage is that it can run scripts asynchronously. This means that when one is waiting for a network connection to appear, another can be configuring hardware or initiating X. It will even use the same scripts as init, making the boot process quicker and more efficient, which is one of the main reasons why the latest versions of Ubuntu and Fedora boot so quickly in comparison with their older counterparts.

The kernel

We’ve now covered almost everything, with one large exception, the kernel itself. As we’ve already discussed, the kernel is responsible for managing and maintaining all system resources. It’s at the heart of a running Linux system, and it’s what makes Linux, Linux.

The kernel handles the filesystem, manages processes and loads drivers, implements networking, userspaces, memory and storage. And surprisingly, for the normal user, there isn’t that much to see.

Other than the elements displayed through the /proc and /sys filesystems, and the various processes that happen to be running in the background, most of these management systems are transparent. But there are some elements that are visible, and the most notable of these is the driver framework used to control your hardware.

Most distributions choose to package drivers as modules rather than as part of the monolithic kernel, and this means they can be loaded and unloaded as and when you need them. Which kernel modules are included and which aren’t is dependent on your distribution. But if you’ve installed the kernel source code, you can usually build your own modules without too much difficulty, or install them through your distribution’s package manager.

To see which modules are loaded, type lsmod as a system administrator to list all the modules currently plugged into the kernel. Next to each module you'll see any dependencies listed. Like the software variety, these are a requirement for the module to work correctly.

Modules are kernel-specific, which is why your Nvidia driver might sometimes break if your distribution automatically updates the kernel. Nvidia’s GLX module needs to be built against the current version of the kernel, which is what it attempts to do when you run the installer.

Fortunately, you can install more than one version of a module, and each will be automatically detected when you choose a new kernel from the Grub menu. This is because all the various modules are hidden within the /lib/modules directory, which itself should contain further directories named after kernel versions.

You can find which version of the kernel you’re running by typing uname -a. Depending on your distribution, you can find many kernel driver modules in the /lib/modules/kernel_name/kernel/drivers directory, and this is sometimes useful if your hardware hasn’t been detected properly.
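For instance, the same lookup can be done by hand; this sketch assumes a Linux system and only reads information, changing nothing:

```shell
# Ask the kernel for its release string
kver=$(uname -r)
echo "Running kernel: $kver"

# Driver modules for this kernel live under a directory named after it
echo "Drivers would be under: /lib/modules/$kver/kernel/drivers"
```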

If you know exactly which module your hardware should use, for example, you can load it with modprobe followed by the module name. You may find that your hardware works without any further configuration, but it's also wise to check your system logs to make sure the hardware is being used as expected.

You can remove modules from memory with the rmmod command, which is useful if Nvidia’s driver installer complains that a driver is already running.

Iptables

One of the more unusual modules you'll find listed with lsmod is ip_tables. This is part of one of the most powerful aspects of Linux – its online security.

Iptables is the system used by the kernel to implement the Linux firewall. It can govern all packets coming into and out of your system using a complex series of rules. You can change the configuration in real time using the iptables command, but unless you’re an expert, this can be difficult to understand, especially when your computer’s security is at risk.

This is a reflection of the complexity within the networking stack, rather than of Iptables itself, and is a necessary side effect of trying to handle several different layers of network data at the same time. But unless you really want to configure Iptables by hand, we'd recommend a front-end like Firestarter, or Ubuntu's ufw, which was developed specifically to make Iptables easier to use.

When it’s installed, you can quickly enable the firewall by typing ufw enable as root, for instance. You can allow or block specific ports with the ufw allow and ufw deny commands, or substitute the port with the name of the service you want to block.

You can find a list of service names for the system in the /etc/services file, and if you’re really stuck, you can install an even more user-friendly front-end to Iptables by installing the gufw package.
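Each line of /etc/services simply maps a service name to a port and protocol. As a sketch of how a tool might pull the port out of such a line (the line is hard-coded here so the example doesn't depend on the file's contents):

```shell
# A sample line in /etc/services format: name, then port/protocol
line="ssh             22/tcp"

# Take the second field and strip the protocol to leave the port number
port=$(echo "$line" | awk '{print $2}' | cut -d/ -f1)
echo "ssh runs on port $port"    # prints "ssh runs on port 22"
```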

Cleffairy: I hate Windows. Windows sucks and has an expiry date. Worse still, it's a bloody virus magnet. Security-wise, for me… a Linux-based OS is a better option.


Uh oh! I Seek You to Pidgin…



Uh oh! Uh oh! Uh oh! Uh oh! Uh oh!

Now, don’t go and let your imagination run wild. That’s not my voice expressing my pleasure behind a closed door. It’s just the sound of ICQ. 😛

ICQ, pronounced ‘I Seek You’. ICQ used to be very famous 10 years back. It was the first IM I ever used, back in the year 2000. I was introduced to the world of chatrooms that year by my daddy, and he helped me install my first IM ever on my PC, and I remember how excited I was to be able to chat online with my friends from school.

Actually, he didn’t ‘introduce’ it to me; I coerced him into installing it for me cuz I thought it was fun. Hey… he can playfully flirt with ladies online behind my mummy’s back, so why can’t I do the same with the boys in school using some fancy and cute nickname? Well, to cut a long story short, daddy dearest finally relented and sulkily installed ICQ on my PC so that I’d shut my mouth in front of my mum.

I remember ICQ. It was the in thing back then, before MSN Messenger, Yahoo Messenger, AIM, mIRC and the rest started to take over and dominate the world of instant messaging.

ICQ is a popular instant messaging program, first developed by an Israeli company known as Mirabilis, which was later taken over by AOL.

The first version of the program was released in November 1996, if my memory serves me right, and ICQ became one of the first Internet-wide instant messaging services.

I used to chat on ICQ with my friends from school and people from the net, but as time went by ICQ somewhat faded into the background, and people started to use other IMs like MSN Messenger and whatnot to communicate with each other in real time on the net.

ICQ was the very first IM that I used. Then MSN Messenger made its grand entrance, and I was attracted to its cute emoticons and interface. When I started to use MSN Messenger, I found it more convenient, more user-friendly, and much quieter… there was no sexy ‘Uh oh’ sound when someone sent me a message, and so I stuck with it for a couple of years until other messengers came along.

As time went by I no longer used MSN Messenger full time, partly because my friends were scattered everywhere on the net, using different kinds of IM. Some of them were on other messengers, and I found it quite confusing to log into various types of IM.

I don’t quite like logging into various messengers simultaneously and letting them run just to wait for messages from my online friends, cuz for what it’s worth, those messengers really consume RAM, and my PC’s memory was never enough.

I like speed when I’m using my PC, so when I logged into too many messengers at once and the PC lagged, I kinda got fed up with it, and stopped chatting on most messengers, including MSN Messenger, for a couple of years, until I discovered that there are universal chat platforms that let us log into all our IM accounts at once.

One of the most famous multi-platform IM clients is Pidgin. Like other chat clients, it is free. If you are interested, you can check it out. Just click the Pidgin link to find out more about it.

Pidgin is an easy-to-use, free chat client used by millions. It allows users to connect to AIM, MSN, Yahoo, and more chat networks all at once. Pidgin supports multiple operating systems, including Windows as well as many Unix-like systems such as Linux, BSD and Mac OS X.

A great, convenient IM. Pidgin supports the chat networks below:

* AIM

* Bonjour

* Gadu-Gadu

* Google Talk

* Groupwise

* ICQ

* IRC

* MSN

* MXit

* MySpaceIM

* QQ

* SILC

* SIMPLE

* Sametime

* XMPP

* Yahoo!

* Zephyr

Pidgin is a great solution for those who have various kinds of IM accounts and want to chat on all of them without needing to log into a separate client for each. With Pidgin, I never have to worry about sending the wrong message to an unintended recipient, as running too many IMs at once can surely make me feel blur at times.

It is amazing how the Internet and IM have evolved. There used to be only one famous IM… which was ICQ. ICQ has sentimental value to me, and I even wrote a novel loosely themed around it back in 2001.

Now it’s not just ICQ serving us as an IM, but loads of IMs whose names I can’t even remember, and things got too overwhelming for me till I had to resort to using a multi-platform chat client to get all my contacts and conversations organized.

It is a wonder how complicated the Internet and things related to it can be, but at the end of the day, all we want is convenience and simplicity. At least, I know I want convenience and simplicity, and that is why, when it comes to IM, I choose to use Pidgin instead of logging into various messengers all at once. See… below is a screenshot of Pidgin on my desktop.

Pidgin not only spares me the confusion and headaches, but it spares my system from being overloaded as well.

ICQ was the first IM that I used. My knowledge of chatting online started from there. I don’t use it anymore because people moved on to other IMs and I had no one left to talk to on ICQ, and so, I began to use other IMs too.

Currently, I’m using Pidgin. What about you? What was the first IM that you used to chat with your friends back then, and what is the current IM that you’re using now? Care to share with me?

Cleffairy: Back to basic, ladies and gentlemen.

ps: You’d notice that as of late, my entries are becoming a bit techie… well… I’m in the mood for techie stuff these days, so bear with me, cuz I kinda got bored of just ranting my ass out.
