Archive for the ‘LINUX *NIX’ Category

There is a lot of jargon around Big Data. Here is a fast-track version (my version) of how the Hadoop MapReduce algorithm solves the big data problem, with an example for each piece of jargon.

I have taken a use case of aggregating SYSLOG data coming from thousands of Cisco ASA appliances and then performing data analysis on it using Hadoop MapReduce. The ASA log in this example is 2 TB in size!!

SR# – Keyword/Framework – Explanation

1. HDFS (core component of Hadoop) – the Hadoop Distributed File System.

2. Hadoop – the Java-based framework itself; its code stores data on HDFS.

3. HBase (core component of Hadoop) – provides fast key-value lookup; built on top of HDFS.

4. Sqoop – data integration from SQL databases into Hadoop.

5. Pig – developed by Yahoo to analyse big data sets. The Pig programming language is designed to handle any kind of data – hence the name! Pig is made up of two components: the first is the language itself, which is called PigLatin, and the second is a runtime environment where PigLatin programs are executed. Think of the relationship between a JVM and a Java application.

6. Hive – SQL over Hadoop.

7. MapReduce – Hadoop MapReduce is a programming model and software framework for writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes. It is used for processing structured, semi-structured and unstructured data, and its output is human-readable. It works with key-value pairs and has two major steps – Map and Reduce. The walkthrough below follows our ASA log through the whole pipeline.
Input – reads data from storage: files, DBs, images, logs. The input key is the file name (or can be null); the input value is the line itself. For example, say we have a large dataset of ASA firewall log files:

---- input log file: asa.log-2TB.input ----
20111011 /urlYYY #Line1
20111011 /urlZZZ #Line2
20111012 /urlYYY #Line3

Slicer – chops the big data (damn this buzzword) up into smaller chunks. If the asa.log file arrives at 2 TB in size, it is chunked into smaller pieces, which are then written onto HDFS (a toy sketch of this step follows below):

---- slice asa.log-2TB.input into smaller chunks ----
asa.log.2gig-slice01.log
asa.log.2gig-slice02.log
asa.log.2gig-slice03.log
asa.log.2gig-slice04.log
... and so on
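
To picture the slicing step, here is a toy sketch in Python that chops a big log into fixed-size slices on line boundaries. It is an illustration only – in real Hadoop, HDFS splits files into blocks automatically; the 2 GB slice size and file names just mirror the example above.

#!/usr/bin/env python
# Toy illustration of the Slicer step: chop a big log into fixed-size
# chunks on line boundaries. Real HDFS splits files into blocks for you;
# the names below just mirror the example file names above.

SLICE_BYTES = 2 * 1024 ** 3  # 2 GB per slice, as in the example

def slice_log(path="asa.log-2TB.input"):
    slice_no, out, written = 0, None, 0
    with open(path, "rb") as src:
        for line in src:
            if out is None or written >= SLICE_BYTES:
                if out:
                    out.close()
                slice_no += 1
                out = open("asa.log.2gig-slice%02d.log" % slice_no, "wb")
                written = 0
            out.write(line)
            written += len(line)
    if out:
        out.close()

if __name__ == "__main__":
    slice_log()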

Map – transforms the input data into key-value pairs for easy processing and later aggregation. Here is how this step works. Example: mapping happens here, one map per slice (data set #1, #2, #3 … n):

---- map asa.log.2gig-slice*.log, data sets #1..n ----
20111011 /urlYYY #Line1
20111011 /urlZZZ #Line2
20111012 /urlYYY #Line3

Total of 3 lines, but the mapped data becomes something like this:

Key1: urlYYY value: 20111011
Key2: urlYYY value: 20111012
Key3: urlZZZ value: 20111011

Still the same 3 lines, but mapped into key-value pairs (key and value).

Partition or Sort – sorts the output of the Map operation for transfer to the reducers. Here we simply sort the above output to hand the data on for further processing:

---- ready to transfer: asa.log.sorted ----
Key1: urlYYY value: 20111011
Key2: urlYYY value: 20111012
Key3: urlZZZ value: 20111011

Basically the Linux sort -n command!
Reduce – aggregates the data. Here we process the sorted data and hand it over to the merger:

---- aggregate now: asa.log.aggregated ----
Key1: urlYYY value: 20111011, 20111012 (from data set 1)
Key3: urlZZZ value: 20111011 (from data set 1)
Key9: urlYYY value: 20111011, 20111012 (from data set 2)

You see, Key1 and Key2 have been reduced down to one line. The new line, Key9, came off the other data set (#2).
Merger – merges the outputs of two or more Reduces. Since we have a large asa.log file (2 TB of it), we will have many data sets, and there will be duplicate keys across them after the reduce step (above); we merge those duplicates into one:

---- merge: asa.log.merged ----
Key1: urlYYY value: 20111011, 20111012
Key3: urlZZZ value: 20111011

You'll notice that Key1 and Key9 (from the reduce step) are now merged into one. That is how Hadoop's smart processing reduces the storage required for the big data problem.
Output – writes the output of the MapReduce job to disk in human-readable format:

---- output: asa.log.output ----
Key1: urlYYY value: 20111011, 20111012
Key3: urlZZZ value: 20111011

Now programmers can use the Pig programming language to write programs that dive deeper into the data.
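
The whole pipeline above fits in a few lines of Python. Here is a minimal sketch in the style of Hadoop Streaming scripts (mapper and reducer reading stdin, writing stdout); the two-column log format is the one assumed in this example, not a real ASA parser, and mr.py is just a made-up file name.

#!/usr/bin/env python
# Minimal sketch of the Map and Reduce steps above, Hadoop Streaming
# style: mapper and reducer talk via stdin/stdout, and the sort in
# between plays the role of the Partition/Sort step.
import sys

def mapper():
    # "20111011 /urlYYY"  ->  "urlYYY<TAB>20111011" (a key-value pair)
    for line in sys.stdin:
        parts = line.split()
        if len(parts) >= 2:
            date, url = parts[0], parts[1].lstrip("/")
            sys.stdout.write("%s\t%s\n" % (url, date))

def reducer():
    # Input arrives sorted by key, so equal URLs are adjacent; collect
    # each URL's dates into one line: "urlYYY  20111011, 20111012"
    current, dates = None, []
    for line in sys.stdin:
        url, date = line.rstrip("\n").split("\t")
        if url != current and current is not None:
            sys.stdout.write("%s\t%s\n" % (current, ", ".join(dates)))
            dates = []
        current = url
        if date not in dates:
            dates.append(date)
    if current is not None:
        sys.stdout.write("%s\t%s\n" % (current, ", ".join(dates)))

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()

Run it locally as 'python mr.py map < asa.log | sort | python mr.py reduce' – the shell sort stands in for the Partition/Sort step. On a real cluster, Hadoop Streaming runs the same two scripts over all the slices in parallel.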

8. Use case – who uses Hadoop MapReduce, and where?
LinkedIn – if you are a LinkedIn user, you may have noticed the "people you may also know" section; that is a hint of Hadoop MapReduce processing at work.
eHarmony – find your matches (weekend fun).
Facebook – the recommendations section.
Yahoo – a big user; search assist and the normal search engine.

If you are a "pure" network engineer and still have a mortgage to pay, this post is probably for you! Those who are preparing for CCIE Voice, application firewalls or F5 load balancers are on the right track.

In short, the SDN (Software Defined Networking) cat is out of the bag now. SDN will need no more of the so-called CLI monkey (the network engineer) to configure and monitor traditional switches/routers. SDN products are coming soon to a cinema near you.

The dynamically changing nature of the IT industry keeps all of us (in IT) awake till 2 AM – that sounds about right! I remember way back in 2010, when I was studying for CCIE Security, I bought an IPS appliance off eBay, and as soon as it arrived on Tuesday afternoon I was on it until 3 AM. I remember it was 2:30 AM when my neighbour "Mrs Kathy" knocked on my door and asked why I had been vacuuming my flat for the last 3 hours; she couldn't sleep because of the noise. I said no, I am not vacuuming my carpet at all – in fact, I don't even have a carpet to start with! She grumbled and said she could still hear a "strange" noise coming out of my flat. I said, oh, I bought this small machine that might have some strange noise coming out of it. She looked at me and said politely, "go to bed and have some life". You know it is true: we have to stay up late or study during the weekend to keep up with the ever-changing world of IT. Fun or fuss – it's your call to get along with it or pick another career that is not so dynamic. That is my little real story. Let's come to the point now!

In this post, I will try to keep everything much simpler than the way it is hovering over us. I think this is about the time to decide whether to stay in a pure, so-called 'network engineering' role or move into the application, systems and virtualization (Cloud is the right buzzword) space.

All network engineers should diversify their skill set; the day is not far off when employers stop advertising pure network engineer roles. Nobody needs an old Pascal or Clipper programmer these days, do they?

I put "pure" intentionally in BIG quotes; there is a reason for that. If you look at any network engineer role that Google or another big web company is advertising, they are asking for Perl/Python scripting. Why is that? Now you probably think: it is a network engineer role, but they are asking for scripting knowledge – this doesn't sound right. Since when does a Cisco or Juniper router need to be scripted? Those who are preparing for the CCIE Routing and Switching lab exam will probably have used a TCL-based script to check ping connectivity across the topology in the lab exam, but most network engineers (especially those coming from small shops) won't have a clue about using a script on a router.
But wait ….. there is a catch as to why Google/Amazon need a Perl/Python junkie for a networking role. Well, simply because of the power of the open-source philosophy.

Google and Amazon are the biggest consumers of networking equipment on this planet, equipment that vendors like Cisco/Juniper build for them as well as for other companies. Most of the vendors' cash flow comes off these big companies because they buy switches from them – simple. Now, what happens if these big consumers (Google/Amazon) decide to start building their own switches? You must be wondering what I am talking about. Why would anyone build switches when there are off-the-shelf switches they can buy?

The fact is, these vendors have had it so good for such a long time; they have milked plenty out of writing the software. A 48-port switch from Cisco or from Juniper has approximately the same amount of chips/silicon, so the original (OEM) hardware costs almost the same. The silicon chip costs the same no matter which vendor is using it. The switch price is decided by the cost of the software and the feature set. Sounds familiar now, with Cisco IOS and its feature sets (voice, security, advanced enterprise et al.)?

For these top consumers (Google/Amazon), cost is probably not the issue. They've got the money and they can buy any vendor's switches. The issue comes when they want to do something with the switch but can't, because switch vendors do not release the source code with the switch. As we all know, Google recruits the best of the best minds and has an in-house programming team – the so-called 'Python/Perl' programmers.
Cisco never had a merchant-silicon switch in their portfolio until the Nexus family of products released late last year. So much customer base (banking/share market/financial institutions) and no merchant-silicon switch? You see, someone else started building merchant-silicon switches and ate up the market share. Time is money! Google's philosophy works the same way: they want a feature set on a switch and they want it now. Most vendors won't even look into introducing new features; their teams are so busy fixing bugs from the previous release that they have no resources to work on a new feature. Well, the closed-source world works this way, and it has for as long as I have been working in IT. It is as it is, and as its name implies (closed source, black magic).

In this modern day and age, thank god, things have started changing. Take a step back: had Cisco/Juniper made their networking equipment code available under a GPL licence, it would have been easy for anyone to add and remove the features they wanted on the fly. It would have been just like any other open-source project that we see on SourceForge.net. Now the game of depending on the vendor to get a feature set is changing rapidly. The genius brainchildren at Google/Amazon have finally decided not to depend on these vendors anymore to get a new feature they want today, now. This is fair enough and fair game – just like a kid wants to play with a toy today, while he's a kid. It would be meaningless having a bunch of toys stacked in the backyard when you're 50, wouldn't it? Anyway… the matter of fact is, Google's traffic is so huge that no vendor on this planet was able to provide them the right equipment to handle their massive data the way they wanted. Building their own is the only option for them.

As we all know, Google already has a team of engineers working on building their own network switches. They order cheap silicon (from Taiwan) and build their own. Are these switches running Chrome OS or Android, and what about an IOS feature set? The answer is no: the 'IOS' for these no-name-brand switches is a standard Linux kernel (version 2.6, oh yeah) plus an open protocol stack that comes in a tar file – OpenFlow. The engineers get root shell access and write their own code to develop the switch feature set they want, today. Is this new? Probably not – this is what open source is all about. The magic stick is called 'OpenFlow', and it is running wild in the open-source community to power these no-name-brand switches. Now you'd be thinking: if these big giants have started building their own switches, what the heck are the other vendors going to do with their products? Well, believe it or not, the vendors have already started the race with Google and the other web giants. Cisco, Juniper, IBM and HP have all started introducing OpenFlow features in their switches:

Cisco OpenFlow: http://blogs.cisco.com/tag/openflow/
Google’s secret 10 GIG switch: http://www.nyquistcapital.com/2007/11/16/googles-secret-10gbe-switch/
IBM has already released OpenFlow-based switches – IBM OpenFlow switches

There are many advantages to open-source code running on Taiwanese-silicon switches. The main ones are:
1. Develop the feature you want "TODAY" (don't wait years for a small feature set).
2. Software controlled – no CLI or expensive engineer required to configure a switch.
3. Easy to take the switching code to the next level – the end of the vendor war.

BTW, if you are not already aware of it, Google's G-Scale production network is already running on their own home-grown OpenFlow-based switching platform. They've figured out how to hook sFlow (their internal one) onto OpenFlow. Full SDN is based on sFlow and runs on top of OpenFlow. There are only two vendors at the moment who have solved this L2 and L3 issue with sFlow and OpenFlow; Nicira is the one that comes to mind with a full SDN product. Well, they've solved the issue at the right time – Cisco and the other vendors are still figuring it out. See the Cisco link above: Cisco has a dedicated coding team to develop OpenFlow in their switches.

You now have an idea of what I am whining about in this post. Back to our original topic: why the next generation of network engineers should have coding skills, and why Google and the other giants want a network engineer equipped with the Perl/Python toolset. It now makes obvious sense that it's fair game for Google/Amazon to ask a network engineer, "hey, do you know Perl/Python?". These web giants just do not want a "show ip route" or "ip route" type of network engineer (oh, the CLI monkey); they need more bang for their buck. Points #1 and #2 above are the reason a traditional network engineer will no longer be in demand in the coming years. The fact is that OpenFlow- and sFlow-based network hardware is going to be software-GUI provisioned, with no vendor limitation: you can have Cisco, Juniper, raw no-name-brand switches, all managed and provisioned by a single GUI (the sFlow controller). Since it's software driven, everyone knows how to click, and whoever clicks knows how to read work instructions too. If everything is going to be software driven, then with a few mouse clicks an HR lady could easily provision SDN-powered switches/routers. SDN-powered switches/routers can be shipped to site with no configuration; a sparkie plugs one in when he goes to do the cabling, and the SDN (sFlow) controller finds it and automatically pushes a pre-built template configuration. These templates are created by the Perl/Python type of network engineer. The HR lady can now easily select a template and push the configuration to the newly plugged-in switch with a few mouse clicks. Sometimes she might get too busy, and she can easily schedule the provisioning task for midnight; during the day she can focus on her regular HR tasks. (A toy sketch of what such template pushing could look like follows below.)
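
To make the template idea concrete, here is a minimal sketch assuming a purely hypothetical controller that accepts a rendered config over an HTTP endpoint – the controller address, the /v1/push-config endpoint and the template are all made up; the point is only that configuration becomes data a script (or a GUI button) can push.

#!/usr/bin/env python
# Minimal sketch of template-driven provisioning. The controller address,
# endpoint and config template are hypothetical; the point is that the
# switch config becomes data that a script (or a GUI button) pushes.
import json
from string import Template
from urllib.request import Request, urlopen

# A pre-built template, the kind the Perl/Python type engineer writes once.
ACCESS_PORT = Template(
    "interface $port\n"
    " switchport access vlan $vlan\n"
    " spanning-tree portfast\n"
)

def provision(switch_id, port, vlan,
              controller="http://sdn-controller.local:8080"):
    config = ACCESS_PORT.substitute(port=port, vlan=vlan)
    body = json.dumps({"switch": switch_id, "config": config}).encode()
    req = Request(controller + "/v1/push-config", data=body,
                  headers={"Content-Type": "application/json"})
    return urlopen(req).read()  # the controller applies it to the switch

if __name__ == "__main__":
    # One click in the GUI could boil down to a call like this:
    provision("switch-042", "GigabitEthernet0/1", vlan=10)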

And what about those template nerds? Once the templates are created, the sFlow/OpenFlow-powered switches start configuring themselves within the SDN framework. So what would these nerds (the Perl/Python type of network engineers) do after they have created the required templates? And who is going to monitor and troubleshoot those newly shipped switches? Well, the HR lady can't do that; she only checks and acts according to the GUI work instructions (WI). The answer is that SDN takes care of all these tasks. There is no room for mistakes, because all the kinks have already been tested and taken care of well in advance by the Perl/Python-powered network engineer. When the HR lady and the nerd wake up and get to work the next morning, they see 90% of the traffic load already running over these switches in their production network.

Is this just fantasy, or have I lost my mind thinking about such a crazy thing? What would this type of network engineer do all day long at the Google office if everything has gone templated and automatic, with the HR lady doing the provisioning work (and she can easily cron/schedule it for night time)? Well, the answer is that at the Google office you'll see these nerds doing other innovative work, rather than supporting customers (as a traditional network engineer does) for simple things like "the switch port is not configured", "oh, the port was supposed to be a trunk" and other config errors. Now you probably think I was high when I wrote this post – pun intended. Some of you may be thinking: well, I am a network engineer; the switching part has been eaten by the OpenFlow revolution, but I will still be able to get a job on routers – someone needs to do the routing if not the switching. Well, don't kid yourself here, mate: Open vSwitch/sFlow-powered SDN products are already on the horizon. These new technologies are baking at a very fast rate – probably at 300 degrees Celsius in a microwave oven – and the stable SDN will be out in the wild before we can even think. As usual, you've probably noticed that early-season mangoes do not taste as good as the later, naturally ripened ones; the same concept applies here, and the current crop of sFlow-based SDN products is quite buggy. Open vSwitch supports a combination of NetFlow, sFlow, SPAN, RSPAN, CLI, LACP and 802.1ag.

Nicira already has a vSwitch product out and available today. Software-controlled networks are the current market trend, and SDN will be the next-generation network for the bigger enterprises. We're very close to experiencing SDN in real life. (Update April 2013: btw, VMware bought this company and added it to their portfolio.)

Source: nircia.com

I remember, 5 years ago when VMware started doing virtualization, nobody would put their SQL or Exchange server on a VM. The DB/APP guys would go ballistic if you even whispered "SQL on a VM" to the IT manager. Hasn't that changed now? Oh yeah! The matter of fact is, these days you have no choice but virtualization. This is exactly what we will see here – a software-powered network, SDN in action, taking the networking world to the next level. No offence, but this is one of the reasons network engineers should multi-skill themselves! You can argue anything for argument's sake, but you can't argue against what is going to be the future trend, and you will feel and experience this new and upcoming SDN stuff. It is going to hit everyone, just as the cloud and the dotcom bubble did in the past – nothing new here.

In the next blog entry I will cover the OpenFlow architecture and some scripting features.

Cheers, Push
4xCCIE (voice/security/SPv3/DC)

Update 17 April 2014 – VMware bought Nicira (back in 2012) and SDN has started kicking in!

If you live in Australia you have probably heard about the mining boom; so it was with the famous "DotCom bubble" way back in the 199x years. Recently the new buzzword "BigData" in the IT data-mining space seems to be the technology trend for many enterprises and telcos; they need big data implemented to solve their traditional issues ASAP. As we all know, cloud computing has made life so much easier, and now it looks like we'll be doing clouds forever! At the end of the day, who wants to wait for the delivery of their servers, then rack-and-stack and then build? It takes forever! Obviously, this is a totally new era in the application space after the cloud boom, one that brings up new IT jobs and solves the traditional problem of big data sets. Huh? New jobs! Good for the IT industry; we'll never be out of a job. Most big data products use smarter algorithms to index the data and present it in human-readable format to IT analysts, and Hadoop dominates the big data space: Cloudera's product is built on Hadoop, and Splunk is another top product competing in the same space. I have some experience with Hadoop deployment – a year ago (before this bubble started) I had the opportunity to deploy a 37-node Hadoop cluster for parallel processing of unstructured data, and I know the hurdles and challenges we went through in this domain. I think the Hadoop code has matured over time. IT engineers who work in the big data domain have the following titles/roles:
1. DATA SCIENTIST (probably eq. to CCIE?)
2. DATA ANALYST (probably eq. to CCNA)

Data scientist – you must be joking here! Nope! Keep reading…
Here are my thoughts on these newly defined roles for folks who are, or will be, working in the big data domain. Who are these data scientists working on big data? Industry-accelerated PhDs? People with multiple master's degrees? Folks with no university degree at all? Well, the simple answer is "anybody" – it doesn't take a PhD to get the title of "scientist". Funny, but very true! Personally, I love the job title "Data Scientist". It has certainly made the job title glamorous for the folks who are really smart and working in the IT industry (without any degrees); it has given both name and fame to the role. Don't get me wrong, but many organizations have started hiring data scientists to solve their structured and unstructured data problems. Mostly, data scientists work on futuristic products: new product development requires data to correlate inputs, and those inputs come from big data correlation.
DJ Patil, the co-inventor of the term, defines a data scientist as:
"A data scientist is that unique blend of skills that can both unlock the insights of data and tell a fantastic story via the data."
Jake Porway, of Data without Borders and the New York Times, defines it as:
"A data scientist is a rare hybrid, a computer scientist with the programming abilities to build software to scrape, combine, and manage data from a variety of sources and a statistician who knows how to derive insights from the information within. S/he combines the skills to create new prototypes with the creativity and thoroughness to ask and answer the deepest questions about the data and what secrets it holds."
A data scientist needs strong data skills, strong knowledge of statistics and the ability to program algorithms.
WHO HIRES DATA SCIENTISTS?
Anyone who invests in BigData products – big banks, telcos, managed IT service companies etc.
Data scientists are not cheap either! Obviously, you get what you pay for.

Data Analyst – Not every big data problem needs a data scientist. If you are just starting out in the big data domain, you might have to begin your career as a data analyst. Remember your first few months in a NOC engineer role – the night shifts, the time spent staring at the screen for SNMP/SYSLOG traps? Well, this role is a somewhat upgraded version of that, but you have the opportunity to become a data scientist, not to mention the opportunity to learn from data scientists. Every analyst needs to be able to tell and sell a story from the insights that come out of big data analysis. A data analyst is not expected to have the programming skills to build algorithms, but needs strong SQL skills in addition to a good understanding of analytics packages. Typically a data analyst is cheaper than a data scientist.

BIGDATA MARKET:

1. Online platform companies.
2. Content sites.
3. Big banks – fraud detection, app logs, data correlation et al.
4. Parallel processing of data that cannot be handled by traditional databases (SQL, Oracle, Informix et al.).
5. Share market – and the list goes on! Infinite possibilities.
Not to mention that the original users (or abusers) of Hadoop have been using it for ages – yes: LinkedIn, Facebook, Google, Yahoo.
Please leave your comments – what do you think of this new buzzword? In the next post (when? probably when this bubble is gone), I will cover building a career in the big data domain.

Cheers, Push
2xCCIE (Voice/Security)

Recently I switched my choice of Linux distro from Ubuntu to openSUSE 11.2 and tried connecting to the internet using a USB dongle. KNetworkManager is the default connection manager for the USB dongle. It works to some extent, but it's really dodgy and a pain!! Sometimes it will connect, but most of the time it will not. Every time you connect the USB dongle to the laptop, the SUSE desktop assigns it a new USB port ID, e.g. /dev/ttyUSB0, /dev/ttyUSB3 etc., and KNetworkManager takes a while to re-locate the port ID.

To overcome the above KNetworkManager issue, I came up with my own script that works with wvdial.conf (a sketch of the idea appears below, after the config notes), and it works really well. Make sure you allow at least 1 minute between successive connection attempts. If you connect, then disconnect, and then reconnect within 1 minute, you will see this error:

/var/log/messages

Jan  4 16:26:49 linux-kz77 modem-manager: Got failure code 3: No carrier
Jan  4 16:30:28 linux-kz77 modem-manager: Got failure code 3: No carrier
Jan  4 16:31:12 linux-kz77 modem-manager: Got failure code 3: No carrier

and your wvdialer will show like this:
linux-kz77:/etc # wvdial optus1
--> Ignoring malformed input line: "connect "/usr/sbin/chat -V -f /etc/ppp/optus3g""
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
ATQ0
OK
--> Re-Sending: ATZ
ATZ
OK
--> Modem initialized.
--> Idle Seconds = 300, disabling automatic reconnect.
--> Sending: ATDT*99#
--> Waiting for carrier.

"Waiting for carrier" means the USB dongle is waiting for a carrier signal from the 3G base station (UMTS or whatever they have).

Here are the steps:

Step 1:
Make sure the PPP and wvdial RPMs are installed.

linux-kz77:/etc/ppp # rpm -qa | grep ppp
ppp-2.4.5.git-3.1.i586
linux-kz77:/etc/ppp # rpm -qa | grep wvdial
wvdial-1.60-64.1.i586

Step 2

Back up your wvdial.conf and add a connection using the terminal/command prompt.

cp /etc/wvdial.conf /etc/wvdial.conf.bak

> /etc/wvdial.conf    (truncate the file to start fresh)

vi /etc/wvdial.conf    (paste the lines below into wvdial.conf and save it)

Note: The username and password below are dummies. The phone number is *99# for all 3G providers in Australia. Make sure you check your port, ttyUSBn.

To find out which USB port the dongle is using, first run 'tail -f /var/log/messages' and then plug the dongle into your laptop.

tail -f /var/log/messages
Jan  4 17:21:33 linux-kz77 modem-manager: (ttyUSB3) opening serial device...
Jan  4 17:21:33 linux-kz77 modem-manager: (ttyUSB3): probe requested by plugin 'Huawei'

Also note: for its communication the USB dongle uses 3 ttyUSBx ports – one for data, one for commands, and the other for the carrier.
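
The wrapper idea boils down to something like the sketch below – an illustration only, not my exact script; picking the highest-numbered ttyUSB device as the modem port is an assumption that happened to match the 3-port layout above.

#!/usr/bin/env python
# Sketch of a wvdial wrapper: find the dongle's current command port and
# point wvdial.conf at it before dialing. Illustration only -- picking the
# highest-numbered ttyUSB device is an assumption, not a rule.
import glob
import re
import subprocess
import sys

CONF = "/etc/wvdial.conf"

ports = sorted(glob.glob("/dev/ttyUSB*"))
if not ports:
    sys.exit("No ttyUSB device found - is the dongle plugged in?")
modem = ports[-1]  # the command/PPP port was the last one (ttyUSB3) on my dongle

with open(CONF) as f:
    conf = f.read()
# Rewrite every "Modem = /dev/ttyUSBn" line to the detected port.
conf = re.sub(r"(?m)^Modem = /dev/ttyUSB\d+", "Modem = %s" % modem, conf)
with open(CONF, "w") as f:
    f.write(conf)

print("Dialing via %s ..." % modem)
subprocess.call(["wvdial", "optus1"])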

linux-kz77:/etc #
linux-kz77:/etc # more wvdial.conf

[Dialer Defaults]
Modem = /dev/ttyUSB3
Baud = 412000
#Init1 = connect "/usr/sbin/chat -V -f /etc/ppp/optus3g"
#Init3 =
#Area Code =
Phone = *99#
Username =test
Password =test
Ask Password = 0
Dial Command = ATDT
Stupid Mode = 0
Compuserve = 0
Force Address =
Idle Seconds = 300
DialMessage1 =
DialMessage2 =
ISDN = 0
Auto DNS = 1

[Dialer optus1]
Modem = /dev/ttyUSB3
Stupid Mode = 1
Baud = 460800
Init10 = lock
Init11 = crtscts
Init12 = modem
Init13 = noauth
Init14 = defaultroute
Username  = guest
Password  = guest
connect "/usr/sbin/chat -V -f /etc/ppp/optus3g"
Init30 = noipdefault
Init31 = usepeerdns
Init32 = nobsdcomp
Init33 = novj

linux-kz77:/etc #

To make a connection (note: the "Ignoring malformed input line" warning below is wvdial skipping the stray connect line in the config; it is harmless):
linux-kz77:/etc #
linux-kz77:/etc # wvdial optus1
--> Ignoring malformed input line: "connect "/usr/sbin/chat -V -f /etc/ppp/optus3g""
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
ATQ0
OK
--> Re-Sending: ATZ
ATZ
OK
--> Modem initialized.
--> Idle Seconds = 300, disabling automatic reconnect.
--> Sending: ATDT*99#
--> Waiting for carrier.
CONNECT
--> Carrier detected.  Starting PPP immediately.
--> Starting pppd at Mon Jan  4 16:31:51 2010
--> Pid of pppd: 12229
--> Using interface ppp0
--> local  IP address 122.110.83.192
--> remote IP address 10.64.64.64
--> primary   DNS address 61.88.88.88
--> secondary DNS address 211.29.132.12
--> Script /etc/ppp/ip-up run successful
--> Default route Ok.
--> Nameserver (DNS) Ok.
--> Connected... Press Ctrl-C to disconnect  <-- to disconnect

Some troubleshooting:

Sometimes, when you make frequent changes to wvdial.conf, all the other scripts in /etc/ppp/ need to be re-initialized, which takes some time. To get around this quickly, copy wvdial.conf.bak (the fresh, unedited wvdial.conf) over /etc/wvdial.conf, run wvdial once, and then launch wvdial <connection name> again.

linux-kz77:/etc #
linux-kz77:/etc # wvdial optus1
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
--> Re-Sending: ATZ
--> Modem not responding.
linux-kz77:/etc # wvdial optus1
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
--> Re-Sending: ATZ
--> Modem not responding.

linux-kz77:/etc #
linux-kz77:/etc #
linux-kz77:/etc # wvdial optus1
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Modem initialized.
--> Sending: ATDT*99#
--> Waiting for carrier.
^CCaught signal 2:  Attempting to exit gracefully...
--> Disconnecting at Mon Jan  4 16:53:08 2010

^X

^C
linux-kz77:/etc #
linux-kz77:/etc #
linux-kz77:/etc # wvdial optus1
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
^CCaught signal 2:  Attempting to exit gracefully...
ATQ0
OK
--> Re-Sending: ATZ
ATZ
OK
--> Modem initialized.
--> Disconnecting at Mon Jan  4 16:53:20 2010
^C
linux-kz77:/etc #
linux-kz77:/etc # cp wvdial.conf_working1 wvdial.conf
linux-kz77:/etc #
linux-kz77:/etc #
linux-kz77:/etc # wvdial optus1
--> Ignoring malformed input line: "connect "/usr/sbin/chat -V -f /etc/ppp/optus3g""
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
ATQ0
OK
--> Re-Sending: ATZ
--> Modem not responding.
linux-kz77:/etc #
linux-kz77:/etc # cp wvdial.conf
wvdial.conf           wvdial.conf.bak       wvdial.conf_working   wvdial.conf_working1
linux-kz77:/etc # cp wvdial.conf.bak wvdial.conf
linux-kz77:/etc #
linux-kz77:/etc #
linux-kz77:/etc # wvdial
--> WvDial: Internet dialer version 1.60
--> Cannot open /dev/modem: No such file or directory
--> Cannot open /dev/modem: No such file or directory
--> Cannot open /dev/modem: No such file or directory
linux-kz77:/etc #
linux-kz77:/etc #
linux-kz77:/etc # cp wvdial.conf_working1 wvdial.conf
linux-kz77:/etc # wvdial optus1
--> Ignoring malformed input line: "connect "/usr/sbin/chat -V -f /etc/ppp/optus3g""
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Sending: ATQ0
ATQ0
OK
--> Re-Sending: ATZ
ATZ
OK
--> Modem initialized.
--> Idle Seconds = 300, disabling automatic reconnect.
--> Sending: ATDT*99#
--> Waiting for carrier.
CONNECT
--> Carrier detected.  Starting PPP immediately.
--> Starting pppd at Mon Jan  4 16:54:33 2010
--> Pid of pppd: 13188
--> Using interface ppp0
--> local  IP address 122.110.41.160
--> remote IP address 10.64.64.64
--> primary   DNS address 61.88.88.88
--> secondary DNS address 211.29.132.12
--> Script /etc/ppp/ip-up run successful
--> Default route Ok.
--> Nameserver (DNS) Ok.
--> Connected... Press Ctrl-C to disconnect

linux-kz77:/etc #
linux-kz77:/etc # dmesg | grep sierra
[   11.602567] sierra 6-1:1.0: Sierra USB modem converter detected
[   11.604800] usbcore: registered new interface driver sierra
[   11.604802] sierra: v.1.3.7:USB Driver for Sierra Wireless USB modems
linux-kz77:/etc #

-Push Bhatkoti

Just think: your lab is in a week's time and you've lost all your virtual machines (pub, sub, Unity etc.). The ESX server doesn't boot!! Huh… happy birthday!

Well, I put in about 15 hours figuring out how to bring up the crashed VMware ESX server.

Root cause of the server CRASH:

I tried mounting a 1 TB disk to VMware, but the VMware ESX server kernel and anaconda don't have UHCI support for USB devices.
As a result, it crashed the VMware modules, and I couldn't recover it using several methods, including booting from a RedHat/Fedora/CentOS disk and the esxcfg-boot -p -r stuff. Nothing seemed to work.
When I chrooted into the ESX server's hard drive, I found the /etc/pam.d files were gone, and the /etc/sysconfig/network-scripts/ folder was empty.

Solution:

It was damn easy, yet I wasted a whole Saturday night trying to figure out the best method. The challenge was that I had Unity, IPCC and CCM 4-to-6 VMware images on the server, so building from scratch wasn't an option.

I tried everything but still had no luck till 4 AM. I grabbed a fizzy drink, had a quick 20-minute nap and then resumed. A new idea came into my head; Google hadn't told me what to do or turned up any helpful URL. I popped the VMware ESX 3.0.1 CD into my other laptop's CD-ROM drive and started installing it. I saw there is an option during a fresh install which says "do you want to keep the existing .vmx/datastore" – I just ticked it, and I was so relaxed after seeing that option. I took the CD out of my laptop, popped it into the ESX 3.0.1 monster server (ML580) and started the installation. During installation there is an option that says keep your VM store and wipe out everything else. Voila… there you go: the mother of all the trouble was just sitting there in the ESX partitioning options.

So I chose install (not upgrade), then ticked "keep the old VM store" in the partition preparation step, and continued the installation procedure.

In a nutshell, the above exercise installed the VMware ESX server and retained the old partition which had all my VMware images.

The whole process above took only 40 minutes.

After the installation finished, the server booted up with a flashy screen asking for a password. After I punched in my password, I immediately checked the .vmx files and found they were all there…

Sometimes the task is dead easy, but we just can't see it right away. I hate vendors not documenting things like this. Yes, VMware's poor documentation wasted my Saturday night – staying at home, doing nothing but playing with the ESX box.

Anyhow, it was good fun and I really enjoyed it; there was a challenge, and it was good to find the solution by myself.

Cheers, Push

How to transfer files (GBs in size) from an ESX 3.0/3.0.1 server using an FTP client

A friend of mine asked if I would sell my CCIE server to him for $1500. Normally I wouldn't sell it – I'd rather keep it as a monument, having passed all my labs on this server.
But after several requests, to keep the friendship, I finally agreed to sell it. The challenge then was that I had to back up all the virtual machines.

Last Friday I bought a 1-terabyte (TB) hard disk (for AUD $200) and tried transferring the files using WinSCP. I couldn't transfer them because they were too big.

So finally I had to install vsftpd on the ESX server, make a softlink from my /vmfs to an FTP user's home directory, and then download using the FileZilla FTP client.

Hardware details: HP ML580, dual Xeon 3.0 GHz processors, 4 GB RAM. Loaded with CCM pub/sub, Unity and IPCC.

Here is the procedure:

  1. SSH to the ESX server and issue 'chkconfig iptables off' and 'service iptables stop' (it's the culprit)
  2. Download vsftpd-1.2.0-4.i386.rpm
  3. Copy vsftpd-1.2.0-4.i386.rpm to your ESX server
  4. SSH to your ESX server console using the root account (make sure /etc/ssh/sshd_config has root login enabled; otherwise use the method below)
  5. After logging in using a normal user, become the root user (su -)
  6. Install the FTP server: rpm -ivh vsftpd-1.2.0-4.i386.rpm --nodeps
  7. vi /etc/vsftpd/vsftpd.conf
    • Change anonymous_enable=NO (for security)
    • You need to add the following lines to force passive mode to only use ports 2050-5000.
      pasv_min_port=2050
      pasv_max_port=5000
  8. Start the FTP server with 'service vsftpd start'
  9. Create a user. Do not use 'anonymous' or 'ftp'.
    The username and password cannot contain symbols like @ or ' (single quote)

    • adduser mybackups
    • passwd mybackups
  10. To have the ftp server automatically start on boot, type:
    chkconfig --level 3 vsftpd on
  11. Remember you will probably have to change the permissions on the /vmfs volume you are using as the FTP destination:
    chmod 1777 /vmfs/volumes/(UUID of Folder)
    You can also link the path to a simpler name, such as:
    ln -s /vmfs/volumes/UUID-NAME/backups /backups
    Then use just '/backups' as the FTP path
  12. Use the FileZilla FTP client to download the files from the ESX server (see the sketch below for a scripted alternative)
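
For what it's worth, the same download can also be scripted with Python's standard ftplib instead of FileZilla – a sketch only, with the host name and the disk image name made up:

#!/usr/bin/env python
# Sketch: pull a VM disk image off the ESX box with Python's stdlib
# ftplib instead of FileZilla. Host and file names below are made up.
from ftplib import FTP

ftp = FTP("esx-server.local")      # hypothetical ESX host address
ftp.login("mybackups", "secret")   # the account created in step 9
ftp.set_pasv(True)                 # passive mode, ports 2050-5000 per step 7
ftp.cwd("/backups")                # the symlinked /vmfs path from step 11

# Binary transfer, written straight to the local disk.
with open("ccm-pub-flat.vmdk", "wb") as out:
    ftp.retrbinary("RETR ccm-pub-flat.vmdk", out.write)
ftp.quit()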

Gotchas?

Make sure the disk where you are going to store the files has an NTFS or ext2/3 file system.
I had FAT32, and it wouldn't copy files bigger than 4 GB; it kept failing with an error saying there was no disk space. How stupid is that error message – I had a 1 TB HDD and still got told it couldn't copy the file because the disk was running out of space.

Here are the limitations:

FAT16 – 2 GB max file size
FAT32 – 4 GB max file size
NTFS – above 2 TB
ext2/3 – 2 TB

That’s all.

Cheers, Push