Apple, Embrace Android!

Have you ever opened up the standalone Apple Music app on an Android device and asked yourself, “I wish we could get something like this on iOS!” Oh, you haven’t? It must be because you’ve only ever experienced Apple Music on an iOS platform (or don’t use Apple Music). Well, let me be the first to tell you that the experience on Android is, in my opinion, better than it is on iOS.

This got me thinking: Apple should seriously consider bringing its services to Android.

Yes, Apple sold fewer iPhones year over year but made more money because of the rising cost of iPhones. That’s not always going to happen, nor is it the point. The point is that they could be making even more money. That money could be made by bringing a few Apple apps (and a piece of hardware) over to the Android ecosystem… for a price.

Yes, there are a number of applications out there that you could use for cross-platform messaging. WhatsApp, Hangouts, Kik and Snapchat are obvious examples. However, what many Apple enthusiasts would love is a way to keep iMessaging with friends who happen to use non-Apple phones, or to use the built-in Messages application on their laptop despite owning an Android phone. There are plenty of other use cases, like allowing people whose workplaces strictly use iOS devices to continue conversations on their personal devices, or enabling cross-platform message syncing across iOS, macOS and Android devices.

Apple could also expand its iMessage App Store to the Android version of iMessage, creating an entirely new market of consumers who didn’t already have access to stickers and experiences. No additional app would be required, as it could all be accomplished in-app.

What would be the sweet spot for Apple to charge for a service like this? I’ve thought about this for some time. The price would have to be small enough that people would just pay it and forget about the fee almost instantaneously, but large enough to be part of a portfolio of Android experiences that could become Apple’s next billion-dollar business. Two bucks a month would be the sweet spot. It’s just low enough that people won’t care that it’s a thing they’re paying for, and it adds enough value that no one would really balk at the price point.

FaceTime is just a fancy video chat app, albeit one that has been deeply ingrained into the iOS, macOS and iCloud DNA. While I would like to sit here and say that it is better than Hangouts, WhatsApp or Skype, my experience has been that it really isn’t any better than those other apps. Ultimately, the connection to the service makes the biggest difference in how any video chat application functions.

The beauty of FaceTime is that it is simple. Open the app, select your contact and off you go. Hangouts isn’t that simple, and Skype (in my experience) always disconnects calls unexpectedly. It is that very simple workflow that has allowed adoption of the technology to blossom.

However, that simplicity is a hard sell as a paid app. If anything, it should be built into an iMessage bundle and should just work whenever an iMessage subscription is active on the Google Play account. FaceTime as a standalone product would be a hard sell, but I could see people spending a couple bucks for access to it if they didn’t want any other iCloud feature.

iCloud Mail and Calendar
Have you ever searched Google Play for iCloud? You should head over to the Google Play store and try it. There is a fledgling market of third-party applications trying to fill this void. We talk all the time about how credential security is so important, and yet I don’t think anyone could tell you how the applications you see on the Google Play store are using your credentials to provide iCloud services to Android devices. My guess is that they are just using an IMAP connection, which is the only “approved” way to make that connection outside of Apple’s ecosystem.

iCloud Mail and Calendar is the only email and calendar service I can think of in modern times that has a hardware restriction on usability. IMAP is a poor excuse when writing a native Android application isn’t hard, nor unmaintainable. I get that Android is your main competitor; however, access to email and calendar services is already possible through any desktop web browser. Native applications for Android would be a welcome change of heart for those who can’t ditch their iCloud accounts despite being on Android, or who have other Apple hardware in their lives that relies on iCloud services.
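To illustrate how thin that “approved” IMAP surface is, here is a minimal sketch of what any of those Play Store apps is presumably doing under the hood. The server name is Apple’s published iCloud IMAP endpoint; ICLOUD_USER and ICLOUD_APP_PASSWORD are hypothetical environment variable names I made up for this sketch (third-party IMAP clients need an app-specific password, not your Apple ID password).

```shell
# Sketch: querying iCloud Mail over IMAP, the only Apple-sanctioned route
# from outside Apple's own clients. imap.mail.me.com:993 is Apple's
# published IMAP endpoint; the two environment variables are hypothetical
# names you would export yourself.
IMAP_URL="imaps://imap.mail.me.com:993/"

if [ -n "${ICLOUD_USER:-}" ] && [ -n "${ICLOUD_APP_PASSWORD:-}" ]; then
    # With no extra path, curl issues an IMAP LIST and prints the
    # account's mail folders over TLS.
    curl -sS --user "${ICLOUD_USER}:${ICLOUD_APP_PASSWORD}" "${IMAP_URL}"
else
    echo "Set ICLOUD_USER and ICLOUD_APP_PASSWORD to try this sketch."
fi
```

That one URL and a password is the entire integration; everything else those apps layer on top is UI.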

Creating these applications (iMessage, FaceTime and iCloud Mail and Calendar) has the added benefit of letting them also function as Chromebook applications. While I completely understand Apple’s hesitation to provide applications for Chromebooks, it also underscores that Apple doesn’t exactly have an “education” option anymore. Opening up a revenue stream to lower-end consumers who may happen to have an iPhone but not a macOS device would be extremely beneficial for their cloud operations.

This begs the question: why not take all those iCloud services (Mail, Calendar, iMessage and FaceTime) and create an iCloud Suite for Android? Not only would that brand all of the services into a nice, complete package, it would also make billing easy. Put a monthly and annual fee on all four applications and you’ve got something that gives Android users access to rich Apple services, and a gateway for people to eventually switch to iOS for “native” use. It could even come in a bundle with Apple Music (or be included with Apple Music, if Apple were so inclined) so people could stay in their all-Apple environments with the exception of an Android phone. $5 per month or $50 per year would be well worth the cost of admission.

The only reason I can think of as to why this hasn’t already happened, besides the obvious “because Android isn’t iOS,” is that Apple isn’t hosting their own cloud on their own platform. Contracting out to Azure, Amazon S3 and Google Cloud services means none of that infrastructure runs on a wholly owned Apple platform, so allowing more people to have access to these cloud applications could increase the cost of providing these services to their existing customer base.

Apple Watch + Watch + Health
Apple Watch, despite what people think about it, is one of Apple’s newer successes. Apple Watch users can opt into being part of medical studies, some insurance providers are starting to give them out as incentives for healthier living and you can find people wearing them everywhere.

Have you heard the latest news about Android Wear? It’s kind of a mess right now, and the 2.0 edition of Wear that came out late last year is just as disappointing as its 1.x counterpart. I should know: I waited ages for Huawei to release their build for their Watch, only to find out it really didn’t make the experience better. Why wouldn’t Apple want to capitalize on the problems of Google’s OEMs and bring the most successful smartwatch on the market to Android, opening itself up to an entirely new market? It almost makes too much sense.

If Apple really wanted to make this on par with its iOS counterpart, they would want to bring over the Health app alongside the Watch app for watch management. Apple would then be able to add more people to their health initiatives overnight. More data points for their studies alone make this worth the investment. Add in all the revenue from more Apple Watches flying off the shelves and Apple may be able to become the smartwatch vendor.

Are there any other Apple products that you think should make the jump to Android?

Genuine Surprise, Thanks Nintendo!

If you keep up with the tech blogs, you would know last week was E3 week. E3 week is something that I have looked forward to every year for as long as I have been a gamer. I remember, fifteen years ago in middle school, getting my copies of Electronic Gamer Magazine, Nintendo Power and Game Informer to read as much as I could about the games I was going to want to buy with my paper route money. Without the luster of new consoles, the past few years haven’t brought anything exciting to E3. Every year, like clockwork, we see a new iteration of Call of Duty, a new iteration of Battlefield, a new iteration of Assassin’s Creed and a new iteration of some long-standing franchise that I couldn’t care less about.

Nintendo has, for the better part of a decade, been an outlier. Nintendo has never had the most powerful console on the market. Nintendo has always had unique ways to play games across the Wii, Wii U, 3DS and Switch. Nintendo had the market in the palm of their hand with the Wii but couldn’t capitalize on the Wii U, and has experienced a shrinking handheld market thanks to the rise of the smartphone. Nintendo, to their credit, found their niche in amazing first-party properties (see: Mario, Zelda, Pokemon, etc.) and third-party titles that you couldn’t find anywhere else. Nintendo has even gone to the extent of creating new IPs, like Splatoon, to bring new audiences to their ill-fated Wii U.

One thing the last ten years were missing was a new Metroid game. OK, that isn’t entirely true. Metroid: Other M and Metroid Prime: Federation Force are two entries in the Metroid franchise that people would rather not discuss. Metroid: Other M was a strange title that came with mixed reviews and played more like Ninja Gaiden than Metroid. Federation Force was a handheld game that had almost nothing to do with Metroid. It was almost like Nintendo wanted to create a new IP but decided late to slap the Metroid name on the title and call it a day. Did you know that Metroid turned 30 last year? I’m not entirely sure that Nintendo did, as they focused on Zelda’s 30th anniversary instead. It’s almost as if Nintendo wanted us to forget about Metroid.

Entering E3 for the last ten years, many people’s dream Nintendo announcement was a new Metroid Prime game. The Metroid Prime series debuted on the GameCube and was the first of a series of first-person adventure games, very similar in spirit to the adventure-platform games of NES and SNES past. The series spanned three titles. They were all excellent. After Other M flopped, every E3 people wanted a new Metroid game. The Wii U came, the Wii U went. No new Metroid game. At E3 2015, Nintendo trolled everyone with the announcement of Metroid Prime: Federation Force. A large part of the community thought that maybe they were only showing off the mech gameplay and that a real Metroid title was simply not being shown. That wasn’t the case.

June 13, 2017. Almost ten years after Metroid Prime 3 was released, Nintendo surprised the fans of the Metroid series. After confirming that the first-ever core Pokemon RPG was coming to Switch, a first for the series, Nintendo dropped a teaser trailer for Metroid Prime 4: a new Metroid game coming in 2018. Later in the day, a new Metroid game was announced for 2DS/3DS that is a retelling of Metroid II.

I was genuinely shocked that, at an event where we all thought we were not going to get a great showing, especially after Nintendo tempered expectations with a less-than-stellar Pokemon Direct a couple days before E3, they blew all expectations out of the water. Between two new Metroid games, a Switch Pokemon title confirmed, fantastic gameplay from Splatoon 2 and Super Mario Odyssey, and all the fun “eSport” competitions on the show floor, Nintendo won E3.

So you want to become a VMware Certified Professional…

Last Friday I obtained my fourth VCP certification, VMware Certified Professional 6 – Data Center Virtualization. While my motivation for obtaining this certification was renewal of the other three certifications I hold, it also made me think that I should really start working toward a VCAP, the VMware Certified Advanced Professional, which comes in two variants: design and implementation. However, we’re getting ahead of ourselves. How do you become a VMware Certified Professional?

The prerequisites for this are as follows (for new or expired certification holders):

1. Take a VMware-sanctioned training course.

2. Complete the vSphere 6 (or 6.5) Foundations Exam

3. Complete the VMware Certified Professional 6 – Data Center Virtualization Exam

Sounds easy, right? Right? Wrong. I spent over 60 hours studying for this exam, possibly more if you count all the work hours spent architecting and designing a greenfield deployment of vSphere and vCenter 6.0 U3. Having 8+ years of experience helps, but there are parts of vSphere 6, like vSAN, that I didn’t know a lot about. The jump from vSphere 5.5 to 6.0 is actually much more significant than I remembered thinking it would be. Yes, the vCenter jump is quite extraordinary, but I remember thinking, “There isn’t that much new in vSphere 6.0, right? Right?” Wrong.

There are many ways that you can study for this exam. While my path is unique to me, there are several different ways to acquire this knowledge. Find what works for you. If reading tomes of text isn’t your thing, use vBrownBag sessions, the VMware HOL and sites like Pluralsight to supplement the required classroom work. That said, read every last piece of documentation that you can. It’s a non-negotiable requirement. You’ll come across countless little tidbits that will inevitably end up on the exam.

Here is the list of materials I used to study for my exam:

You may be asking yourself, “No labs?” Labs are a very important part of studying for this exam. If you have the resources to set up a lab, do it. It’s great experience that you can’t get from reading or listening to lectures. I had the opportunity at work to architect and deploy our new vSphere 6 environment from the ground up. We are a big enough VMware deployment where we use Auto Deploy and PowerCLI non-stop.

Another thing to note: if you are a complete newbie at this, the vSphere 6 Foundations course that comes with Pluralsight access can assist in building a nested lab. While I’m not going to say it’s something everyone can put together easily, due to resource constraints, it does outline the process. For anyone wanting to see the power that VMware Workstation has to offer, this would be it.

In the event that you can’t build a lab, go to the VMware Hands-on Labs. They are actually quite fantastic. When I was studying for the VCP6-DT test, I used the HOL portal quite a bit. The HOL portal has labs for days. It’s just a matter of finding the ones that you want to focus on.

Edit: You’ll need keys for your lab. I suggest getting VMUG Advantage, which will get you lab keys. Here is what the EvalExperience gets you. It’s very much worth the cost of admission (and see if your employer will pay for this!)

Now, you may be asking yourself another question: “You have to renew these?” Indeed you do. When I got my first certification, I was told that certifications expire “when the next major version comes out.” Then, about six months later, VMware instituted a 24 month expiration for all their certifications. While I look at it as a reason to continue looking at VMware’s stack and attempting to get better at it, others complain it is a money grab. I get it. Hopefully if you’re looking to get this certification, your employer will be picking up the tab.

Another thing I ought to mention is that after you get your first VCP certification, you drop two of the requirements for the next exam. With a renewal, you don’t need to take a course or retake the Foundations exam. However, if you let your certifications expire, you need to retake the Foundations exam and take a course to obtain the certification again. I always put a calendar event six months before the certification expiration date so I know when I need to start thinking about this.

I ended up getting a 425. You need a 500. I know which questions I missed, because they were in areas that I didn’t study or review as thoroughly as I would have liked. I was really hoping for a 450 but I can’t complain. I came, I saw, I passed this exam. I hope this guide gives you a starting point to becoming a VMware Certified Professional!

Edit 2: You need a 300, not a 500 to pass. 500 is the max score you can get on the test. Thanks /r/vmware for pointing this out. I am fail. :p

I went away…

So, there doesn’t seem to have been any activity on this here blog.

I went away.

Ok, I didn’t go away but I did have a major project that I had to get started on that took over much of my time these last couple months. The result of that project was a new VMware certification, so I can’t complain too hard. 🙂

Expect more content to land here now that I will have a lot more free time to write down my thoughts.

First up: a new challenge I am going to undertake. More on that tomorrow (well, later today since it is already well past 0:00 local time).

Relearning Linux…

Today I started setting up a home lab. I have started to feel inadequate about my ability to properly work with Linux servers. I haven’t worked with a distribution in any capacity for some time now. Outside of VMware’s hypervisor (e.g. ESXi), CentOS appliances have been the most Linux that I have allowed myself to be exposed to.

In setting up Auto Deploy again with an architect I work closely with, I’ve come to the realization that I need to brush up on this. So, armed with a copy of VMware Workstation Pro 12, I have begun this new build. My hope is to have a number of services up and running.

Tonight, I am tackling DHCP and DNS. Wish me luck… I’ll let you know how it all goes.
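As a starting point, here is a minimal dnsmasq configuration sketch that covers both DHCP and DNS for a lab in one small daemon. The subnet, domain and hostname below are hypothetical placeholders for illustration, not values from my actual build.

```
# /etc/dnsmasq.d/lab.conf (hypothetical lab values)
domain=lab.local                               # DNS suffix handed to DHCP clients
dhcp-range=192.168.10.100,192.168.10.200,12h   # DHCP pool and lease time
dhcp-option=option:router,192.168.10.1         # default gateway for clients
address=/vcsa.lab.local/192.168.10.10          # static DNS record for a lab VM
```

Whether dnsmasq or a full bind/ISC-DHCP pair is the right call depends on how production-like you want the lab to feel; dnsmasq is simply the fastest way to get both services answering.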

Fixing ‘Can’t have a partition outside the disk!’

Another day, another strange bug. This is just how my life works, apparently.

When deploying newer versions of ESXi on Cisco C-Series servers that use Cisco’s FlexFlash SD card storage (although, admittedly, this could happen with any server vendor), you may run into this fantastic error message: “Can’t have a partition outside the disk! Unable to read partition table for device.” This essentially means that the existing partition table on the storage is in a format that can’t be used for an ESXi installation.

In looking into this issue we found Cisco KB CSCus51007 and countless blogs that tell you to do one of four things:

  1. Install ESXi 5.5U1 Customized Image and then upgrade to the flavor of ESXi of your choice.
  2. Take the SD card out, put it in a different computer, re-partition it (or make it blank) and then install ESXi.
  3. Boot into GParted and re-partition the SD card.
  4. Insert the SD card into a working ESXi host and use the recovery console shell to format the SD card.

I thought about this for a moment and came to a hypothesis: is it possible to get to the recovery shell from the ESXi installer? Knowing what I know about how ESXi works, it just loads everything into memory at boot. The installer has to work the same way, right? So, when I booted up my image of 6.0 U3, I got to the window where you select your disk, wrote down the C#:T#:L# of the SD card volume and hit ALT+F1.

This was a triumph. One may even call it a huge success. The recovery console was available. A coworker of mine came over to see what I was up to and I explained what I was doing. He was as intrigued as I was, as storage is his jam. I logged in as root (which, in the installer, has no password set) and punched in ls. The listing showed us that there was a /vmfs/ directory, just like on a live version of ESXi. In /vmfs/devices/disks/ we found our device.

The command you need to run to convert the volume is as follows: partedUtil mklabel "/dev/disks/deviceID" gpt. An example would look like this: partedUtil mklabel "/dev/disks/mpx.vmhba11:C0:T0:L0" gpt. Wait, why isn’t /vmfs/devices/ in the path name? /vmfs/devices/ is actually a symlink to /dev/, and the command, from our testing, doesn’t even work when you use the symlinked path.

From there, I hit ALT+F2 and re-scanned the storage from the installer. ESXi 6.0 U3 installed without issue. I hope this helps some admins in the future, as the resources out there on this problem aren’t great! A TL;DR with the steps is below. Enjoy!

TL;DR: just give me the fix!

  1. Boot into ESXi’s installer
  2. Get to the disk selection screen and take note of the disk identifier
  3. Hit ALT+F1
  4. Navigate to /dev/disks/
  5. Use ls to find your disk identifier
  6. Use the following command to convert the disk to GPT: partedUtil mklabel "/dev/disks/deviceID" gpt
  7. Install ESXi
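The steps above can be sketched as a small snippet for the installer’s recovery console. The device identifier is just the example from my own hardware; substitute the one from your disk-selection screen. As a safety convention I added for this sketch, a DRY_RUN variable (defaulting to on) prints the command instead of rewriting the label, since mklabel destroys the existing partition table.

```shell
# Convert a disk's partition table to GPT from the ESXi installer shell.
# DEVICE_ID is the identifier noted on the disk-selection screen; the
# value below is only the example from my own hardware.
DEVICE_ID="mpx.vmhba11:C0:T0:L0"

# partedUtil wants the real /dev/disks/ path, not the /vmfs/devices/
# symlink; in our testing the symlinked path did not work.
CMD="partedUtil mklabel /dev/disks/${DEVICE_ID} gpt"

if [ "${DRY_RUN:-1}" = "1" ]; then
    # Safety default: show what would run instead of relabeling the disk.
    echo "Would run: ${CMD}"
else
    ${CMD}
fi
```

Run it once as-is to verify the path, then rerun with DRY_RUN=0 when you are sure you have the right device.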

Disclaimer: If you break something using this, I take zero responsibility. Any advice you take from me is on you.

VMware: Why Customize?

Today I deployed a greenfield (enterprise speak for “brand new, without needing to think about past deployments”) vCenter Server 6.0 deployment. Those words don’t mean too much to most people; for the VMware admins out there, however, they are like birds singing on a summer morning, with blue skies and a slightly warm breeze. Today was a good day. [Insert Ice Cube Meme here.]

Deployment day was supposed to be yesterday. My team and I kicked off the deployment on Wednesday. As we went through the external Platform Services Controller (PSC) deployment, we made two conscious decisions: our SSO domain would be something other than vsphere.local, and we would use the NTP sources already set up in the environment. These things don’t seem like that big of a deal. NTP has been around for ages; it’s an essential service of any enterprise environment. In many cases, the SSO domain is a vanity domain that only exists in the vCenter environment, and VMware’s guidance is simply to ensure that it is not your LDAP/Active Directory domain name.

The first thing we ran into was the PSC deployment failing its firstboot scripts, complaining that DNS was not set up correctly. Spoiler alert: DNS for the PSC was set up properly. Upon further investigation, a team member stumbled across a blog post pointing out that you should only use one NTP source. “Great, good, the technology just isn’t there to deploy with two NTP sources,” I said to the team as we all had a good laugh. We redeployed the PSC and all was well, or so we thought.

We ran the vCenter Server Appliance (VCSA) deployment wizard and blew through it with ease. We set it to large, gave it a name, punched in only one NTP source (as we assumed the VCSA also couldn’t handle more than one) and started the deploy. Like the PSC, it failed to run its firstboot scripts. Again, the VCSA was complaining about DNS. Again, DNS was not the issue. We tried three slightly different deployments, thinking there might be an undocumented gotcha in the deployment process or an issue with deploying the VCSA to a vCenter running 5.5 U3. Stumped and ready to leave for the day, my Canadian counterpart opened a ticket with VMware (which is still unresolved despite us getting it working this morning) and we called it a day.

Later that night, as I was checking my email to ensure that I didn’t need to Make Infrastructure Great Again before heading to bed (as I’m on my on-call rotation), I saw a message from my Canadian counterpart. He had found a fantastic blog post on why you shouldn’t change your SSO domain from the default, ‘vsphere.local.’ Posted almost one year ago, the article states that “VMware Engineering are aware and will resolve this in a future release of vSphere 6.0.” I question this, as it still isn’t fixed. We also decided to change the NTP option from specifying NTP servers to the “use ESXi host’s time” option. Our ESXi hosts are all set to use the same NTP sources, so it really wasn’t that big of a change in deployment methodology.

We took all of this very valuable information and deployed a successful greenfield vCenter 6.0 environment this morning! The PSC and VCSA deployments all did what they were supposed to do in a very short amount of time. It’s nearly complete, as I think I only need to configure a handful of things tomorrow as I await a couple other components that are still in the provisioning cycle. Good times all around!

The moral of the story is this: don’t overcomplicate your VMware deployments unless absolutely necessary!