When I left Cisco, I knew that I would need to build a lab of my own. I used my lab extensively in TAC and knew I would benefit from having my own place to test new features, work through customer questions, and help colleagues. In addition, owning the hardware would help me better understand the server side of things (hardware and VMware) as well.

In December 2017 I finished my initial build and set up a full UCCE deployment with all the bells and whistles. Thanks to virtualization, I did not need to use the 2951 sitting in my office and was able to rely on Cisco vCUBE and vCUSP for call routing. On the UCCE side, I brought up most of the peripherals, including CCMP and ECE, because I wanted to get a better feel for these products before I needed to deploy them for work.

As I started enabling and using more features for UCCE and its add-ons, I began hitting performance issues. The two that really made me want to spend a few more dollars were CUIC and CCMP. CUIC eats RAM; I could have put 32GB of RAM on the CUIC VM and it would find a way to need more. Doing some more digging, I found that VMware was reserving ~2GB, so I was actually oversubscribed quite a bit. I had a feeling about this all along and thought sluggishness was something I could live with for a while, but it finally hit a breaking point.

I got fed up, so I set up an alert on PCPartPicker and finally pulled the trigger. Since I was taking the server apart, I decided to add another SSD for additional VMs coming soon. The upgrade brought me from 500GB of SSD and 64GB of RAM to 1.5TB of SSD and 128GB of RAM.

Pre-upgrade utilization:

 Note: CUIC is powered down in this image.

 

Final build:
    CPU: Intel - Xeon E5-2620 V4 2.1GHz 8-Core Processor
    CPU Cooler: Noctua - NH-U9DXi4 37.8 CFM CPU Cooler
    Motherboard: ASRock - X99 Extreme4 ATX LGA2011-3 Motherboard 
    Memory: Corsair - Vengeance LPX 32GB (2 x 16GB) DDR4-2400 Memory
    Memory: Corsair - Vengeance LPX 32GB (2 x 16GB) DDR4-2400 Memory
    Memory: Corsair - Vengeance LPX 32GB (2 x 16GB) DDR4-2400 Memory
    Memory: Corsair - Vengeance LPX 32GB (2 x 16GB) DDR4-2400 Memory
    Storage: Samsung - 850 EVO-Series 500GB 2.5" Solid State Drive
    Storage: Samsung - 860 Evo 1TB 2.5" Solid State Drive
    Storage: Western Digital - Red 3TB 3.5" 5400RPM Internal Hard Drive
    Video Card: Gigabyte - GeForce 210 1GB Video Card
    Case: Cooler Master - N400 ATX Mid Tower Case
    Power Supply: EVGA - SuperNOVA G2 550W 80+ Gold Certified Fully-Modular ATX Power Supply
    Keyboard: Logitech - K400 Plus Wireless Mini Keyboard w/Touchpad

Note: The case is a bit bent because my standing desk caught on it, but there was no functional impact.


Final utilization and VM list of my lab:



This was my first PC build. I have upgraded parts in the past but never really built anything from scratch. Shout out to Chris for his help during the build.

Before I left Cisco, I was doing a rotation with an Advanced Services project to get a better understanding of the delivery side of the Contact Center ecosystem. On one of the customer calls, they mentioned that they used Okta as their Identity Provider and needed SSO for their UCCE environment. Immediately, the TAC engineer in me thought "this is unsupported," based on the compatibility matrix and my past interactions, but it got me thinking about what would actually be required, especially since the 11.6 release made a big claim that SSO was a protocol-based implementation.

Being someone who lives by the lab, I got to work.

I immediately spun up an Okta developer account and some IdS servers, rooted the boxes, tested configurations, and looked through plenty of logs.

Tl;dr - UCCE integrates with Okta. CCEAdmin, Finesse, CUIC, and even ECE work; CCMP does not. UCCX uses the same backend, so it also works.

Contact Center SSO with Okta Identity on cisco.com

This was the last article I wrote and published on Cisco.com, but it was one of the most fun and interesting problems I got to work on on the side. One of the biggest challenges was that my Cisco lab had some interesting proxy settings that required fine-tuning; another was showing some BU folks who had attempted this and failed that it would actually work.

 

Note: This was a proof-of-concept article. The setup was tested for functionality and failover with a few agents but was not thoroughly vetted in pre-production or production environments.

Last week Wink released an interesting article on their blog regarding home automation and smart homes. It reads more like a marketing piece than a statistics report, but it still has some interesting numbers.

  • Almost half (48%) purchased a connected product to save energy, followed by 44% to keep their home safe.

  • 57% of people have forgotten to do a routine household task in the last 6 months:
    - 51% turning off a light
    - 29% locking the main door
    - 24% closing the garage door

  • People have a desire for monitoring:
    - 63% to make sure the home isn't broken into
    - 38% to check in on a pet
  • 36% of renters would pay more to rent a home with smart products as amenities.
  • Renters would pay on average 5% more in rent for a smart home.
  • 63% of millennials said they've forgotten to lock the front door in the last 6 months.
  • 34% of Americans believe it would cost $5,000 to turn their home into a smart home.
  • 9% of Americans believe it would cost $20,000 to turn their home into a smart home.
  • Wink says most users start out with 4 products at an average cost of around $200.

This is fascinating. The price points some people think it takes to get into home automation are crazy.

I started off with a Hue starter kit, and my home came with an Ecobee3 system. Once I was gifted an Echo, I felt like I had a proper smart home because I could use my voice instead of apps; the home finally felt "smart" at that point. My system has obviously grown since then, but it's still nowhere near the $5,000 mark. That said, I also don't think $200 will get you a decent experience.

My recommendations for folks getting into smart homes:

  • Basic
    • Google Home or Amazon Echo (I prefer the Home)
    • Hue light starter kit (3 lights + hub)
  • Once you get the hang of it
    • More Google Homes or Amazon Dots/Echos
    • More bulbs
    • SmartThings Hub
    • Smart outlets (TP-Link makes a great one for $25-30)
    • GE smart light switches
    • GE fan controllers
    • Smart door locks
    • Video doorbells
    • Garage door openers
    • Much, much more
Note that once you get into the home automation grind, you tend to start buying more and more interesting gadgets. It's an addiction that hampers the pocketbook, but one the wife can benefit from at the end of the day too.

Link to blog post - Blog
Link to full data - PDF with more stats

With home automation and the Internet of Things (IoT), there is always a concern about security. As a Ring Pro user, I took notice when a thread on reddit's /r/homeautomation described a user finding that the Ring Pro was sending packets to China.

/u/sp0di posted:

So recently installed a ring doorbell and found some interesting network traffic.
At random intervals, it seems to be sending a UDP/1 packet to 106.13.0.0 (China). All other traffic goes to AWS.
Anyone have any thoughts to iot devices calling back to China

This caught my attention, so I sent a tweet out to Ring with a link to the thread. When I looked back at it, I was somewhat alarmed at what their VP of Security had to say:

/u/matt-ring:

Hi I'm the VP of Security at Ring and I thought it might be helpful to give you all some background on what you are seeing.

Occasionally at the end of live call or motion, we will lose connectivity. Rather than abandoning the entire call, we send the last few audio packets that are corrupted anyway to a non-routable address on a protocol no one uses. The right way to do that is to use a virtual interface or the loopback to discard the packets. The choice to send it to somewhere across the world and let the ISP deal with blocking is a poor design choice that the teams on working on addressing ASAP.

From a risk/disclosure perspective, it's relatively benign but like the everyone else, when my team first saw it in the wild we had similar concerns.

i will circle back when we have updated firmware.

-Matt

I'm not a security expert, but I smelled some nonsense being spewed. Luckily, /u/33653337357_8 also felt the same and went through the post and laid out some concerns.

This is ridiculous. You are trolling, right? Let's pretend you were even going to do this ridiculous technical implementation and you didn't have an explicit loopback. Why can't you just drop? Why would you pick some random address (not even RFC1918)? Why not just send it to the IP address of the Ring device itself? Or how about the default gateway? Why not 127.0.0.1 and maybe it makes it out to be blocked by an egress filter but at least it doesn't get to a routable public network.

The state of IoT security is already poor - and this is is what Ring does to deal with "end of call" packets? Come on.

**Later edit**:

Sorry Matt, but I am going to have to pull your response apart a bit more here.

This is what the traffic looks like (from /u/sp0di):
    10:06:12.263764 6c:0b:84:f9:df:fc > 90:6c:ac:84:51:9e, ethertype IPv4 (0x0800), length 214: (tos 0x0, ttl 64, id 6080, offset 0, flags [DF], proto UDP (17), length 200)
    10.23.1.125.51506 > 106.13.0.0.1: [udp sum ok] UDP, length 172

    13:10:22.224408 6c:0b:84:f9:df:fc > 90:6c:ac:84:51:9e, ethertype IPv4 (0x0800), length 214: (tos 0x0, ttl 64, id 5547, offset 0, flags [DF], proto UDP (17), length 200)
    10.23.1.125.51506 > 106.13.0.0.1: [udp sum ok] UDP, length 172

You state....
Occasionally at the end of live call or motion, we will lose connectivity. Rather than abandoning the entire call, we send the last few audio packets that are corrupted anyway to a non-routable address on a protocol no one uses.

This is not a non-routable address (106.13.0.0). This is 106.12.0.0/15 owned by Baidu.
    % Information related to '106.12.0.0 - 106.13.255.255'
    inetnum:        106.12.0.0 - 106.13.255.255
    netname:        Baidu
    descr:          Beijing Baidu Netcom Science and Technology Co., Ltd.
    descr:          Baidu Plaza, No.10, Shangdi 10th street,
    descr:          Haidian District Beijing, 100080
 
UDP is a protocol no one uses? Do you mean port 1 (tcpmux)? What exactly happened to your end point (the other host) and why aren't packets just continuing to be sent there, even if they are disregarded on that side?

"we send the last few audio packets that are corrupted anyway to a non-routable address on a protocol no one uses"
and
"The choice to send it to somewhere across the world and let the ISP deal with blocking is a poor design choice"
are mutually exclusive statements.

How does a non-routable address make "somewhere across the world" so an "ISP [can] deal with blocking"?

**Edit #2**

It has now been confirmed by two users that Ring is using a fixed source port, destination, and destination port. This means that Ring is effectively poking a UDP NAT hole that would allow return traffic to traverse the NAT gateway and reach the Ring.

Protocol: UDP
Static source port: 51506
Static destination: 106.13.0.0
Static destination port: 1

In a very theoretical scenario, let's say this transmits periodically (which it does), then this would keep open a NAT translation on your edge router and many common NAT devices will use the same OUTSIDE source port if it isn't already in use for translation.

Traffic sourced from 106.13.0.0:1 and destined for yourip:51506 would reach the Ring device. Let's now pretend the Ring has a backdoored firmware that is simply waiting for a UDP packet to show up and provide an IP for the next command and control channel. In theory, it would only require 2^32 packets to hit every host on the Internet. You can now simply spray every host with one packet and wait to see who shows up.

I'm going to assume this isn't a backdoored firmware, but it very easily could be and the attack vector looks plausible.

**Matt, I think you need to provide a little more information. This isn't adding up.**
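To make the quoted concern concrete, here is a minimal sketch (using Scapy, with hypothetical addresses) of the kind of return packet the open translation would let through. Everything here, including the example public IP 203.0.113.10, is made up for illustration and is not something observed from Ring.

    # Hypothetical illustration only: a spoofed "return" packet matching the
    # static translation described above (106.13.0.0:1 -> victim:51506).
    from scapy.all import IP, UDP, Raw, send

    pkt = (
        IP(src="106.13.0.0", dst="203.0.113.10")   # 203.0.113.10 = example home IP
        / UDP(sport=1, dport=51506)                # mirrors the fixed ports above
        / Raw(load=b"example payload")
    )
    send(pkt, verbose=False)

As the post notes, an attacker would not even need to know which address sits behind the NAT; spraying one such packet per IPv4 address and waiting to see who shows up would be enough.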

 

What's concerning is that a VP of Security said that throwing packets onto the internet and hoping they get dropped is better than locally disposing of them on a null interface or with a static route on the device itself, and that leaving a port open on your egress gateway is not a security concern.

The above is especially concerning with the Vault 7 leaks coming just a few days ago. I will try not to attribute malice to what could be an oversight, but I would hope that someone at Ring gives a hoot about securing their devices and routing data properly themselves. Hopefully they fix this and take a look at any other security concerns that get brought up.

In the meantime, I am trashing all packets destined for 106.13.0.0:1 via an ACL and plan on taking some pcaps to verify that nothing is making it to the outside.
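For anyone wanting to do the same, the egress filter is a one-liner on most platforms. A rough sketch of what I mean, assuming an IOS edge router (the ACL name and interface are just examples):

    ! Example only: drop the Ring's UDP/1 traffic before it leaves the network
    ip access-list extended DROP-RING-UDP1
     deny   udp any host 106.13.0.0 eq 1
     permit ip any any
    !
    interface GigabitEthernet0/0
     ip access-group DROP-RING-UDP1 out

Verification is then just a capture on the WAN side confirming nothing matches, along the lines of:

    tcpdump -ni <wan-interface> 'dst host 106.13.0.0 and udp dst port 1'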

Full reddit thread

EDIT: To follow up on this topic, I ran around 15 different calls through my Ring Pro using motion activation and live-stream calls but did not see any packets destined for 106.13.0.0. There may be a specific trigger for this behavior that is not easily reproducible (a few folks on reddit were able to reproduce it), so I am reaching out to Ring to see whether this has been fixed or whether there is a plan to address it.

EDIT 2: I reached out to Ring for comment and received the following response.

Hello Pavan,

Thank you for contacting Ring Community Support. My name is Genero.

Our development team is fully aware and is hard at work in finding a solution to the packet routing issue you mentioned above. We expect the firmware to be released by the end of this month. We do not have an exact release date but will have more information on this subject matter as we draw towards the end of this month.

We appreciate your patients and for being a valued neighbor in the Ring Community. If you have any other questions or concerns please let us know.

Best,

Ring Community Support

At least that's a start.