The Hidden Cost of Verizon’s ‘Free’ Rewards Program: Your Data

Free rewards programs can actually cost you in terms of privacy.

With the announcement of Verizon Up, a new wireless rewards program that offers customers incentives and first-dibs opportunities on things like VIP tickets and other exclusive deals, we thought it was time to review how rewards marketing programs work.

First, the good news: Verizon Up is free!

Like their intrusive cousin, the loyalty program, reward-based marketing schemes usually require no additional fees. In essence, Verizon Up is a camouflaged version of what author Seth Godin calls “permission marketing.”

Now the bad news: Nothing is free. Verizon is making you pay with your personal information instead of money. But make no mistake: They’re going to profit more than you will from the arrangement. (Note: Verizon did not return our request for comment.)

Never were the words of the German philosopher Georg Wilhelm Friedrich Hegel more prescient: “To be free is nothing, to become free is everything.” Translation: In the world of big data, there’s no such thing as “free.” If a company offers you something for your data, you’re the product. They are monetizing your information.

The eligibility requirements on Verizon’s website make this clear. Opting in enables Verizon to personalize marketing sent your way by them, and by other companies, using your data.

What Data?

These days “your data” is pretty much anything marketing companies can get their hands on. If you belong to a gym, it may be selling that fact to a third party, and with it possibly more data about how often you go and anything you bought there to enhance your workout.

If you use a mobile phone, your data could include everywhere you have gone and most likely anything discussed via text. Whether or not you use the popular Waze app, there’s data on how fast you drive, which in the wrong (or right) hands could affect the rates you pay for car insurance — never mind the possibility that law enforcement could one day claim jurisdiction in the realm of cyberspace-clocked speeding tickets.

When it comes to your data, the goal is to create a granular portrait of you — your interests, likes, dislikes, passionate yearnings — all of it prepared and arranged for resale to companies and organizations hoping to match products and services with various aspects of your personality.

How Specific Does This Get?

The kind of information the big data companies have — what constitutes “your data” — depends on your privacy hygiene. The less you share, the fewer times you opt in, the more privacy you will enjoy.

Companies like to incentivize the sharing of personal data. Sometimes it’s by creating something fun, like a toy or gaming experience. The lure of social media is hard to resist, but every like and comment becomes part of your sellable data.

If you’ve ever signed up for a loyalty program, everything you’ve purchased will be included under the heading of “your data,” providing a very specific window into your life, not just simple stuff like your gender and age — they already know that — but your health and habits based on what you buy. And of course, your credit card companies know more about you than almost anyone else — including, probably, you. (You can get an idea of what they see about you with a free credit report snapshot.)

Nothing to See Here

Remember the story about the emperor’s new clothes? Basically, he didn’t have any. That’s the deal here. And while Verizon is not alone in perpetrating a consumer data grab, their recent announcement makes them today’s blue-plate special.

As is the way with this kind of offer, Verizon Up will provide users with some perks, but for what? And is it a fair swap?

To be clear, whenever the right to use your data, without limitation, is the ask, saying “yes” is never going to be the answer I recommend. It doesn’t matter what you’re getting for it. In this case, Verizon is asking to monetize the data on products and services that you use (and pay for) as well as far more personal stuff, “including location, web browsing and app usage.”

Does this mean it doesn’t matter if your iPhone’s Safari browser is set to “Private”? Internet service providers can see any traffic that doesn’t move via virtual private network. So is everywhere you go online still visible, and sellable to a third party, no matter how “private” the browsing?

It doesn’t matter. Get in the habit of saying no.

When it comes to privacy, you need to be your own advocate. As Toni Morrison said, “Nothing and nobody is obliged to save you but you.”

Image: serdar_yorulmaz


You May Not Have to Remember Your Passwords Anymore If This Google Plan Takes Off


You may soon be able to log into your Android devices without entering a password, thanks to the new “trust scores” Google announced at its annual I/O conference last week.

The trust scores, also referred to as the Trust API (application program interface) or Project Abacus, have been in development since last year, and The Verge reports the security updates are expected to roll out to a test group of “several very large” financial institutions next month. If all goes according to plan, Google expects Android users to have access to the scores by the end of the year, with other operating systems potentially following.

How It Works

The API runs in the background and uses user-specific factors to compute a trust score each time a user logs on, which could help prevent unauthorized users from accessing your personal information. According to TechCrunch, the API will factor in personal indicators, like face shape and voice recognition, as well as behavioral data, like how you move and type.

Eventually, The Verge reports, trust scores could play a role in logging into any apps or programs on mobile devices. Requirements would vary based on what you’re trying to do, with banking apps requiring a higher trust score than social media apps.
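The mechanism described in these reports boils down to a weighted score checked against a per-app threshold. Here is a minimal sketch of that idea in Python; the signal names, weights and thresholds are invented for illustration, since Google has not published the API’s internals:

```python
# Hypothetical sketch of a trust-score check. Signal names, weights
# and thresholds are made up for illustration -- not Google's actual API.

# Behavioral and biometric signals, each normalized to 0.0-1.0.
signals = {
    "face_match": 0.9,
    "voice_match": 0.8,
    "typing_pattern": 0.7,
    "gait_pattern": 0.6,
}

# Relative importance assigned to each signal (sums to 1.0).
weights = {
    "face_match": 0.4,
    "voice_match": 0.3,
    "typing_pattern": 0.2,
    "gait_pattern": 0.1,
}

def trust_score(signals, weights):
    """Weighted average of the available signals."""
    return sum(signals[k] * weights[k] for k in signals)

# Sensitive apps demand a higher score than casual ones.
THRESHOLDS = {"banking_app": 0.85, "social_app": 0.5}

score = trust_score(signals, weights)
for app, needed in THRESHOLDS.items():
    verdict = "unlocked" if score >= needed else "password required"
    print(f"{app}: score={score:.2f}, needed={needed} -> {verdict}")
```

With these sample numbers, the score comes out around 0.80: enough for the social app, but the banking app would still fall back to a password, which is the tiered behavior the reports describe.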

Replacing Passwords

Until this security measure is implemented on devices, all users will continue logging in with standard passwords. If you think you have a weak password on any of your accounts, or use the same one for multiple logins, you may want to consider changing them. In fact, it’s a good idea to change your passwords often for better internet safety.

If you believe one of your accounts or passwords has been compromised, especially if it’s something that can be tied back to your finances, checking your credit report can help you see if anything damaging has occurred. (You can get your free annual credit report and view two of your credit scores for free, updated monthly.) If you see any signs that your identity has been stolen, like new accounts you didn’t open or addresses that aren’t yours, you should dispute the information with the credit bureaus and report the fraud to the proper authorities.



Image: Petar Chernaev


8 Ways to Protect Your Privacy Online

The only sure thing in the world of information security is that there is no such thing as a failsafe solution. It’s crucial not only to keep abreast of the latest threats out there, but to also act as though the mission is to find your way to safety from the middle of a lawless demilitarized zone that’s lousy with enemy snipers.

Sound extreme? Remember that Cold War classic “A Few Good Men,” when Colonel Jessup (played by Jack Nicholson) tells Tom Cruise’s character, “You have the luxury of not knowing what I know”? It applies here. I’m not saying I have all the answers, mind you. If anything, I think the opposite. But I do know that I don’t know what’s going to happen next in the land of Data Insecurity, and that gives me a better chance of staying safe.

Still Not Worried?

Symantec just reported that more than 500 million digital identities were stolen in 2015, while fake tech support scams increased by 200% and ransomware attacks grew exponentially as well. Hackers are getting better at their game—ensuring better results with better techniques and technology.

Want more? Dell SecureWorks’ annual report was recently released. The takeaway? Hackers are getting organized and entrepreneurial. Want access to a U.S.-based email account? It doesn’t matter whether it’s Yahoo, Google or Hotmail; they’re all available for a pretty decent price: $129. (Note the market-appeal pricing!) According to the report, it costs a little more to get into a corporate account—understandably—and a little less to get into a Russian email account.

Also on the menu: access to Facebook and Twitter accounts — and for the same price as an email hack! There’s a panoply of services on offer out there — ranging from malware that aids snooping to doxxing — that opens up the possibility for all kinds of identity-related crimes.

So What Can You Do?

There is plenty you can do. The first thing is to change your life. I mean it. You have to completely change the way you approach your life as it intersects with things digital.

I’ve mapped out a way to do this in my book Swiped: How to Protect Yourself in a World Full of Scammers, Phishers, and Identity Thieves, which provides different discussions and strategies for specific situations ranging from identity-related tax fraud and medical identity theft to phishing and child identity theft.

But if you read nothing else on the topic, there are three simple things you should bear in mind, which I call the Three Ms.

  1. Minimize your exposure. Don’t authenticate yourself to anyone unless you are in control of the interaction. Don’t over-share on social media. Be a good steward of your passwords, safeguard any documents that can be used to hijack your identity, and consider freezing your credit.
  2. Monitor your accounts. Check your credit report religiously, keep track of your credit score, and review major accounts daily if possible. (You can view two of your credit scores for free every month.) If you prefer a more laid-back approach, sign up for free transaction alerts from financial services institutions and credit card companies, or purchase a sophisticated credit and identity monitoring program.
  3. Manage the damage. Make sure you get on top of any incursion into your identity quickly, and/or enroll in a program where professionals help you navigate and resolve identity compromises — oftentimes available for free, or at minimal cost, through insurance companies, financial services institutions and HR departments.

Beyond the Three Ms, here are a few common-sense changes you can make to your daily digital life that will make you a moving target for identity thieves.

  1. Beware phishing. Never click on a link sent to you via text or email from a stranger. If you get a link from someone you know, first check that the person actually sent it; they may not even know they got hacked and have become a font of malware. Assume the worst!
  2. Be smart about passwords. Never use the same password for different accounts, and do not keep all your passwords saved behind a single password (like on your computer). Make your passwords long and complex, and make sure they contain punctuation marks, numbers and other random symbols.
  3. Use multiple-factor authentication. You may have received a notice recently from your email provider asking for a phone number that can be used to contact you in case your account is hacked — that’s multiple-factor authentication. If you are given this option, use it. Security is sacrificed on the altar of convenience far too often, and a little extra effort can make a huge difference in vulnerability.
  4. Consider encryption. It’s not as hard as you may think to start using a Pretty Good Privacy-based encrypted mail system, and the upshot is that you will be much harder to hack.
  5. Tighten your privacy settings on social media accounts. Never post anything that will make it easier for a fraudster to guess things about you, because that could compromise any account that’s protected by security questions.
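The password advice above is easy to automate. Here is a short sketch using Python’s standard `secrets` module to generate a long random password mixing letters, numbers and punctuation; the 20-character default is just a reasonable choice for illustration, not an official standard:

```python
import secrets
import string

def make_password(length=20):
    """Generate a random password containing upper- and lowercase
    letters, digits and punctuation, using a cryptographic RNG."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that contain all four character classes.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(make_password())  # different every run
```

Pair this with a password manager so each account gets its own random password, and the “never reuse” rule stops costing you any memory at all.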

We all occupy a treacherous digital privacy landscape: a no man’s land where criminals, not just figuratively but in actuality, hold sway over the good orderly direction of daily life. Whether you become a statistic may be out of your hands, but there are ways to improve your odds of staying safe, and they are very much worth your time.


Image: moodboard


Privacy vs. Security: Where Should We Draw the Encrypted Line in the Sand?


The recent Game of Phones between the FBI and Apple underscored an area in our jurisprudence that is screaming for more clarity. If there is a tipping point when the protection of consumer privacy should yield to the needs of a criminal investigation, where is it?

Few will dispute the obvious cases in which a citizen’s constitutional rights are suspended by a judge who knows (or at least has access to) the legal precedents informing the decision. A court-ordered search warrant trumps those rights for a defined period of time, and it can happen fairly quickly when a member of the judiciary believes there is good and sufficient reason for it. Sometimes, in instances involving probable cause and easily discernible physical evidence, the law permits on-the-spot access.

The latter scenario came into play with the phone belonging to San Bernardino shooter Syed Rizwan Farook, an iPhone 5C running iOS 9. Law enforcement officials had every reason to believe there could be time-sensitive information on the device—information that very well might save lives. They attempted to access that information through Farook’s iCloud account. But, in the process, they made a mistake. They reset the password remotely. When they did that, they cut off a way into the device, an auto-backup, which may have been possible had the phone been transported and connected to a Wi-Fi network that it recognized—in this case, the shooter’s home wireless network. There was only one way to find out if that would have worked, and it disintegrated when a law enforcement official reset that password.

Locked out, the government requested Apple’s help. Apple CEO Tim Cook refused to provide it on the grounds that it would compromise consumer privacy and set a dangerous precedent. The FBI secured a court order demanding Apple unlock Farook’s iPhone, and still the company refused to comply, which raised the question: Should the government be allowed special access to information that is protected by encryption or any other method designed to protect user privacy?

In October 2015, the Obama administration had decided it was not a good idea to legislatively force decryption at the behest of law enforcement. “The administration has decided not to seek a legislative remedy now, but it makes sense to continue the conversations with industry,” FBI Director James B. Comey told the Homeland Security and Governmental Affairs Committee. Not long after that announcement, the San Bernardino shooting caused the Justice Department to do a 180, getting a court to order Apple to decrypt. The case made daily headlines. Numerous briefs were filed by organizations of every stripe on both sides of the issue. Then the action became moot because—reportedly with the help of a third-party technology firm—the FBI wormed its way into the phone.

But on the other side of the FBI’s successful workaround with Farook’s iPhone 5C lies a legal shadowland. This pivotal question about consumer privacy still has not been addressed, because the FBI successfully breached the phone without Apple’s help.

What Now?

When it comes to encrypted devices, can there be special access afforded to the government, in only extreme cases, without weakening the privacy protections afforded by encryption to consumers?

Digital enterprise probably won (by a smidge) in the battle over access to Farook’s iPhone because Apple was not required to provide what could have amounted to a permanent backdoor to law enforcement. The FBI said this week that it would help local law enforcement agencies decrypt information on devices without saying that it would specifically make available to them the means used to crack the San Bernardino shooter’s phone. You can be sure that when Apple closes the door on the FBI’s exploit, there will be an announcement and the fight over law enforcement access to encrypted information will resume in earnest.

It is not breaking news in the information security community that the FBI has had a Tor exploit for a while now. Tor is an anonymizing network that allows people to visit websites without being traced. There are as many legitimate reasons to use it as there are illegal ones—among the latter category being the trafficking of child pornography, which was the reason the FBI developed the tracker malware used to locate and arrest people who transmit illegal images. What is not known: how many other presumed safe platforms have glass walls for law-enforcement eyes only?

I think it’s also worth wondering aloud if the FBI always knew there was a hack to get in Farook’s iPhone. Were that the case, the FBI motion in this case would have been less about finding a way into the phone and more about two-stepping around the Obama Administration’s previously stated position to continue conversations and not go to war with Silicon Valley over decryption legislation.

In February, Tim Cook explained to ABC World News Tonight that the FBI had essentially asked him to create “the software equivalent of cancer.” The tension between selling privacy and having it compromised by legal means is not an easy one to navigate, but in this war of words and ideology, we need to do a whole lot better than we have so far.

This story is an Op/Ed contribution and does not necessarily represent the views of the company or its partners.


Image: stevanvicigor


Why Apple Is Right to Protect Your Privacy


Steve Jobs understood what people want. His insistence on making hard things easier — for instance, using a personal computer — was an essential part of the Apple success story. Apple CEO Tim Cook has been doing the same thing — but now the “hard thing” is privacy and encryption.

Apple has consistently earned top marks for its privacy and data security policies. That said, since the San Bernardino shooting, which left 14 dead and 22 seriously injured, the company’s privacy-first approach has been experiencing a sort of baptism by fire.

Much debate has arisen around the encryption on San Bernardino shooter Syed Rizwan Farook’s iPhone 5C. Shortly after the shooting, the iCloud password associated with Farook’s phone was reset by a law enforcement officer attempting to gather information.

The snafu purportedly eliminated the opportunity for any information on the phone to auto backup onto the cloud when the device was used on a recognized Wi-Fi network. This information could have then been retrieved.

According to ABC News, the last time Farook’s phone had been backed up was Oct. 19, 2015 — a month and a half before the attack. According to court documents, this fact suggested that “Farook may have disabled the automatic iCloud backup function to hide evidence.”

Apple provided the FBI with the iCloud backups prior to Oct. 19. But the government wanted access to the phone, at least partially to discern if Farook had any terrorist ties. And, to get to it, the FBI asked Apple to reverse a feature that erases an iPhone’s data after 10 failed attempts to unlock it. If Apple did so, the government could use software to guess Farook’s passcode.
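The arithmetic behind that request is worth spelling out: without the 10-attempt erase feature, a numeric passcode falls to simple exhaustive guessing. A back-of-the-envelope sketch, assuming roughly 80 milliseconds per guess (a figure Apple has cited for its passcode key derivation; treat it as an approximation):

```python
# Rough brute-force estimate for numeric passcodes, assuming ~80 ms
# per guess (an approximation of Apple's published key-derivation cost).
SECONDS_PER_GUESS = 0.08

def worst_case(digits):
    """Seconds needed to try every passcode of the given length."""
    return (10 ** digits) * SECONDS_PER_GUESS

for digits in (4, 6):
    secs = worst_case(digits)
    print(f"{digits}-digit passcode: {10 ** digits:,} combinations, "
          f"~{secs / 3600:.1f} hours worst case")
```

By this estimate, even a six-digit passcode survives less than a day of automated guessing once the auto-erase feature is out of the way, which is exactly why that feature, and the fight over disabling it, mattered so much.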

The FBI argued its reset of Farook’s password should not prevent Apple from honoring this request.

“It is unknown whether an additional iCloud backup of the phone after that date — if one had been technically possible — would have yielded any data,” the agency said in a statement. “Direct data extraction from an iOS device often provides more data than an iCloud backup contains.”

And, last week, a federal court ordered Apple to develop a custom iOS so the FBI could gain access to the phone. Apple is refusing to comply with the court order.

“Building a version of iOS that bypasses security in this way would undeniably create a backdoor,” CEO Tim Cook said in an open letter to Apple customers. “And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.”

What’s at Stake

Consumer awareness around privacy and encryption has gained traction, following Edward Snowden’s revelations regarding the scope of government surveillance practices at the National Security Agency. Still, the public’s response to Apple’s current plight remains divided.

While some pundits, commentators and high-profile figures have argued the FBI should be able to access phone records in cases where national security may be at risk, others have come to Cook’s defense, arguing he is right to protect Apple customers. I, too, believe he is right to stand his ground here. In an environment where many companies would allow law enforcement to access private information, Apple is standing up for consumers and suggesting they can no longer tolerate routine incursions into their private lives — whether the so-called trespassers hail from the halls of government or invade in the interest of commerce.

To create an iOS or any other kind of backdoor into a personal device creates moral hazard. The potato chip theory applies to law enforcement and the erosion of the constitutional rights guaranteed to all U.S. citizens. One potato chip leads to another, and it’s hard to stop eating them. In the same way, one legal mulligan leads to another.

There has to be a point in the evolution of consumer privacy (or its disintegration) where we can no longer lower our standards as fast as our situation is deteriorating. When it comes to our privacy we really have to stand firm — and Tim Cook is doing that.

Ann Cavoukian, executive director of the Privacy and Big Data Institute at Ryerson University, long ago coined the phrase “Privacy by Design” to describe what’s starting to happen in the U.S. marketplace. Her theory was that consumers will start shopping for the best deals on their privacy: the less personal information required by a potential service or product, the more appealing it will be to the consumer.

So in that regard, the Justice Department is right to suggest, as it did last week, that Apple is trying to protect its “public brand marketing strategy.” But in this instance, the strategy is consumer advocacy, nothing more or less. Privacy is not a brand. It is a right. And, contrary to popular belief, protecting it is no longer particularly hard, either. Apple’s strategy is to provide a usable product that is safe and protects users against a potential war on their privacy.

This story is an Op/Ed contribution and does not necessarily represent the views of the company or its partners.


Image: Wavebreak Media
