I get IVIG 3x / week for a primary immune deficiency I've had since birth. For my dosage, it's billed weekly at a rate of ~$32K, of which my insurance company actually pays around $3K.
I hit my max out of pocket the first month of each year and the rest of the year is “free.”
I know that all the IVIG pharmaceutical companies offer a “I can’t afford this” option if you’re going out of pocket.
Good luck, it’s stressful to get but it’s a miracle blood product for those who need it.
Not an expert in C, but there's definitely a refreshing feel of purity about something old and "sharp" that is deeply entrenched in the UNIX ecosystem but can also be dangerous and bite you.
Given how long some of these problems have been going on, it may be a chance to learn why it's not possible to fix them within their current organizational structure.
"Oh wow, so you have discovered that after someone buys something, and you keep recommending the same thing, a large number of people go out of their way to buy other stuff to remove the recommendation. So we actually make more money recommending the last thing that they would want to buy?"
Who says they can't see the URL? A sufficiently motivated government would probably be able to create forged certificates, and mass interception isn't really out of the question. Especially with browsers homogenizing on fast ciphers like AES-GCM and ChaCha20-Poly1305, I bet it's much more economical than you would think.
Cert pinning or HPKP is one type of solution, but it's tricky to get right, especially for a large site like Wikipedia.
"In Turkey, Wikipedia articles about female genitals have been banned; Russia has censored articles about weed; in the UK, articles about German metal bands have been blocked..."
I find it impossible to replace a legacy system without first understanding the legacy system. It's easy to say "X is old" or "X sucks at Y" but keep in mind that applications have a strong survivor bias. Ask yourself, "what does X do that is awesome" or "how has X lived this long" and you'll find a lot of hidden scars and requirements.
Start documenting those requirements and create a plan to map those to technologies that you prefer. Understanding how to successfully migrate platforms is a learned skill, and it's going to become useful again when React is no longer en vogue.
Yes, that's what the college lecturer tells you. But in the real world you should not stay longer than necessary at a company that lied so blatantly to your face.
Always several sides to a story and during interviews everyone is on their best behavior. I have yet to be in an interview where I get the full and complete picture without any embellishments. It's only after you have been on the ground floor for a few months that you see things for what they are.
Also, OP's story is currently colored by how upset he is, so while I'm sure the interviewers made the job sound more appealing than it really was, I'm sure there is also some revisionist history happening.
It was a lie of omission, but I'm not sure if that makes it any better. I was hired to work on a new project, but it wasn't disclosed that the project hadn't been started yet and had already been delayed for years.
To some extent hiring is always aspirational: a reason a company wants to add more people is to get better at skills it thinks it lacks.
This applies particularly to aspirational projects that a company "wants" to start. It hits a chicken-and-egg problem in that it needs to hire skills for that project but can't start on the project until it hires those skills; many people don't want to be hired unless the project is already off the ground, and in the meantime existing projects still need to be maintained...
Certainly it would have been better for the interviewers to more accurately describe how aspirational their goals were and not imply that things were more off the ground than they were. It's a bad way to start a relationship, selling goals as reality. It's also a sadly common way for companies to start relationships.
If you want to try to contribute change, figure out what the roadblocks are to that aspirational project. See if you can find ways to apply your skills to it. Sometimes companies forget the bootstrap step in that chicken-and-egg problem and neglect to check whether they've added enough resources to start pitching into the new work. Goals keep slipping because the company is uncertain, and deliberately conservative, about whether it has the resources it needs to meet them.
Sometimes an aspirational project is looking for a leader to step up, someone with enough passion about the future to get the work started and get prototypes out the door. There's a possibility that can be you, if you want to apply for that pressure/responsibility. There's a possibility that in hiring you your managers hope it might be you.
As much as anything, there's a chance here to introspect and figure out if it can be you. Figure out if you can make that pressure/responsibility work for you, if you can make the work/life balance you need, if you can find a way to balance work's existing responsibilities on you (help maintain current systems) with potential new responsibilities (help lead new project). In some companies you might be very well rewarded if you can strike that balance, if you can lead the company onward to meet its goals while helping it survive with its existing needs. It's up to you to assess if the rewards are worth the risks. Your company might be one of the very many that aren't that loyal to its employees and you would be better off elsewhere, but that's something you probably need to judge for yourself.
I guess I'm rambling, but there are ways to make your situation work, if you are looking for them. It's as much on you to discover if you have that capability as it is for the company to solve its own paths to its goals. Starting that conversation with a lie of omission might be an indication of bad faith and disloyalty from the company immediately off the bat... or it might be a sign from the company that it really wants someone/anyone, and that someone could be you, to step up to bat and try to knock something/anything over the plate. It's rare that a company wants a new hire to strike out. Maybe if you can hit a home run you might be rewarded for it, and it might be worth swinging for the fences. Have the conversations you need to figure out if it's worth the pain of swinging for the fences versus playing it safe and bunting until you get the next job offer.
I'm a new parent and I need to track when my son eats/poops/pees etc. There are times when my hands aren't available. I built a tiny "app" that just records these things to a Google spreadsheet for me.
I also like the idea of "authenticating" via being present in my home. If my in laws are watching my son while I step out for a few minutes - they can talk to Alexa as easily as I can.
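The core of that kind of hands-free logger is tiny: timestamp an event name and append it as a row. A minimal sketch, with a local CSV buffer standing in for the Google Sheet (the event names and the `log_event` function are my own illustration, not OP's actual app; a real version might push the row to a sheet with something like the gspread library's `append_row`):

```python
import csv
import datetime
import io

EVENT_TYPES = {"eat", "poop", "pee", "sleep"}

def log_event(event, sink, now=None):
    """Append a timestamped row [ISO time, event type] to a CSV sink."""
    if event not in EVENT_TYPES:
        raise ValueError(f"unknown event: {event}")
    now = now or datetime.datetime.now()
    row = [now.isoformat(timespec="seconds"), event]
    csv.writer(sink).writerow(row)
    return row

# Usage: the voice assistant's intent handler would call this with the
# recognized event name.
buf = io.StringIO()
row = log_event("eat", buf, now=datetime.datetime(2024, 1, 2, 3, 4, 5))
# row -> ["2024-01-02T03:04:05", "eat"]
```

A spreadsheet as the backing store is a nice choice here: it's already a UI for browsing and charting, so the "app" only needs the append path.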
We use ML/deep learning for customer-to-product recommendations and product-to-product recommendations. For years we used only algorithms based on basic statistics, but we've found places where the machine-learned models outperform the simpler models.
So is this like the Amazon "feature" where I buy a coffee table on Amazon, then I get suggested to buy a coffee table EVERY DAY for 3 months. Literally row after row of coffee table? Because there must be a big pool of people who buy 1 coffee table buying more coffee tables immediately after?
It's a hard problem to determine the repeat-purchase cadence of a product. At one end of the bell curve you have items repurchased frequently, e.g. diapers or groceries, and at the other end you have items that are rarely repurchased.
I haven't looked at coffee tables specifically, but I know when I've looked at home products in the past I've been surprised at how frequently people will buy two large items, e.g. TVs or furniture, within a short period. That said, I agree there is room for improvement here. We're constantly running experiments to improve the customer experience, I have faith that in the limit things will improve. Again, we have no shortage of experimental power so if you'd like to join in the experimentation let me know :)
IMO it comes down to the fact that Amazon literally has my last 13 years of purchasing history, yet it seems that all they are doing is "you looked at x, let's show you y variations of that x."
My dream is that I go to Amazon.com and there are a ton of different unrelated products that people who purchase similar things as me buy. So if I only buy "buy it for life" kitchen equipment, it doesn't show me the most popular but crappy version of something, it shows me the one that I'd actually purchase.
Such an easy problem with suuuuuch a difficult solution though. Not to mention the obvious privacy concerns there.
Oh well, I know that they have good people working on the solution, and no chance I could do it better :p
This topic must be extremely interesting (good suggestions could increase sales by a LOT) and smart people must have been working on it for quite a while.
- What is the fundamental reason why this is a hard problem?
- What's up with the coffee tables specifically? Could you, for the hell of it, look into that category and tell us what the actual related products are? Let us (fail to) guess how these products are related, but don't leave us hanging :-)
Must be the same genius technology that leads Amazon to load up my Prime front page with fashion accessories when I've never had any history of searching for or buying such, and to recommend the same shows ("Mozart in the Jungle", "Transparent", "Catastrophe") on my Fire TV Stick for months even though I've never shown any interest in any such programming, even after manually "improving recommendations" by clicking "Not Interested".
It's amazing that the vaunted Amazon technology is unable to figure out an algorithm that would satisfy a user's deep desire: "please stop plastering Jeffrey Tambor's lipstick and mascara covered face on my startup screen, I've gotten tired of looking at it for the past year."
Advertising is trained against ROI, not against what will "seem right" to the user.
Maybe in-market* furniture shoppers tend to spend a lot of money. Maybe furniture is a very profitable category. Even if the system is smart enough to assume there's only a 20% chance that you're in the process of significant furniture purchases, furniture ads may still be a better use of the ad slot than a lower value item where you have an 80% chance of being in-market.
Then why show the same damn coffee table over and over? Maybe that's more likely to return your attention to your furniture purchasing? I have no idea. Most likely, they don't know exactly either. Most likely, that's just what the highest-scoring current algorithm decided.
*The duration of "in-market" varies by category. Some product categories have a long consideration phase. For example car shoppers tend to spend 2-3 months considering alternative brands and models before they spend a few weeks narrowing down on a specific car configuration and exact pricing.
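The expected-value argument above is just probability times margin per slot. With made-up numbers echoing the comment (the 20%/80% probabilities and the margins are purely illustrative):

```python
def expected_slot_value(p_in_market, margin):
    """Expected profit of an ad slot: P(user is in-market) x profit margin."""
    return p_in_market * margin

# A 20% chance of an in-market, high-margin furniture shopper...
furniture = expected_slot_value(0.20, 150.0)  # 30.0
# ...can beat an 80% chance of an in-market, low-margin gadget buyer.
gadget = expected_slot_value(0.80, 20.0)      # 16.0

assert furniture > gadget
```

Which is exactly why an ROI-trained system can keep showing you furniture that "seems wrong" to you: the slot is scored on expected revenue, not on how sensible the recommendation looks.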
Haha yes, I remember seeing washing machines on my landing page for months after I bought one from Amazon. I mean, how many of them could a person need?
Seriously though, I don't understand why it's so hard to take this effect into account. There should be a very strong negative correlation between a purchase in a given category and the probability of buying another item from that category in the near future, so even a simple ML algorithm should be able to pick this up easily. Anyone here who can explain why this is difficult?
The simple algorithm is to build a correlation matrix of purchases between all items in the store. Then, when given an item to generate recommendations for, you return the other items with the highest scores, with a "top sellers" correction for the items that are correlated with everything.
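A toy version of that algorithm, using raw co-occurrence counts over hypothetical baskets (a real system would normalize the scores and apply the "top sellers" correction mentioned above):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(baskets):
    """Count how often each pair of items appears in the same basket."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(item, baskets, top_n=3):
    """Rank other items by how often they co-occur with `item`."""
    scores = Counter()
    for (a, b), n in cooccurrence(baskets).items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

# Hypothetical purchase baskets.
baskets = [
    ["coffee table", "end table", "rug"],
    ["coffee table", "end table"],
    ["coffee table", "lamp"],
]
recs = recommend("coffee table", baskets)
# "end table" co-occurs most often, so it ranks first.
```

Note what this naive version gets wrong: it happily recommends items from the same category as the one just bought, which is exactly the coffee-table-after-coffee-table behavior being complained about. Suppressing that requires extra signals (purchase vs. view, category repurchase cadence), not just the matrix.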
I used to work for a company that implemented similar recommendation services. We approached this problem by modelling whether or not a category was likely to have recurring purchases.
Obviously the goal of ML in this would be that feeding it enough data about users who buy coffee tables would eventually teach it that you probably don't want another coffee table (because who buys two coffee tables in a row?), but might want to buy say... end tables or other living room furniture in a matching style to the coffee table you just bought.
Would the standard models allow for the fact that humans could, after buying a coffee table, choose to click on the coffee table in anticipation of then getting suggestions for similar furniture? Presumably the machine sees that the end goal of those continually clicking the same item is actually to arrive at similar items... but wouldn't it be an obvious optimisation for Amazon to set the ML up to already look deeper than the first page reached?
I have a similar thing with Amazon: I don't know how you're supposed to access the bestseller list for a product type. I just know that if you search for a product and follow related products, you eventually get a "#5 in ObscureProduct" tag, and that tag takes you to the list of the top 10 models of ObscureProduct available. That sort of learnt navigation must play havoc with a suggestion algo (but IMO would be very easy to fix with just a link, for any sufficiently specific item, to the "top 10 in this category").
My theory is that the recommendation engine is built for books. So if you buy a specific type of book, it recommends other books in a similar category. I guess they never got a chance to update it to reflect the fact that Amazon sells more than just books.
I'm late, but that is actually called dynamic remarketing. You look at a certain category of item and then see ads (on Amazon or off-site) for other items in that same category. If you actually bought the coffee table on a different device/browser/anywhere else, then you'll see those ads for a while, because they can't recognize that you already made the purchase.
It's more like you bought a coffee table and you get coffee beans in the recommendations. Also, your buddy who you are in the same group with gets a coffee table recommendation.
"For years we used only algorithms based on basic statistics but we've found places where the machine learned models out perform the simpler models."
This is the right way to approach it. Too many people are looking for "deep" as some sort of silver bullet for an ill-defined problem they have. If you can't validate against a simple model trained properly, you are already in trouble. Likewise if you don't understand how to evaluate your generalization issues and how/if a simpler model will improve them.
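Concretely, "validate against a simple model" just means holding out data and checking that the fancy model beats a trivial predictor on it. A minimal sketch with toy regression data (the data, the mean-predictor baseline, and the one-parameter "model" are all illustrative stand-ins):

```python
import random

def mse(preds, ys):
    """Mean squared error between predictions and targets."""
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

random.seed(0)
# Toy data: y = 2x + noise; last 20 points held out for evaluation.
data = [(x, 2 * x + random.gauss(0, 1)) for x in range(100)]
train, test = data[:80], data[80:]
test_ys = [y for _, y in test]

# Baseline: always predict the training mean.
mean_y = sum(y for _, y in train) / len(train)
baseline = mse([mean_y] * len(test), test_ys)

# "Model": least-squares slope through the origin, standing in for
# whatever deep model you were about to reach for.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
model = mse([slope * x for x, _ in test], test_ys)

# The learned model should beat the trivial baseline on held-out data;
# if it doesn't, "going deep" won't save you.
assert model < baseline
```

If the deep model can't clear a bar like this, the problem is usually the features, the data, or the evaluation, not the lack of layers.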
https://www.tarsnap.com/spiped.html