It's been almost a year since I started my first job after graduating with a Bachelor's degree in Computer Science.
My job has me working on e-commerce websites that use Salesforce Commerce Cloud, and I don't like using it, nor do I feel any desire to learn any sort of web development. Every day I wrap up work feeling like I'm not cut out to be a developer… it feels like I'm stagnating.
Towards the end of my degree I could tell that my interest in fields like Machine Learning, Data Science, AI, and software development was diminishing. I wanted something different, and at the time cybersecurity was the only field that really appealed to me, so I applied for a few jobs, but none of them wanted freshers. Since money was tight, I had to find a job, and I ended up becoming a web developer.
Right now I'm studying on the side for certifications like CompTIA Security+ (not necessarily for the certificate itself) in the hopes of landing a job in cybersec. I also have some Linux knowledge, but I doubt it's anywhere near the level required for a professional. I understand that cybersecurity is a broad field, so I'm still figuring out which job roles I should be looking at.
I don't know if I'm doing the right thing here; perhaps I should consider roles like DevOps as well.
Any advice is appreciated.
This sounds interesting. I'm wondering if you could go into any more detail about what you were trying to do with your opening, and what needs you are seeing out there around storage specifically. I have a small software company and I've been under the impression that storage is pretty much taken care of at all levels by the existing commodity services, but maybe I'm just talking to the wrong people or missing something important. Thanks.
I'm referring to BIG storage: private clouds, data lakes, etc. For example, at my primary customer we've grown the object storage footprint by 100 petabytes in three years. The rest of the global footprint across 110 sites is another 95PB. Commodity services don't scale to that, and global data transmission is typically custom tailored to the user requirements. Things like a first pass at the edge in 15 remote test sites, each crunching 100TB of raw data down to 10TB for transmission back to core, and that process happens on a clock. There are other binary distribution use cases too, like transmitting 50GB jobs from other continents back to core for analysis. It's all still custom. Then there's all the API back-end work to build out the customer-accessible storage APIs; numerous challenges there.
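In code terms, that edge first pass is roughly the following shape. This is just a minimal sketch with hypothetical names, paths, and endpoint, and it assumes an S3-compatible API at core; as I said, the real pipelines are custom-built:

    # Hypothetical sketch of the "first pass at the edge" pattern:
    # reduce raw site data locally, then ship the much smaller
    # artifact back to core object storage on a schedule.
    import gzip
    import shutil
    from pathlib import Path

    import boto3  # assumes an S3-compatible API at core

    RAW_DIR = Path("/data/raw")              # ~100TB of raw site output per cycle
    OUT_FILE = Path("/data/out/reduced.bin.gz")
    CORE_BUCKET = "core-ingest"              # hypothetical bucket name

    def reduce_site_data() -> None:
        # Stand-in for the real reduction step (filtering, aggregation,
        # downsampling) that takes ~100TB down to ~10TB before transfer.
        OUT_FILE.parent.mkdir(parents=True, exist_ok=True)
        with gzip.open(OUT_FILE, "wb") as out:
            for raw in sorted(RAW_DIR.glob("*.bin")):
                with raw.open("rb") as src:
                    shutil.copyfileobj(src, out)

    def ship_to_core() -> None:
        # Hypothetical core endpoint; upload_file handles multipart
        # uploads automatically for large objects.
        s3 = boto3.client("s3", endpoint_url="https://storage.core.example")
        s3.upload_file(str(OUT_FILE), CORE_BUCKET, OUT_FILE.name)

    if __name__ == "__main__":
        # In practice this runs "on a clock", e.g. from cron or a scheduler.
        reduce_site_data()
        ship_to_core()

The reduction step is the domain-specific part; most of the custom engineering actually goes into the scheduling, retries, and verification wrapped around the transfer.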
I'm trying to wrap my head around this - I've been stuck in the Mickey Mouse line-of-business world, where a company may have a few TB of transactional data in a decade - and I kind of want out into the real world. A few questions if you don't mind: what kind of customer needs this amount of storage, what kind of data is it, and are you mostly building on top of S3?