Definitions of what constitutes ‘data science’ are apt to differ according to the source you consult. IFoA President John Taylor has described the discipline as “a dynamic field that as soon as you try and pin down what it is today, it’s changed tomorrow”. This being the case, how does this apparently elusive definition reflect the actual job of data scientists? What goes into their daily roles and responsibilities, and how do data scientists differ from organisation to organisation?
First-hand accounts are the best way to gain insight into what any professional remit entails, and so the IFoA invited three data scientists who work in contrasting sectors to outline the basics of a typical working week. Their accounts reveal that, as well as being core members of multidisciplinary teams, data scientists have to be multidisciplinary individuals, adept at a range of data analysis, software and business skills.
Gousto: Irene Iriarte Carretero
Irene Iriarte Carretero is a data scientist at the UK’s largest recipe box company, Gousto*. Her work has focused on the development and implementation of different data science products, such as a menu recommendation engine and a forecasting algorithm that allows Gousto to predict the recipes that customers will order, and to ensure that the company minimises the food waste in its supply chain.
AM: I’m working from home today. It’s a great way to start the week, as I’m really able to focus on tasks that require more concentration. After a quick call with the team to finalise what we are working on this sprint, I do some research on how top companies are applying personalisation. There are some really interesting blog posts and papers – I summarise my thoughts so that I can share them later in the week.
PM: I spend the afternoon analysing how our customers are interacting with their recommendations – we are working on a new collaborative filtering method, and these findings will feed into how we end up implementing the algorithm.
AM: My calendar looks pretty clear today – I have time to focus on starting the implementation of the new algorithm. Luckily, the data we need is straightforward to obtain and is already pretty clean, so I can get straight to using it. Given that, as data scientists, we take ownership of the entire process, from ideation of the product to deployment, I need to ensure that my code is production-ready. I send my code to one of the Machine Learning Engineers in the team who gives me some suggestions to make the code more efficient. Once I make these changes, I check that everything is working as expected in our testing environments before pushing it to production.
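To make the idea concrete, here is a minimal sketch of the kind of item-based collaborative filtering a recipe recommender might use. The data shape (a customer-by-recipe matrix of order counts) and the cosine-similarity scoring are illustrative assumptions, not a description of Gousto’s actual algorithm:

```python
import numpy as np

def recommend(order_matrix, customer, k=2):
    """Item-based collaborative filtering sketch.

    order_matrix: customers x recipes array of order counts.
    Returns the indices of the top-k recipes the customer has not
    yet ordered, scored by similarity to recipes they have ordered.
    """
    # Cosine similarity between recipe columns.
    norms = np.linalg.norm(order_matrix, axis=0)
    norms[norms == 0] = 1.0                   # avoid division by zero
    unit = order_matrix / norms
    sim = unit.T @ unit                       # recipe x recipe similarity
    # Aggregate similarity to the customer's past orders.
    scores = sim @ order_matrix[customer]
    # Exclude recipes the customer has already ordered.
    scores[order_matrix[customer] > 0] = -np.inf
    return np.argsort(scores)[::-1][:k]
```

In production, the same logic would typically run against sparse matrices and be wrapped in the deployment and testing pipeline described above.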
AM-PM: A bit of a change of pace today. We are spending the day at an offsite location where we are going to have several workshops to brainstorm on the long-term vision for our menu page. Our team is cross-functional and includes members from the Design, Food, Software Engineering and Proposition teams, as well as Data Science, which ensures that we think about our ideas from different perspectives. As data scientists, I think it’s easy to get too caught up on improving a model’s accuracy, so I always find it useful to have opportunities like this where we can really think about how our products slot into the wider picture.
AM-PM: It’s really important that we understand the impact of our products, so today I am focusing on analysing the results of an experiment we ran on the website, in which 50 per cent of our customers saw a slightly different experience of how recommendations were presented to them. We make sure that, as well as internal algorithm metrics, we deep-dive into more commercial metrics that we can easily communicate with stakeholders across the business, and that are more useful to understand how our products impact our customers.
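Analysing such a 50/50 experiment often comes down to comparing conversion rates between the two groups. A minimal sketch using a standard two-proportion z-test; the counts in the usage note are made up, and this is not necessarily the analysis Gousto runs:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between control (A) and variant (B).

    conv_a, conv_b: number of converting customers in each group.
    n_a, n_b: number of customers in each group.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value
```

For example, `two_proportion_ztest(500, 10000, 560, 10000)` compares a 5.0 per cent control rate against a 5.6 per cent variant rate; the commercial metrics mentioned above would be tested in the same way.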
AM: After a much-needed coffee (it’s been a long week!), I have one-to-one meetings with the team and catch up with some outstanding emails.
PM: We have a team retrospective to discuss what has worked well and what improvements we can make to ensure we are working more effectively. Finally, I work on preparing a short presentation for our monthly Tech Showcase – it’s a great opportunity to share our work with the whole company over some drinks and nibbles.
More from our data science series:
- Actuaries ask 'what is data science?'
- What are the career opportunities in data science for actuaries?
*Based on volumes and revenues, using third-party data from Reward Insight.
Centre For Environmental Data Analysis: Graham Parton
As a Senior Environmental Data Scientist at the Centre for Environmental Data Analysis (CEDA), Graham Parton curates observational data from its atmospheric community and ensures they are accessible and future-proofed for CEDA’s users. He also scopes development of the content and structure of CEDA’s data cataloguing service, a data discovery tool and link to supporting materials. CEDA services are provided on behalf of the Natural Environment Research Council via the National Centre for Atmospheric Science and the National Centre for Earth Observation. CEDA is based within the RAL Space department of the Science and Technology Facilities Council.
AM: Start the week with a quick catch-up on the helpdesk where I can see our regular batch of users struggling to use the surface weather observations data from the National Met Agency. It’s brilliant data – it’s just that they need to cross reference the station metadata with the data itself before they can get on with their research. The open version of the data collection tool resolves most of those usability issues, which is helping new users to access these data… I’ll remind our Met Agency partner about this when we catch up about the upcoming new release of those data, as he’ll be pleased to see the rewards of his efforts there! The rest of the day I’ll get on with coding to improve our catalogue service. There are a few niggly bugs I reckon I can fix this afternoon.
AM: Focused on weekly ‘Data Management Plan’ (DMP)-related tasks. Checking our internal DMP tool, I see I’ve got a couple of new projects that have come in from the latest Research Council funding round for me to get in touch with. Will look at their project details and their outline DMPs to figure out what data they may want to archive, and then make contact later; but, for now, I’ve got some new sample files from another project to look at. Hopefully, these will be an improvement on the last ones they sent, where the internal metadata was, well, a bit sparse to say the least!
Thankfully, though, I can refer them to our help documentation to steer them in the right direction to resolve those issues. I just hope they can find their notes about the instrument’s deployment last year – without the instrument calibration details, the data’s reusability could be questionable.
PM: Still not managed to look over those new projects that came in yesterday, as one of my ingest scripts encountered issues with the storage system; I’ve had to spend most of the morning checking over the issues and getting the ingest restarted. But my diary is clear later this afternoon, so I should be able to fire off those introductory emails at last.
AM: Day of meetings. For starters, our developer group catch-up – usually tough for me, as I’m not a seasoned code developer, so some of the stuff is a bit difficult to follow. After that, a Google Hangout with my line manager to review the outstanding developer task lists for the data catalogue. Hopefully, with our new archive access database in place later this month, I’ll get the go-ahead to develop the catalogue service: this will ensure up-to-date access and licence information can be fed through from our new tool. Then we can finally stop having to record this information in more than one place.
After that I’ve a 16:30 meeting with a colleague to see if she can help review our 70+ data licences and classify them under our new scheme. That would really help users to filter our 6,500+ datasets to find ones they can use for work purposes (e.g. commercial use), rather than assuming everything is for academic use only. This classification work is quite exciting, though, as it’s a new approach that is getting lots of interest across the international research data community – and not just for environmental data, either! If it gets adopted more widely, users will be able to do something akin to Google image searches, which can be limited by permitted uses.
AM: Monthly group meeting. Hear about the wide range of work that our group is engaged with, from work on our cloud portals and the development of our high-performance data analysis system, through to involvement with international metadata standards.
PM: Crack on with data management tasks. Before that, though, I’ll spend a bit of time checking our Elasticsearch index of the archive’s 200 million files for occurrences of some site names used in filenames, to aid a project’s file naming scheme. It would be like looking for the tiniest of needles in the mother of all haystacks if it wasn’t for our Big Data tools to scan and index everything – but that’s what’s needed these days to manage such vast and diverse archives.
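As a rough illustration, a substring search across indexed file paths can be expressed as an Elasticsearch wildcard query. The field name (`path`) and the site names here are assumptions for the sketch, not CEDA’s actual index schema:

```python
def filename_query(site_names):
    """Build an Elasticsearch query body that matches any of the
    given site names as substrings of the indexed file path.

    The 'path' field name is an assumption; a real index may use a
    different field and analyser.
    """
    return {
        "query": {
            "bool": {
                # Match a document if any one wildcard clause matches.
                "should": [
                    {"wildcard": {"path": {"value": f"*{name.lower()}*"}}}
                    for name in site_names
                ],
                "minimum_should_match": 1,
            }
        },
        "size": 0,  # only the hit count is needed, not the documents
    }
```

The resulting body would be submitted via the Elasticsearch search API; wildcard queries with leading `*` are expensive, which is why a pre-built index over all 200 million files matters here.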
AM-PM: Today was pretty full-on. One minute I was setting up new data extractions to pull forecast model data into the archive (a quick task, but one that needs regular checks, as not all extractions run smoothly), the next covering the helpdesk, helping users to find relevant data and sorting out their account issues. Then a brief Google Hangout with a developer to check on the adjustments to the new access control system, before doing a handful of data catalogue record reviews and DOI (Digital Object Identifier) minting, and getting a new dataset finally published after the last few weeks spent persuading the provider to actually follow the metadata guidelines. Thankfully I’ve had a quiet last hour of this week to catch up on my colleague’s blog post about data citation. It’s important stuff, essential for researchers to follow – otherwise, how can we rely on the science if we can’t find the underlying data that supports the results?
More from our data science series:
- Q&A with IFoA President John Taylor
- Q&A with IFoA Member Lisa Balboa
- Q&A with IFoA Member Patrick Lee
Bloom & Wild: Dave Marshall
Dave Marshall has been Lead Data Scientist at online florist Bloom & Wild for more than three years. The focus of his work is to use data to drive decision making in the company, and automate this wherever possible. This feeds into the main company objective of maximising the lifetime value of customers.
Make coffee first thing to get the day going, then look into an experimental project we are running with a new product recommendation system, which personalises search to recommend the perfect bouquet for each customer. I discuss potential improvements to the experiment with my data analysis colleague, and talk through whether to roll it out to all customers with another colleague, Bloom & Wild’s Retention Product Manager. We decide to keep A/B testing, where 50 per cent of our customers see the recommended products and the other 50 per cent do not, so that we can quantify the impact.
I catch up with my data analysis colleague on our priorities for the week. He’s working on an exciting project to improve our product metadata – the properties of the bouquets and plants that we sell. This will allow us to automatically calculate a similarity score for different items which could feed future improvements for our product recommendations. He has worked closely with backend developers to get data in place. We agree the project is nearly complete, and that he’ll present the work at next week’s company-wide meeting. We think we’ll get lots of ideas for other areas of the business where this new dataset can also be implemented.
We have a weekly company-wide meeting at midday. In 20 minutes we hear how we are performing against our key metrics, and also get an update on any career opportunities currently open across the company. It’s traditional that the presenter sprinkles in fun facts on a theme. After this we head off for lunch.
Busy day. Kicked off a project to improve our website speed with two frontend developers and our VP of Product. A faster-loading, more responsive web shop should make customers (even) happier and purchases more likely. My role is to crunch site performance data from our UK & Ireland, German and French websites to identify where we’ve negatively impacted the experience, or whether certain pages suffer on specific web browsers. I have also created a dashboard for developers to check they are making progress!
AM: Headphones on, and time to work on my own for a bit. I write some code to help our operations team better understand our stock position at any given time, and to predict which bouquets will be left unsold at the end of the day. Dealing in perishable goods means that we need to forecast very accurately to avoid wastage. Meanwhile, this week it’s Bloom & Wild’s 6th anniversary, so at 18:00 the whole company heads off to eat some birthday cake and do a flower-arranging workshop!
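A toy sketch of that kind of end-of-day wastage estimate, using a simple moving-average demand forecast. The approach and numbers are illustrative only, not Bloom & Wild’s actual model:

```python
def predicted_unsold(stock, recent_daily_sales):
    """Naive end-of-day wastage estimate for a perishable product.

    stock: units currently on hand.
    recent_daily_sales: list of sales counts from recent days, used
    as a moving-average forecast of today's demand.
    Returns the expected number of unsold units (floored at zero).
    """
    forecast = sum(recent_daily_sales) / len(recent_daily_sales)
    return max(0.0, stock - forecast)
```

For example, with 50 bouquets in stock and recent daily sales of 30, 34 and 32, the forecast demand is 32 and the predicted wastage is 18; a production forecaster would also account for seasonality, promotions and day-of-week effects.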