CSS is a real language, and you need deep technical knowledge to understand it. But plenty of software developers hate it and look down on it. It’s a good, if incomplete, tool for what it does. But I think it scares some of the gatekeepers who were drawn to software before the web.
It can’t be unit tested. It’s a language that only exists in a domain that stretches across multiple sizes, multiple devices and multiple renderers. There’s more than one way to do things. And some of the biggest challenges with CSS are human. It’s the paintbrush for the bike shed.
Shopping cart (up to the point of collecting payments)
Bouncing balls
Don’t worry about building something unique. If you’re building something with lots of examples, you’ll have something to refer to when you get stuck.
Find something you know. Something where you can write down 5 requirements now without research because it’s something you use or an itch you have. And then you can work back from those requirements to the features you need to build. That’s the heart of programming. Not writing for the computer, but translating from human to machine, breaking things down and formalising them into simple rules.
And that’s when you realise programming usually doesn’t involve maths. It’s about thinking logically. What are the mechanics of the feature?
It’s not: I want a list of tasks. It’s:
When I open my tasks, then I can add a new one or mark an existing one as complete.
When I type in a text box and hit Enter, then a new task is added to the list.
When I click the checkbox next to the task, then the task is removed from the list.
There’s an action and a reaction that the machine understands. There are choices of actions that the user can make.
Use what you learned in the tutorial to translate those actions into simple functions, and to translate the choices into a user interface, whether web, native, command line or API. Then look at it, and make it easier, faster, more secure, or add more features.
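Those When/Then rules translate almost directly into simple functions. Here’s a minimal sketch (the names `TaskList`, `AddTask` and `CompleteTask` are my own illustrative choices, not from any framework):

```csharp
using System;
using System.Collections.Generic;

class TaskList
{
    private readonly List<string> _tasks = new List<string>();

    public IReadOnlyList<string> Tasks => _tasks;

    // "When I type in a text box and hit Enter, then a new task is added."
    public void AddTask(string description) => _tasks.Add(description);

    // "When I click the checkbox next to the task, then it is removed."
    public void CompleteTask(string description) => _tasks.Remove(description);
}

class Program
{
    static void Main()
    {
        var list = new TaskList();
        list.AddTask("Write the blog post");
        list.AddTask("Fix the smoke alarm");
        list.CompleteTask("Write the blog post");
        Console.WriteLine(list.Tasks.Count); // 1
        Console.WriteLine(list.Tasks[0]);    // Fix the smoke alarm
    }
}
```

The user interface, whether web, native or command line, is then just a thin layer that calls these functions in response to the user’s actions.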
The goal here isn’t to learn a specific language, although it will help with that; it’s to think about how to take an idea, or a requirement, and translate it into something the computer will understand. I think this is the hardest part of the journey, but it’s the most important. I’d also recommend trying programming challenges such as Advent of Code or Project Euler to get practice in writing and thinking.
Good luck on the next, biggest, step of your programming journey.
Congratulations on your first day on your first software job.
Like many of your peers, you’re starting on support. Because reading other people’s code gives you a great feel for what’s nice to work with and what isn’t, and dealing with customers helps you understand what’s important and what isn’t. You will write better software knowing both these things.
But as this is day 1, we’re not going to expect you to rush in and fix everything. Take your time to look around and understand things. Your first bug fix is as much about learning the process as it is about fixing the problem. Because the process exists to help make sure this bug is fixed now and forever, and that this fix doesn’t break something else.
You wouldn’t fix a smoke alarm by removing the batteries, although that will disable the alert. Let’s find the root cause and fix that instead.
From experience:
Treat it like science: have a hypothesis and test it. Don’t just randomly change things.
Make notes on every hypothesis, test and discovery. One of them will be important but you likely won’t realise it at first.
And once you’ve fixed it, do a self-retro. What went well, what didn’t go so well, what do you wish you knew, what are you going to research to prepare for next time, what are you going to publicise about this fix for the next person to sit in your seat?
Well done on fixing this bug. There’ll be another along in a minute.
How will you write your next feature to make this easier next time? How will you write it so the next time is less frequent? How will you pay things forward to help the next developer, who may well be your future self?
The asynchronous pattern in C# is great for simplifying the boilerplate of dealing with threads. It makes things look simple, but as with all abstractions, there are leaks.
In most cases, when an async method calls another and there’s no further work after the call (e.g. a public method delegating directly to a private method), returning the Task directly is fine, and avoids the overhead of the await state machine for that method.
But never do this inside a using, or any other scoped construct, where the object required by the inner call needs that scope to remain open to complete successfully.
In the following example, the first method, without the await, returns a Task that escapes the using block at the end of the method, so by the time the query actually runs, the database connection has been disposed. You need the await in the second method to postpone disposal of the connection until the GetUserAsync call completes.
In this example it’s fairly obvious, but the same pattern can cause threads to unravel far from where this fray was introduced.
class UserDetailsRepository
{
    private readonly IDbConnectionFactory _dbConnectionFactory;

    public UserDetailsRepository(IDbConnectionFactory dbConnectionFactory)
    {
        _dbConnectionFactory = dbConnectionFactory;
    }

    // Broken: the Task escapes the using block, so the connection is
    // disposed before the query runs.
    public Task<UserDetailsResult> GetUserDetails(string userId, CancellationToken cancellationToken = default)
    {
        using (var dbConnection = _dbConnectionFactory.CreateConnection())
        {
            return GetUserAsync(userId, dbConnection, cancellationToken);
        }
    }

    // Correct: the await postpones disposal until the query completes.
    public async Task<UserDetailsResult> GetUserDetailsAsync(string userId, CancellationToken cancellationToken = default)
    {
        using (var dbConnection = _dbConnectionFactory.CreateConnection())
        {
            return await GetUserAsync(userId, dbConnection, cancellationToken);
        }
    }
}
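To see the leak without a real database, here’s a self-contained sketch where a `FakeConnection` stands in for the database connection; all of the names here are illustrative:

```csharp
using System;
using System.Threading.Tasks;

class FakeConnection : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;

    public async Task<string> QueryAsync()
    {
        await Task.Delay(10); // simulate I/O
        return Disposed ? "connection was closed!" : "ok";
    }
}

class Program
{
    // Broken: the Task escapes the using block, so Dispose runs first.
    public static Task<string> GetWithoutAwait()
    {
        using (var conn = new FakeConnection())
        {
            return conn.QueryAsync();
        }
    }

    // Correct: the await keeps the connection open until the query completes.
    public static async Task<string> GetWithAwait()
    {
        using (var conn = new FakeConnection())
        {
            return await conn.QueryAsync();
        }
    }

    static async Task Main()
    {
        Console.WriteLine(await GetWithoutAwait()); // connection was closed!
        Console.WriteLine(await GetWithAwait());    // ok
    }
}
```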
🚫 If the user doesn’t currently exist, return 404 – this informs the caller that nothing can be done with this resource until it is created or recreated.
🙈 If the user exists, but we want to protect against username enumeration, return 404 – this removes a route for malicious agents to identify actual users, perhaps prior to a password brute force. They may decide this endpoint is less likely to have the full protections afforded to the login endpoint. This endpoint should also avoid indirect enumeration, for example returning immediately for “user doesn’t exist” and delayed for “user exists but we’re pretending because security”.
🔒 If the user exists but the caller doesn’t have permission to see their appointments, return 403 – caller will have to login, or ask someone who has access.
Empty time
Given the selected user exists
🔐 If this user does not support appointments, return 404 – these resources can’t be found.
🗓 If this user does support appointments, but there are none, return an empty list.
Note: some APIs will return 204 No Content in this scenario. 204 should only be used for POST or PUT requests, to indicate the server action was a success and there’s no data to send back.
Empty space
Given the selected user exists And they have at least 1 valid appointment (ask the business what “valid” means)
📺 If the appointment has no location (because online conference links are saved elsewhere in the appointment body) then no location property should be returned
❔ If the appointment has no location (because it is unknown) then the location property should be returned with no data (the empty string)
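The decision rules above can be collapsed into one small method. Here’s a hedged sketch, with hypothetical `User` and `Appointment` types standing in for the real domain model:

```csharp
using System;
using System.Collections.Generic;

record Appointment(string Title, string? Location);

record User(bool Exists, bool CallerCanView, bool SupportsAppointments,
            List<Appointment> Appointments);

static class AppointmentsEndpoint
{
    // Returns an HTTP-style status code and an optional body.
    public static (int Status, object? Body) GetAppointments(User user)
    {
        // 🚫/🙈 User missing (or hidden to prevent enumeration): 404.
        if (!user.Exists) return (404, null);

        // 🔒 Caller lacks permission to see appointments: 403.
        if (!user.CallerCanView) return (403, null);

        // 🔐 User doesn't support appointments: these resources can't be found.
        if (!user.SupportsAppointments) return (404, null);

        // 🗓 200 with the list, which may be empty - not 204.
        return (200, user.Appointments);
    }
}

class Program
{
    static void Main()
    {
        var missing = new User(false, false, false, new List<Appointment>());
        Console.WriteLine(AppointmentsEndpoint.GetAppointments(missing).Status); // 404

        var noPermission = new User(true, false, true, new List<Appointment>());
        Console.WriteLine(AppointmentsEndpoint.GetAppointments(noPermission).Status); // 403

        var empty = new User(true, true, true, new List<Appointment>());
        Console.WriteLine(AppointmentsEndpoint.GetAppointments(empty).Status); // 200
    }
}
```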
When I was a tutor at university I remember one student who I only saw towards the end of the year. I think computer science was their additional course. They came in after apparently spending the best part of a year learning Java, and sat down to complete their assignment.
It didn’t take long before I was called over to help. Their code wouldn’t compile. A fairly standard console application, with some output. And no semicolons.
I was incredulous, and as a young eejit, I’m not sure how well I hid that. I couldn’t believe someone could have completed the lectures, read the books, and completed the previous 29 assignments without using semicolons.
How could they spend a year on a Java course and not learn anything?
Regrettably, I refused to help them and pointed them towards the obvious and clear error messages that they’d obviously been looking at before they called me over.
I wasn’t going to build it for them. I couldn’t teach them 1 year’s coding in 15 minutes.
And yet, they turned up. They asked for help instead of struggling on. Exactly the things I’d wish for in my new starts when I started leading teams and onboarding staff.
They knew the shape of the solution and they knew where their talents were. If I’d been a little more patient, I could have nudged them gently on. But I don’t know if that would have been enough.
If you are mentoring or leading developers, are you stepping in early enough? Are you praising effort and being vulnerable enough to ask for help? Can you see their strengths and weaknesses? Are you giving yourself enough time with them?
Are you being the senior that you wish you’d had when you were a junior?
There are many editors and extensions for working with connected markdown files. As I am working on multiple devices, it’s hard to find a single editor that works on all of them, and different editors are optimised for different things. In the spirit of UNIX, therefore, I wanted to write a suite of small programs (here coded as subcommands) that will allow the connection and management of markdown files via automated processes, such as GitHub Actions, so that the knowledge base can be updated from anywhere.
This tool was originally created to manage a zettelkasten based markdown powered git repository.
Principles
All outputs should use standard formats, mostly markdown, but some usages may need something more specific.
All subcommands should be independent so that users can pick and choose whatever suits them.
Modifications aligned to particular practices (e.g. GTD, Zettelkasten, bujo) should live in their own subcommands.
This tool should not impose structure on the knowledge base.
Repeated application of a subcommand should only modify the knowledge base at most once, unless external factors apply
External factors include time triggers, update of import files
CosmosDb, in common with other NoSQL databases, is schema-free. In other words, it doesn’t validate incoming data by default. This is a feature, not a bug. But it’s a dramatic change in thinking, akin to moving to a dynamically typed language from a statically typed one (and not, as it might first appear, moving from a strongly typed to a weakly typed one).
For those of us coming from a SQL or OO background, it’s tempting to use objects, possibly nested, to represent and validate the data, and hence encourage all the data within a collection to have the same structure (give or take some optional fields). This works, but it doesn’t provide all the benefits of moving away from a structured database. And it inherits from classic ORMs the migration problem when the objects and schema need to change. It can very easily lead to a fragile big-bang deployment.
For those of us used to dynamic languages, comfortable with Python’s duck typing or the optional-by-default sparse mapping required to use continuously-versioned JSON-based RESTful services, there’s an obvious alternative: be generous in what you accept.
If I have a smart home, packed with sensors, I could create a subset of core data with time, sensor identifier and a warning flag. So long as the website knows if that identifier is a smoke alarm or a thermostat, it can alert the user appropriately. But on top of that, the smoke alarm can store particle count, battery level, mains power status, a flag for test mode enabled, and the thermostat can have a temperature value, current programme state, boiler status, etc, both tied into the same stream.
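As a sketch of that shared core, here are two hypothetical device documents read back through one core type, ignoring the device-specific extras (the property names are illustrative, not a real schema):

```csharp
using System;
using System.Text.Json;

record CoreReading(DateTime Time, string SensorId, bool Warning);

class Program
{
    static void Main()
    {
        // A smoke alarm and a thermostat, each with device-specific extras.
        string smokeAlarm =
            "{\"time\":\"2024-01-01T10:00:00Z\",\"sensorId\":\"smoke-1\"," +
            "\"warning\":true,\"particleCount\":412,\"batteryLevel\":0.8}";
        string thermostat =
            "{\"time\":\"2024-01-01T10:00:00Z\",\"sensorId\":\"thermo-1\"," +
            "\"warning\":false,\"temperature\":19.5,\"boilerOn\":false}";

        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };

        // Both documents deserialize into the same core type;
        // the extra, device-specific fields are simply ignored.
        foreach (var doc in new[] { smokeAlarm, thermostat })
        {
            var core = JsonSerializer.Deserialize<CoreReading>(doc, options)!;
            Console.WriteLine($"{core.SensorId} warning={core.Warning}");
        }
    }
}
```

The website only needs the core fields to decide how to alert the user; richer consumers can deserialize the same documents into more specific types.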
Why would I want to do this?
Versioning
Have historic and current data from a device/user in one place, recorded accurately as how it was delivered (so that you can tweak the algorithm to fix that timedrift bug) rather than having to reformat all your historical data when you know only a small subset will ever be read again.
Data siblings
Take all the similar data together for unified analysis – such as multiple thermostat models with the same base properties but different configurations. This allows you to generate a temperature trend across devices, even as the sensors change or come from different manufacturers, and across anything that has a temperature sensor.
Co-location
If you’re making good use of CosmosDb partitions you may want to keep certain data within a partition to optimise queries. For example, a customer, all of their devices, and aggregated summaries of their activity. You can do this by partitioning on the customer id, and collecting the different types of data into one collection.
Conclusion
NoSQL is not 3NF, so throw out those textbooks and start thinking of data as more dynamic and freeform. You can still enforce structure if you want to, but think about whether you’re causing yourself pain further down the road.
I’ve got a collaborative post coming up on the talks themselves on my employer’s blog but as a speaker and tech enthusiast, I wanted to share a few thoughts on the bootcamp as a whole.
Firstly, I’d like to thank Gregor Suttie who organised the Glasgow chapter under the Glasgow Azure User Group banner.
It’s an impressive feat getting so many speakers on so many topics around the world. Each city is going to be limited in the talks they can offer, but I was impressed by the distances some of the speakers at the Glasgow event traveled.
I would have liked there to have been more of a global feel. I know the challenges of live video, especially interactive, but this is an event that would benefit from some of that (or indeed, more speakers participating in the online bootcamp via pre-recorded YouTube videos). I realize an event like Google I/O or Microsoft Build is different in focus, being company rather than community driven, but it felt like a set of parallel events rather than one, so it also feels as if some of the content is going to be lost, and there were a lot of interesting looking talks in other cities that turned up on the Twitter hashtags.
There’s obviously a lot to cover under the Azure umbrella, so it’s going to be hard to find talks that interest all the audience all the time, and it was hard to know where to pitch the content. I aimed for an overview for beginners which I think was the right CosmosDb pitch for the Glasgow audience, but I was helped by the serendipity of coming after an event sourcing talk so I could stand on the shoulders of that talk for some of my content.
I would maybe have liked to see more “virtual tracks” so that it was easier to track themes within the hashtags, whether it’s general themes like “data” or “serverless” or technology/tool focused like “CosmosDB”, “Azure DevOps” or “Office 365”, to help me connect with the other channels and see what content is most interesting to follow up. Although Twitter was a good basis for it, I think there’s scope to build a conversational overview on top, into which YouTube videos, Twitter content, GitHub links, blog posts, official documentation and slide content could be fed.
As a speaker, the biggest challenge was keeping my knowledge up to date with all the updates that are happening, and events like this do help. But as I’ve been on a project that’s Docker and SQL focused recently, it’s a lot of work on top of my day job to keep in touch with the latest updates to CosmosDb, especially as a few tickets on the project were picked up and moved to “In Progress” between submitting and delivering my talk, and a new C# SDK was released.